Docker Fixes Critical Ask Gordon AI Flaw Allowing Code Execution via Image Metadata
好的,我现在需要帮用户总结这篇文章的内容,控制在100字以内。首先,我得通读整篇文章,抓住主要信息。 文章讲的是一个安全漏洞,影响Docker Desktop和CLI中的AI助手Ask Gordon。这个漏洞被命名为DockerDash,已经修复。攻击者可以利用恶意元数据标签在Docker镜像中植入指令,导致代码执行和数据泄露。问题出在AI将未经验证的元数据当作可执行命令,绕过安全边界。修复版本是4.50.0。 接下来,我需要把这些要点浓缩到100字以内。重点包括:漏洞名称、影响范围、攻击方式、后果、问题根源和修复措施。 可能的结构是:描述漏洞及其影响,攻击手段,后果,问题所在和修复版本。 最后检查字数,确保不超过限制,并且表达清晰。 </think> Docker Desktop和CLI中的AI助手Ask Gordon存在严重漏洞(DockerDash),攻击者可通过恶意元数据标签注入指令,导致代码执行或数据泄露。该漏洞利用AI对未经验证元数据的处理机制,在MCP协议支持下绕过安全边界。已通过4.50.0版本修复。 2026-2-3 16:41:0 Author: thehackernews.com(查看原文) 阅读量:0 收藏

Artificial Intelligence / Vulnerability

Cybersecurity researchers have disclosed details of a now-patched security flaw impacting Ask Gordon, an artificial intelligence (AI) assistant built into Docker Desktop and the Docker Command-Line Interface (CLI), that could be exploited to execute code and exfiltrate sensitive data.

The critical vulnerability has been codenamed DockerDash by cybersecurity company Noma Labs. It was addressed by Docker with the release of version 4.50.0 in November 2025.

"In DockerDash, a single malicious metadata label in a Docker image can be used to compromise your Docker environment through a simple three-stage attack: Gordon AI reads and interprets the malicious instruction, forwards it to the MCP [Model Context Protocol] Gateway, which then executes it through MCP tools," Sasi Levi, security research lead at Noma, said in a report shared with The Hacker News.

"Every stage happens with zero validation, taking advantage of current agents and MCP Gateway architecture."

Successful exploitation of the vulnerability could result in critical-impact remote code execution for cloud and CLI systems, or high-impact data exfiltration for desktop applications.

The problem, Noma Security said, stems from the fact that the AI assistant treats unverified metadata as executable commands, allowing it to propagate through different layers sans any validation, allowing an attacker to sidestep security boundaries. The result is that a simple AI query opens the door for tool execution.

With MCP acting as a connective tissue between a large language model (LLM) and the local environment, the issue is a failure of contextual trust. The problem has been characterized as a case of Meta-Context Injection.

"MCP Gateway cannot distinguish between informational metadata (like a standard Docker LABEL) and a pre-authorized, runnable internal instruction," Levi said. "By embedding malicious instructions in these metadata fields, an attacker can hijack the AI's reasoning process."

In a hypothetical attack scenario, a threat actor can exploit a critical trust boundary violation in how Ask Gordon parses container metadata. To accomplish this, the attacker crafts a malicious Docker image with embedded instructions in Dockerfile LABEL fields. 

While the metadata fields may seem innocuous, they become vectors for injection when processed by Ask Gordon AI. The code execution attack chain is as follows -

  • The attacker publishes a Docker image containing weaponized LABEL instructions in the Dockerfile
  • When a victim queries Ask Gordon AI about the image, Gordon reads the image metadata, including all LABEL fields, taking advantage of Ask Gordon's inability to differentiate between legitimate metadata descriptions and embedded malicious instructions
  • Ask Gordon to forward the parsed instructions to the MCP gateway, a middleware layer that sits between AI agents and MCP servers.
  • MCP Gateway interprets it as a standard request from a trusted source and invokes the specified MCP tools without any additional validation
  • MCP tool executes the command with the victim's Docker privileges, achieving code execution

The data exfiltration vulnerability weaponizes the same prompt injection flaw but takes aim at Ask Gordon's Docker Desktop implementation to capture sensitive internal data about the victim's environment using MCP tools by taking advantage of the assistant's read-only permissions.

The gathered information can include details about installed tools, container details, Docker configuration, mounted directories, and network topology.

It's worth noting that Ask Gordon version 4.50.0 also resolves a prompt injection vulnerability discovered by Pillar Security that could have allowed attackers to hijack the assistant and exfiltrate sensitive data by tampering with the Docker Hub repository metadata with malicious instructions.

"The DockerDash vulnerability underscores your need to treat AI Supply Chain Risk as a current core threat," Levi said. "It proves that your trusted input sources can be used to hide malicious payloads that easily manipulate AI’s execution path. Mitigating this new class of attacks requires implementing zero-trust validation on all contextual data provided to the AI model."

Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


文章来源: https://thehackernews.com/2026/02/docker-fixes-critical-ask-gordon-ai.html
如有侵权请联系:admin#unsafe.sh