Claude Code Source Leaked via npm Packaging Error, Anthropic Confirms
2026-04-01 06:12 | Source: thehackernews.com

Data Breach / Artificial Intelligence

Anthropic on Tuesday confirmed that internal code for its popular artificial intelligence (AI) coding assistant, Claude Code, had been inadvertently released due to a human error.

"No sensitive customer data or credentials were involved or exposed," an Anthropic spokesperson said in a statement shared with CNBC News. "This was a release packaging issue caused by human error, not a security breach. We’re rolling out measures to prevent this from happening again."

The discovery came after the AI upstart released version 2.1.88 of the Claude Code npm package, with users spotting that it contained a source map file that could be used to access Claude Code's source code – comprising nearly 2,000 TypeScript files and more than 512,000 lines of code. The version is no longer available for download from npm.
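To see why a stray source map is so damaging, consider how Source Map v3 files work: they are plain JSON, and when a bundler is configured to embed sources, the optional `sourcesContent` array carries the complete original files alongside the compiled output. The sketch below is illustrative only (the file paths and contents are invented, not Anthropic's), but it shows how trivially the embedded sources can be recovered:

```javascript
// A minimal, hypothetical Source Map v3 object. Real maps shipped in an
// npm package would be loaded with JSON.parse(fs.readFileSync(...)).
const sourceMap = {
  version: 3,
  file: "cli.js",
  sources: ["src/tools/bash.ts", "src/query/engine.ts"], // invented paths
  sourcesContent: [
    "export async function runBash(cmd) { /* ... */ }",
    "export class QueryEngine { /* ... */ }",
  ],
};

// Recover every original file the map embeds. If the bundler omitted
// sourcesContent, there is nothing to extract.
function extractSources(map) {
  if (!Array.isArray(map.sourcesContent)) return [];
  return map.sources.map((path, i) => ({
    path,
    content: map.sourcesContent[i] ?? null,
  }));
}

const recovered = extractSources(sourceMap);
```

Each entry in `recovered` pairs an original file path with its full source text, which is exactly what made the leaked `.map` file equivalent to shipping the TypeScript codebase itself.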

Security researcher Chaofan Shou was the first to publicly flag it on X, stating "Claude code source code has been leaked via a map file in their npm registry!" The X post has since amassed more than 28.8 million views. The leaked codebase remains accessible via a public GitHub repository, where it has surpassed 84,000 stars and 82,000 forks.

A source code leak of this kind is significant, as it gives software developers and Anthropic's competitors a blueprint for how the popular coding tool works. Users who have dug into the code have published details of its self-healing memory architecture to overcome the model's fixed context window constraints, as well as other internal components.

These include a tools system to facilitate various capabilities like file read or bash execution, a query engine to handle LLM API calls and orchestration, multi-agent orchestration to spawn "sub-agents" or swarms to carry out complex tasks, and a bidirectional communication layer that connects IDE extensions to Claude Code CLI.

The leak has also shed light on a feature called KAIROS that allows Claude Code to operate as a persistent, background agent that can periodically fix errors or run tasks on its own without waiting for human input, and even send push notifications to users. Complementing this proactive mode is a new "dream" mode that will allow Claude to continuously think in the background to develop new ideas and iterate on existing ones.


Perhaps the most intriguing detail is the tool's Undercover Mode for making "stealth" contributions to open-source repositories. "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover," reads the system prompt. 

Another fascinating finding involves Anthropic's attempts to covertly fight model distillation attacks. The system has controls in place that inject fake tool definitions into API requests to poison training data if competitors attempt to scrape Claude Code's outputs.
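The mechanics of such a data-poisoning defense can be sketched in a few lines. The code below is not Anthropic's implementation; the tool names, structure, and interleaving strategy are all invented to illustrate the general idea of mixing fake tool definitions into a request payload so that scraped transcripts teach a distilled model nonexistent APIs:

```javascript
// Genuine tool schemas the client actually supports.
const realTools = [
  { name: "read_file", description: "Read a file from disk" },
  { name: "bash", description: "Execute a shell command" },
];

// Fabricated schemas that no real tool backs. A scraper cannot easily
// tell them apart from the genuine entries.
const decoyTools = [
  { name: "quantum_cache_sync", description: "Plausible-sounding decoy" },
  { name: "hyperthread_gc", description: "Plausible-sounding decoy" },
];

// Interleave decoys among the real tools before the list is serialized
// into the API request body.
function injectDecoys(tools, decoys) {
  const out = [];
  tools.forEach((tool, i) => {
    out.push(tool);
    if (decoys[i]) out.push(decoys[i]);
  });
  return out;
}

const payloadTools = injectDecoys(realTools, decoyTools);
```

A competitor training on scraped outputs would then learn to emit calls to tools that do not exist, degrading the distilled model's usefulness.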

Typosquat npm Packages Pushed to Registry

With Claude Code's internals now laid bare, the development risks providing bad actors with ammunition to bypass guardrails and trick the system into performing unintended actions, such as running malicious commands or exfiltrating data.

"Instead of brute-forcing jailbreaks and prompt injections, attackers can now study and fuzz exactly how data flows through Claude Code's four-stage context management pipeline and craft payloads designed to survive compaction, effectively persisting a backdoor across an arbitrarily long session," AI security company Straiker said.

The more pressing concern is the fallout from the Axios supply chain attack, as users who installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC may have pulled with it a trojanized version of the HTTP client that contains a cross-platform remote access trojan. Users are advised to immediately downgrade to a safe version and rotate all secrets.

What's more, attackers are already capitalizing on the leak to typosquat internal npm package names in an attempt to target those who may be trying to compile the leaked Claude Code source code and stage dependency confusion attacks. The names of the packages, all published by a user named "pacifier136," are listed below -

  • audio-capture-napi
  • color-diff-napi
  • image-processor-napi
  • modifiers-napi
  • url-handler-napi

"Right now they're empty stubs (`module.exports = {}`), but that's how these attacks work – squat the name, wait for downloads, then push a malicious update that hits everyone who installed it," security researcher Clément Dumas said in a post on X.
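Developers attempting to build the leaked code locally can defend against this with a simple pre-install check. The sketch below (a minimal example, not an official tool) scans a parsed `package.json` for the five typosquatted names reported above:

```javascript
// Names of the typosquatted packages published by "pacifier136",
// per the report above.
const TYPOSQUATTED = new Set([
  "audio-capture-napi",
  "color-diff-napi",
  "image-processor-napi",
  "modifiers-napi",
  "url-handler-napi",
]);

// pkg is a parsed package.json object; returns any dependency names
// that match the known-bad list so the install can be aborted.
function findSuspectDeps(pkg) {
  const deps = {
    ...(pkg.dependencies || {}),
    ...(pkg.devDependencies || {}),
  };
  return Object.keys(deps).filter((name) => TYPOSQUATTED.has(name));
}

// Example manifest containing one typosquatted dependency.
const example = {
  dependencies: { "color-diff-napi": "^1.0.0", react: "^18.0.0" },
};
const suspects = findSuspectDeps(example);
```

Running such a check before `npm install`, or pinning internal packages to a private registry scope, blunts the dependency-confusion vector the attackers are banking on.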

The incident is the second major blunder for Anthropic within a week. Details about the company's upcoming AI model, along with other internal data, were left accessible via the company's content management system (CMS) last week. Anthropic subsequently acknowledged it has been testing the model with early access customers, stating it's the "most capable we've built to date," per Fortune.



Source: https://thehackernews.com/2026/04/claude-code-tleaked-via-npm-packaging.html