Untrusted repositories turn Claude code into an attack vector
嗯,用户让我总结这篇文章,控制在100字以内,而且不需要特定的开头。首先,我得通读文章,抓住主要信息。 文章讲的是Claude Code存在漏洞,可能导致远程代码执行和API密钥被盗。Check Point Research发现了这些漏洞,利用Hooks、MCP服务器和环境变量来运行恶意命令。用户只要克隆或打开不受信任的仓库就会触发攻击。 还有提到CVE编号和修复措施,比如Anthropic加强了信任提示和限制API调用。这些都是关键点。 接下来,我需要把这些信息浓缩到100字以内。要确保包含主要威胁、漏洞利用方式、影响以及修复措施。 可能的结构是:Claude Code漏洞导致代码执行和API密钥盗窃,通过恶意配置文件触发,影响企业云环境,Anthropic已修复。 检查一下字数,确保不超过限制。这样用户就能快速了解文章的核心内容了。 </think> Anthropic的Claude Code AI工具存在漏洞,允许攻击者通过恶意仓库配置文件远程执行代码并窃取API密钥。Check Point Research发现这些漏洞可利用Hooks、MCP服务器和环境变量,在用户克隆或打开不受信任项目时触发攻击。Anthropic已修复这些问题以提升安全性。 2026-2-25 21:39:43 Author: securityaffairs.com(查看原文) 阅读量:8 收藏

Untrusted repositories turn Claude code into an attack vector

Pierluigi Paganini February 25, 2026

Flaws in Anthropic’s Claude Code could allow remote code execution and theft of API keys when users open untrusted repositories.

Check Point Research team found multiple vulnerabilities in Anthropic’s Claude Code AI coding assistant that could lead to remote code execution and API key theft. The vulnerabilities abuse features such as Hooks, MCP servers, and environment variables to run arbitrary shell commands and exfiltrate Anthropic API credentials when users clone and open untrusted repositories.

“Critical vulnerabilities, CVE-2025-59536 and CVE-2026-21852, in Anthropic’s Claude Code enabled remote code execution and API key theft through malicious repository-level configuration files, triggered simply by cloning and opening an untrusted project.” reads the report published by Check Point Research.

“Built-in mechanisms—including Hooks, MCP integrations, and environment variables—could be abused to bypass trust controls, execute hidden shell commands, and redirect authenticated API traffic before user consent”

Researchers found that Claude Code’s project-level configuration files can act as an execution layer, allowing the attackers to abuse a single malicious repository as an attack vector. Simply cloning and opening a crafted repo could trigger hidden commands, bypass consent safeguards, steal Anthropic API keys, and pivot from a developer’s workstation into shared enterprise cloud environments, without visible warning.

The risks include silent command execution via abused Hooks, consent bypass in the Model Context Protocol (CVE-2025-59536), and API key exfiltration before trust confirmation (CVE-2026-21852), potentially exposing broader AI-driven workflows.

Anthropic’s API Workspaces feature lets multiple API keys share access to cloud-stored project files. Since files belong to the entire workspace and not just one API key, stealing a single key could let attackers access, change, or delete shared data, upload harmful content, and create unexpected charges. This behavior puts the whole team at risk, not just one developer.

The flaws highlight a new AI supply chain threat: repository configuration files now act as execution logic, so simply opening an untrusted project can trigger abuse. Anthropic addressed the issues by tightening trust prompts, blocking external tool execution, and restricting API calls until user approval.

“AI-powered coding tools are rapidly becoming part of enterprise development workflows. Their productivity benefits are significant, but so is the need to reassess traditional security assumptions.

Configuration files are no longer passive settings. They can influence execution, networking, and permissions.” concludes the report. “As AI integration deepens, security controls must evolve to match the new trust boundaries.”

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, Claude)




文章来源: https://securityaffairs.com/188508/security/untrusted-repositories-turn-claude-code-into-an-attack-vector.html
如有侵权请联系:admin#unsafe.sh