Anthropic Report Shows Bad Actors Abusing Claude in Attacks
Cybercriminals abused Anthropic's Claude large language model to carry out large-scale data extortion attacks, automating the attack process and analyzing stolen data to set ransom amounts. Claude was also used for scams and to develop ransomware. Anthropic has taken steps to counter these threats. 2025-09-16 12:30:07 | Author: securityboulevard.com

Bad actors over the summer weaponized Anthropic’s Claude large language model (LLM) in a large-scale data extortion scheme that illustrated how threat groups are leveraging AI agents in their operations.

According to the AI company, the unnamed cybercriminal used the Claude Code development tool to automate essentially every part of the attacks, from reconnaissance to harvesting victims’ credentials to penetrating networks.

“Claude was allowed to make both tactical and strategic decisions, such as deciding which data to exfiltrate, and how to craft psychologically targeted extortion demands,” Anthropic wrote in a recent report. “Claude analyzed the exfiltrated financial data to determine appropriate ransom amounts and generated visually alarming ransom notes that were displayed on victim machines.”


Anthropic outlined the extortion scheme and other instances of bad actors abusing Claude in cybercrime campaigns in its Threat Intelligence Report.

Evolving AI Threats

Threat actors over the past several years have been as quick as the business world to jump onto the generative AI bandwagon, developing ways to compromise LLMs to steal data or to run their own nefarious operations. IT vendors and cybersecurity companies are quickly adding AI capabilities to their security tools, but threat groups are doing the same.

AI enables hackers to create malware more quickly and write more convincing phishing emails, and it lowers the bar for less-skilled bad actors.

The use of Claude Code and AI agents in the data extortion case shows how the cybercriminal world continues to evolve its efforts.

“This represents an evolution in AI-assisted cybercrime,” Anthropic wrote. “Agentic AI tools are now being used to provide both technical advice and active operational support for attacks that would otherwise have required a team of operators. This makes defense and enforcement increasingly difficult, since these tools can adapt to defensive measures, like malware detection systems, in real time.”

The company wrote that it expects “attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime.”

Healthcare, Government Agencies Targeted

In the extortion campaign that ran in July, the threat actors targeted at least 17 organizations in such sectors as healthcare, emergency services, and government, as well as religious institutions, to steal data and then threaten to expose it if a ransom wasn’t paid.

Some of the ransom demands were more than $500,000.

Once Anthropic detected the operation, the company banned the accounts involved, developed an automated screening tool – a tailored classifier – and created a detection method for discovering similar activities. It also shared technical indicators about the attack with authorities.

North Korean IT Worker Scams

The data extortion incident was one of three cases that Anthropic used to illustrate how bad actors are abusing Claude in their campaigns. The company also found that operatives were using the LLM in North Korea’s expanding IT worker scams that targeted Fortune 500 companies.

North Korean intelligence agencies for several years have been running elaborate schemes to get their agents hired as remote IT workers by unsuspecting companies, a ruse designed to steal data and bring money to the regime to help it evade international sanctions and fund its weapons programs.

Before LLMs, North Korea had to spend years training its agents before they could take on remote IT work, a slow process that constrained the scale of the scheme.

“But AI has eliminated this constraint,” Anthropic wrote. “Operators who cannot otherwise write basic code or communicate professionally in English are now able to pass technical interviews at reputable technology companies and then maintain their positions. This represents a fundamentally new phase for these employment scams.”

The company banned the accounts involved, enhanced the tools that collect, store, and correlate the known indicators of the scams, and shared the information with authorities.

Multiple Ransomware Variants

The third case again showed how AI enables low-skilled bad actors to run increasingly complex campaigns. A hacker used Claude not only to develop multiple ransomware variants but also to market and distribute them. Each variant featured advanced evasion capabilities, encryption, and anti-recovery mechanisms, according to Anthropic.

The ransomware packages were sold on dark web forums for $400 to $1,200.

“This actor appears to have been dependent on AI to develop functional malware,” Anthropic wrote. “Without Claude’s assistance, they could not implement or troubleshoot core malware components, like encryption algorithms, anti-analysis techniques, or Windows internals manipulation.”

As in the other cases, once the company detected the abuse, it banned the associated account, alerted its partners, and created new methods to detect malware upload, modification, and generation.

Source: https://securityboulevard.com/2025/09/anthropic-report-shows-bad-actors-abusing-claude-in-attacks/