Google sounds alarm on self-modifying AI malware


Pierluigi Paganini November 06, 2025

Google warns malware now uses AI to mutate, adapt, and collect data during execution, boosting evasion and persistence.

Google’s Threat Intelligence Group (GTIG) warns of a new generation of malware that uses AI during execution to mutate, adapt, and collect data in real time, helping it evade detection more effectively.

Cybercriminals increasingly use AI to build malware, plan attacks, and craft phishing lures. Recent research shows AI-driven ransomware like PromptLock can adapt during execution.

GTIG reports a new phase of AI abuse: attackers now deploy AI-powered malware that adapts behavior during execution.

“For the first time, GTIG has identified malware families, such as PROMPTFLUX and PROMPTSTEAL, that use Large Language Models (LLMs) during execution. These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware,” reads the report published by Google. “While still nascent, this represents a significant step toward more autonomous and adaptive malware.”

In 2025, Google identified the first malware using AI mid-execution to change its behavior dynamically. While current examples are mostly experimental, they signal a shift toward AI-integrated cyberattacks. Attackers are moving past using AI merely for support or coding help, marking the start of a trend likely to grow in future intrusion campaigns.

Below is the list of malware with novel AI capabilities GTIG detected in 2025:

| Malware | Function | Description | Status |
| --- | --- | --- | --- |
| FRUITSHELL | Reverse Shell | Publicly available reverse shell written in PowerShell that establishes a remote connection to a configured command-and-control server and allows a threat actor to execute arbitrary commands on a compromised system. Notably, this code family contains hard-coded prompts meant to bypass detection or analysis by LLM-powered security systems. | Observed in operations |
| PROMPTFLUX | Dropper | Dropper written in VBScript that decodes and executes an embedded decoy installer to mask its activity. Its primary capability is regeneration, which it achieves by using the Google Gemini API. It prompts the LLM to rewrite its own source code, saving the new, obfuscated version to the Startup folder to establish persistence. PROMPTFLUX also attempts to spread by copying itself to removable drives and mapped network shares. | Experimental |
| PROMPTLOCK | Ransomware | Cross-platform ransomware written in Go, identified as a proof of concept. It leverages an LLM to dynamically generate and execute malicious Lua scripts at runtime. Its capabilities include filesystem reconnaissance, data exfiltration, and file encryption on both Windows and Linux systems. | Experimental |
| PROMPTSTEAL | Data Miner | Data miner written in Python and packaged with PyInstaller. It contains a compiled script that uses the Hugging Face API to query the LLM Qwen2.5-Coder-32B-Instruct to generate one-line Windows commands. Prompts used to generate the commands indicate that it aims to collect system information and documents in specific folders. PROMPTSTEAL then executes the commands and sends the collected data to an adversary-controlled server. | Observed in operations |
| QUIETVAULT | Credential Stealer | Credential stealer written in JavaScript that targets GitHub and NPM tokens. Captured credentials are exfiltrated via creation of a publicly accessible GitHub repository. In addition to these tokens, QUIETVAULT leverages an AI prompt and on-host installed AI CLI tools to search for other potential secrets on the infected system and exfiltrate these files to GitHub as well. | Observed in operations |

Table 1: Overview of malware with novel AI capabilities GTIG detected in 2025
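Stripped of their payloads, the families above share one loop: build a prompt, send it to a hosted model, and treat the reply as code or commands. A defanged sketch of that pattern follows; the model call is stubbed with a fixed string, and the returned "command" is only handed to a callback rather than executed:

```python
def llm_in_the_loop(query_llm, act=print):
    """Ask a model for a shell command and hand the reply to `act`.
    Real samples such as PROMPTSTEAL execute the reply blindly;
    here the default action is just printing it."""
    prompt = ("Output one Windows shell command that lists installed "
              "programs. Reply with the command only, no commentary.")
    command = query_llm(prompt).strip()
    act(command)

# The lambda stands in for a remote LLM API call (e.g., a Hugging Face
# inference request); no network traffic or execution happens here.
llm_in_the_loop(lambda prompt: "wmic product get name\n")
```

The interesting security property is in the last step: the operator never sees or signs off on the generated command, which is what GTIG means by output being "blindly executed."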

Google’s Threat Intelligence Group documented early, experimental malware that directly leverages large language models to adapt and evade detection. PROMPTFLUX, a VBScript dropper found in June 2025, queries Gemini to request VBScript obfuscation and evasion code, logging AI responses and containing a “Thinking Robot” module that aims to fetch new evasive code just-in-time; its full self-update routine appears under development and some features remain commented out. Variants instruct Gemini to rewrite the script hourly as an “expert VBScript obfuscator,” embedding API keys and self-regeneration logic to create recursive metamorphism. Although PROMPTFLUX shows proof-of-concept capabilities rather than active network compromise, Google disabled associated assets and strengthened model protections.

Separately, GTIG observed APT28 using PROMPTSTEAL (aka LAMEHUG), a data miner that queries an LLM (Qwen2.5-Coder) via Hugging Face during live operations to generate system- and file-collection commands on the fly; PROMPTSTEAL likely uses stolen API tokens and blindly executes LLM-generated commands to harvest documents and system info before exfiltration.
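PROMPTFLUX’s hourly self-rewrite into the Startup folder suggests a crude defensive heuristic: persistence locations should not contain scripts whose contents keep changing. A minimal sketch of that idea, assuming periodic snapshots of a monitored folder (the `.vbs` filter and folder choice are illustrative, not from GTIG’s report):

```python
import hashlib
import pathlib

def snapshot(folder):
    """Map each VBScript in `folder` to the SHA-256 of its bytes."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(folder).glob("*.vbs")}

def changed_scripts(before, after):
    """Scripts present in both snapshots whose hash changed between
    scans -- a crude signal for self-rewriting persistence."""
    return sorted(n for n in before.keys() & after.keys()
                  if before[n] != after[n])
```

In practice `folder` would be the user’s Startup directory, and snapshots taken some time apart would flag a script that regenerates itself hourly, while stable legitimate scripts never appear in the diff.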

“PROMPTSTEAL likely uses stolen API tokens to query the Hugging Face API. The prompt specifically asks the LLM to output commands to generate system information and also to copy documents to a specified directory,” reads the report published by Google. “The output from these commands are then blindly executed locally by PROMPTSTEAL before the output is exfiltrated. Our analysis indicates continued development of this malware, with new samples adding obfuscation and changing the C2 method.”

GTIG also flagged multiple AI-enabled malware families in the wild, such as FruitShell, a PowerShell reverse shell that runs arbitrary commands and embeds hard-coded AI prompts meant to evade AI-powered defenses.

QuietVault, a JavaScript credential stealer, hunts NPM and GitHub tokens, using on-host AI prompts and installed AI CLI tools to find additional secrets.

Together, these cases mark a shift from AI-as-tooling to AI-in-the-loop malware, signaling an emerging threat trajectory that defenders must anticipate and mitigate.

Google warns that in 2025, the underground cybercrime market for AI-powered tools evolved significantly. GTIG found numerous multifunctional AI tools supporting all attack phases, especially phishing campaigns. Many mirrored legitimate SaaS models, offering free versions with ads and paid tiers for advanced features like image generation, API access, and Discord integration.

The report also detailed how nation-state actors misused generative AI tools in their operations.

“State-sponsored actors from North Korea, Iran, and the People’s Republic of China (PRC) continue to misuse generative AI tools including Gemini to enhance all stages of their operations, from reconnaissance and phishing lure creation to C2 development and data exfiltration,” concludes the report.

Follow me on Twitter: @securityaffairs and Facebook and Mastodon

Pierluigi Paganini

(SecurityAffairs – hacking, malware)




Source: https://securityaffairs.com/184275/malware/google-sounds-alarm-on-self-modifying-ai-malware.html