Google warns of new AI-powered malware families deployed in the wild
嗯,用户让我帮忙总结一下这篇文章的内容,控制在100个字以内,而且不需要用“文章内容总结”或者“这篇文章”这样的开头。直接写描述即可。首先,我需要通读整篇文章,理解其主要内容。 文章主要讲的是Google的威胁情报小组GTIG发现今年网络攻击中出现了重大变化,攻击者开始利用人工智能来部署新的恶意软件家族,这些恶意软件在执行过程中集成了大型语言模型(LLMs)。这种新方法使得恶意软件能够在执行过程中动态改变,达到了传统恶意软件难以企及的操作灵活性。Google将这种技术称为“即时”自我修改,并举了PromptFlux和PromptSteal等例子。 此外,文章还提到攻击者滥用Google的Gemini模型在整个攻击生命周期中的多个案例,包括漏洞发现、钓鱼诱饵制作和数据外泄工具开发等。最后,Google采取措施禁用了相关账户,并加强了模型的安全防护。 接下来,我需要将这些关键点浓缩到100字以内。首先,提到GTIG发现攻击者利用AI生成新型恶意软件;然后说明这些恶意软件如何动态改变;接着提到滥用Gemini模型的例子;最后指出Google采取的应对措施。 可能的结构是:Google报告称今年网络攻击中出现重大变化,攻击者利用AI生成新型恶意软件,并动态改变代码以逃避检测。同时,攻击者滥用Gemini模型进行漏洞发现、钓鱼诱骗和数据外泄工具开发。Google已采取措施禁用相关账户并加强安全防护。 检查一下字数是否在100字以内,并确保没有使用禁止的开头方式。 </think> Google报告称今年网络攻击中出现重大变化,攻击者利用AI生成新型恶意软件,并动态改变代码以逃避检测。同时,攻击者滥用Gemini模型进行漏洞发现、钓鱼诱骗和数据外泄工具开发。Google已采取措施禁用相关账户并加强安全防护。 2025-11-5 15:0:20 Author: www.bleepingcomputer.com(查看原文) 阅读量:36 收藏

Artificial Intelligence

Google's Threat Intelligence Group (GTIG) has identified a major shift this year, with adversaries leveraging artificial intelligence to deploy new malware families that integrate large language models (LLMs) during execution.

This new approach enables dynamic altering mid-execution, which reaches new levels of operational versatility that are virtually impossible to achieve with traditional malware.

Google calls the technique "just-in-time" self-modification and highlights the experimental PromptFlux malware dropper and the PromptSteal (a.k.a. LameHug) data miner deployed in Ukraine, as examples for dynamic script generation, code obfuscation, and creation of on-demand functions.

Wiz

PromptFlux is an experimental VBScript dropper that leverages Google's LLM Gemini in its latest version to generate obfuscated VBScript variants.

It attempts persistence via Startup folder entries, and spreads laterally on removable drives and mapped network shares.

"The most novel component of PROMPTFLUX is its 'Thinking Robot' module, designed to periodically query Gemini to obtain new code for evading antivirus software," explains Google.

The prompt is very specific and machine-parsable, according to the researchers, who see indications that the malware's creators aim to create an ever-evolving "metamorphic script."

PromptFlux "Thinking" function
PromptFlux "StartThinkingRobot" function
Source: Google

Google could not attribute PromptFlux to a specific threat actor, but noted that the tactics, techniques, and procedures indicate that it is being used by a financially motivated group.

Although PromptFlux was in an early development stage, not capable to inflict any real damage to targets, Google took action to disable its access to the Gemini API and delete all assets associated with it.

Another AI-powered malware Google discovered this year, which is used in operations, is FruitShell, a PowerShell reverse shell that establishes remote command-and-control (C2) access and executes arbitrary commands on compromised hosts.

The malware is publicly available, and the researchers say that it includes hard-coded prompts intended to bypass LLM-powered security analysis.

Google also highlights QuietVault, a JavaScript credential stealer that targets GitHub/NPM tokens, exfiltrating captured credentials on dynamically created public GitHub repositories.

QuietVault leverages on-host AI CLI tools and prompts to search for additional secrets and exfiltrate them too.

On the same list of AI-enabled malware is also PromptLock, an experimental ransomware that relies on Lua scripts to steal and encrypt data on Windows, macOS, and Linux machines.

Cases of Gemini abuse

Apart from AI-powered malware, Google's report also documents multiple cases where threat actors abused Gemini across the entire attack lifecycle.

A China-nexus actor posed as a capture-the-flag (CTF) participant to bypass Gemini's safety filters and obtain exploit details, using the model to find vulnerabilities, craft phishing lures, and build exfiltration tools.

Iranian hackers MuddyCoast (UNC3313) pretended to be a student to use Gemini for malware development and debugging, accidentally exposing C2 domains and keys.

Iranian group APT42 abused Gemini for phishing and data analysis, creating lures, translating content, and developing a "Data Processing Agent" that converted natural language into SQL for personal-data mining.

China's APT41 leveraged Gemini for code assistance, enhancing its OSSTUN C2 framework and utilizing obfuscation libraries to increase malware sophistication.

Finally, the North Korean threat group Masan (UNC1069) utilized Gemini for crypto theft, multilingual phishing, and creating deepfake lures, while Pukchong (UNC4899) employed it for developing code targeting edge devices and browsers.

In all cases Google identified, it disabled the associated accounts and reinforced model safeguards based on the observed tactics, to make their bypassing for abuse harder.

AI-powered cybercrime tools on underground forums

Google researchers discovered that on underground marketplaces, both English and Russian-speaking, the interest in malicious AI-based tools and services is growing, as they lower the technical bar for deploying more complex attacks.

"Many underground forum advertisements mirrored language comparable to traditional marketing of legitimate AI models, citing the need to improve the efficiency of workflows and effort while simultaneously offering guidance for prospective customers interested in their offerings," Google says in a report published today.

The offers range from utilities that generate deepfakes and images to malware development, phishing, research and reconnaissance, and vulnerability exploitation.

As the cybercrime market for AI-powered tools is getting more mature, the trend indicates a replacement of the conventional tools used in malicious operations.

The Google Threat Intelligence Group (GTIG) has identified multiple actors advertising multifunctional tools that can cover the stages of an attack.

The push to AI-based services seems to be aggressive, as many developers promote the new features in the free version of their offers, which often include API and Discord access for higher prices.

Google underlines that the approach to AI from any developer "must be both bold and responsible" and AI systems should be designed with "strong safety guardrails" to prevent abuse, discourage, and disrupt any misuse and adversary operations.

The company says that it investigates any signs of abuse of its services and products, which include activities linked to government-backed threat actors. Apart from collaboration with law enforcement when appropriate, the company is also using the experience from fighting adversaries "to improve safety and security for our AI models."

Wiz

Secrets Security Cheat Sheet: From Sprawl to Control

Whether you're cleaning up old keys or setting guardrails for AI-generated code, this guide helps your team build securely from the start.

Get the cheat sheet and take the guesswork out of secrets management.


文章来源: https://www.bleepingcomputer.com/news/security/google-warns-of-new-ai-powered-malware-families-deployed-in-the-wild/
如有侵权请联系:admin#unsafe.sh