Google says hackers are abusing Gemini AI for all attack stages
2026-02-12 07:15:24 · Source: www.bleepingcomputer.com

State-backed hackers are using Google's Gemini AI model to support all stages of an attack, from reconnaissance to post-compromise actions.

Threat actors from China (APT31, Temp.HEX), Iran (APT42), North Korea (UNC2970), and Russia used Gemini for target profiling and open-source intelligence gathering, generating phishing lures, translating text, writing code, testing for vulnerabilities, and troubleshooting.

Cybercriminals are also showing increased interest in AI tools and services that could help with illegal activities, such as ClickFix social-engineering campaigns.


AI-enhanced malicious activity

The Google Threat Intelligence Group (GTIG) notes in a report today that APT adversaries use Gemini to support their campaigns "from reconnaissance and phishing lure creation to command and control (C2) development and data exfiltration."

Chinese threat actors employed an expert cybersecurity persona to request that Gemini automate vulnerability analysis and provide targeted testing plans in the context of a fabricated scenario.

“The PRC-based threat actor fabricated a scenario, in one case trialing Hexstrike MCP tooling, and directing the model to analyze Remote Code Execution (RCE), WAF bypass techniques, and SQL injection test results against specific US-based targets,” Google says.

Another China-based actor frequently employed Gemini to fix their code, carry out research, and provide advice on technical capabilities for intrusions.

The Iranian adversary APT42 leveraged Google's LLM for social engineering campaigns and as a development platform to speed up the creation of tailored malicious tools, using it for debugging, code generation, and research into exploitation techniques.

Additional threat actor abuse was observed for implementing new capabilities into existing malware families, including the CoinBait phishing kit and the HonestCue malware downloader and launcher.

GTIG notes that no major breakthroughs have occurred in that respect, though the tech giant expects malware operators to continue to integrate AI capabilities into their toolsets.

HonestCue is a proof-of-concept malware framework observed in late 2025 that uses the Gemini API to generate C# code for second-stage malware, then compiles and executes the payloads in memory.

HonestCue operational overview
Source: Google

CoinBait is a React SPA-wrapped phishing kit masquerading as a cryptocurrency exchange for credential harvesting. It contains artifacts indicating that its development was advanced using AI code generation tools.

One indicator of LLM use is logging messages in the malware's source code prefixed with "Analytics:", which could help defenders track data exfiltration processes.
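As an illustrative sketch (not tooling from the report), a defender could scan strings extracted from a suspicious sample for this "Analytics:" prefix; the function name and sample strings below are hypothetical:

```python
import re

# Match the "Analytics:"-prefixed logging artifact noted in the CoinBait
# source code; capture the message up to the closing quote.
ANALYTICS_RE = re.compile(r'Analytics:\s*([^"]+)')

def find_analytics_markers(strings):
    """Return the messages of any 'Analytics:'-prefixed log strings."""
    hits = []
    for s in strings:
        m = ANALYTICS_RE.search(s)
        if m:
            hits.append(m.group(1))
    return hits

# Hypothetical strings one might pull from a sample with the `strings` tool:
sample = [
    'console.log("Analytics: credentials captured")',
    'fetch(endpoint)',
    'console.log("Analytics: exfil batch sent")',
]
print(find_analytics_markers(sample))
# → ['credentials captured', 'exfil batch sent']
```

In practice such a check would run over strings dumped from a binary or over phishing-kit source files, and the hits would feed into detection signatures.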

Based on the malware samples, GTIG researchers believe that the malware was created using the Lovable AI platform, as the developer used the Lovable Supabase client and lovable.app.

Cybercriminals also used generative AI services in ClickFix campaigns delivering the AMOS info-stealing malware for macOS. Users were lured into executing malicious commands through ads listed in search results for troubleshooting queries on specific issues.

AI-powered ClickFix attack
Source: Google

The report further notes that Gemini has faced AI model extraction and distillation attempts, with organizations leveraging authorized API access to methodically query the system and reproduce its decision-making processes to replicate its functionality.

Although this is not a direct threat to users of these models or their data, it constitutes a significant commercial, competitive, and intellectual property problem for the models' creators.

Essentially, actors take information obtained from one model and transfer it to another using a machine learning technique called "knowledge distillation," which trains fresh models from more advanced ones.

“Model extraction and subsequent knowledge distillation enable an attacker to accelerate AI model development quickly and at a significantly lower cost,” GTIG researchers say.

Google flags these attacks as a threat because they constitute intellectual property theft, they are scalable, and they severely undermine the business model of AI-as-a-service, which could impact end users soon.

In a large-scale attack of this kind, Gemini AI was targeted by 100,000 prompts that posed a series of questions aimed at replicating the model’s reasoning across a range of tasks in non-English languages.

Google has disabled accounts and infrastructure tied to documented abuse, and has implemented targeted defenses in Gemini’s classifiers to make abuse harder.

The company assures that it "designs AI systems with robust security measures and strong safety guardrails" and regularly tests the models to improve their security and safety.



Article source: https://www.bleepingcomputer.com/news/security/google-says-hackers-are-abusing-gemini-ai-for-all-attacks-stages/