Google: state-backed hackers exploit Gemini AI for cyber recon and attacks


Pierluigi Paganini February 13, 2026

Google says nation-state actors used Gemini AI for reconnaissance and attack support in cyber operations.

Google DeepMind and the Google Threat Intelligence Group (GTIG) report a rise in model extraction or “distillation” attacks aimed at stealing AI intellectual property, which Google has detected and blocked. While APT groups have not breached frontier models, private firms and researchers have tried to clone proprietary systems. State-backed actors from North Korea, Iran, China, and Russia use AI for research, targeting, and phishing. Threat actors are also experimenting with agentic AI, AI-powered malware such as HONESTCUE, and underground “jailbreak” services.

Threat actors now use large language models to craft polished, culturally accurate phishing messages that remove common red flags like poor grammar. They also run “rapport-building” phishing, holding realistic multi-step conversations to gain trust before delivering malware.

Google reported that North Korea-linked hacker group UNC2970 used its Gemini AI model to gather intelligence on targets and support cyber operations. The company also said other threat groups now weaponize generative AI to speed up attack stages, run information operations, and even attempt model extraction attacks.

“The North Korean government-backed actor UNC2970 has consistently focused on defense targeting and impersonating corporate recruiters in their campaigns. The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance.” reads the report published by Google. “This actor’s target profiling included searching for information on major cybersecurity and defense companies and mapping specific technical job roles and salary information.”

Iran-linked group APT42 also used generative AI tools like Gemini to boost reconnaissance and targeted social engineering. The group searched for official email addresses, researched organizations to build believable pretexts, and created tailored personas based on target biographies. The nation-state actor also used AI for language translation and understanding local context. Google disrupted the activity and disabled related assets.

In September 2025, Google tracked new malware called HONESTCUE that uses the Gemini API to generate malicious C# code on demand. Instead of storing full payloads, the malware sends prompts to Gemini, receives source code for a second-stage downloader, compiles it in memory with .NET tools, and executes it without writing files to disk. This fileless approach helps evade detection. Attackers also host payloads on platforms like Discord CDN. Researchers believe a single actor or small group is testing this AI-assisted malware as a proof of concept.

In November 2025, GTIG found COINBAIT, a phishing kit built with help from AI. It impersonates a major crypto exchange to steal login details. Some of the activity links to UNC5356, a group known for SMS and phone phishing. The kit was likely created using Lovable AI and built as a complex React website, and it includes detailed “Analytics” logging that shows how it tracks and exfiltrates data. The attackers hid their infrastructure behind Cloudflare and other trusted services to avoid detection. COINBAIT reflects a shift toward modern web tooling and cloud services, may be used by multiple groups, and also connects to AI-hosted ClickFix scams that trick users into installing malware like ATOMIC.

Underground forums show strong demand for AI tools built for cybercrime. Since most threat actors cannot build their own models, they rely on established services like Gemini. One example, Xanthorox, claimed to be a private custom AI for malware and phishing, but it actually ran on commercial and open-source AI tools layered together.

Attackers need stolen API keys to scale abuse, creating risks for organizations using cloud AI services. Criminals often exploit weak security in open-source AI platforms to steal and resell API keys, fueling a black market.

Google disabled accounts linked to this abuse and continues strengthening safeguards, threat detection, red teaming, and secure AI development through frameworks like SAIF and research projects such as Big Sleep and CodeMender.

“The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly.” concludes Google.


Pierluigi Paganini

(SecurityAffairs – hacking, Gemini AI)




Article source: https://securityaffairs.com/187958/ai/google-state-backed-hackers-exploit-gemini-ai-for-cyber-recon-and-attacks.html