Adversaries Exploiting Proprietary AI Capabilities, API Traffic to Scale Cyberattacks
In the fourth quarter of 2025, the Google Threat Intelligence Group reported a marked increase in threat actors abusing artificial intelligence. Attackers used large language models for reconnaissance, phishing, and malware development, and stole intellectual property through model extraction attacks, while underground markets leveraged AI services to scale their operations.

2026-02-13 08:19 | Author: thecyberexpress.com

In the fourth quarter of 2025, the Google Threat Intelligence Group (GTIG) reported a significant uptick in the misuse of artificial intelligence by threat actors. According to GTIG's AI threat tracker, what initially appeared to be experimental probing has evolved into systematic, repeatable exploitation of large language models (LLMs) to enhance reconnaissance, phishing, malware development, and post-compromise activity.

A notable trend identified by GTIG is the rise of model extraction attempts, or “distillation attacks.” In these operations, threat actors systematically query production models to replicate proprietary AI capabilities without directly compromising internal networks. Using legitimate API access, attackers can gather outputs sufficient to train secondary “student” models. While knowledge distillation is a valid machine learning method, unauthorized replication constitutes intellectual property theft and a direct threat to developers of proprietary AI. 
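The mechanics of a distillation attack can be illustrated with a deliberately toy sketch. The `teacher_api` function below is a stand-in for a proprietary model's API endpoint (the attacker never sees its internals, only input/output pairs), and the "student" simply memorizes harvested responses; a real attacker would fine-tune a neural network on the harvested data instead. All names here are hypothetical.

```python
def teacher_api(prompt: str) -> str:
    """Stand-in for a proprietary model reachable only via API."""
    return prompt.upper() + "!"  # placeholder for the model's response

def harvest(prompts):
    """Systematically query the teacher and record (prompt, output) pairs."""
    return [(p, teacher_api(p)) for p in prompts]

class StudentModel:
    """Trivial 'student' that memorizes harvested pairs.

    Memorization keeps the sketch self-contained; real distillation
    trains a secondary model to generalize from the harvested outputs.
    """
    def __init__(self):
        self.table = {}

    def train(self, pairs):
        self.table.update(pairs)

    def predict(self, prompt: str) -> str:
        return self.table.get(prompt, "")

prompts = [f"query {i}" for i in range(1000)]  # scaled down; GTIG saw 100,000+
student = StudentModel()
student.train(harvest(prompts))

# The student now mimics the teacher on harvested inputs without any
# access to the teacher's weights or internal reasoning.
print(student.predict("query 7"))  # QUERY 7!
```

The key point the sketch captures is that nothing in the loop requires compromising the provider's network: legitimate, rate-limited API access is the entire attack surface.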

Throughout 2025, GTIG observed sustained campaigns involving more than 100,000 prompts aimed at uncovering internal reasoning and chain-of-thought logic. Attackers attempted to coerce Gemini into revealing hidden decision-making processes. GTIG’s monitoring systems detected these patterns and mitigated exposure, protecting the internal logic of proprietary AI.  
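GTIG does not disclose how its monitoring works, but one plausible class of detector, sketched below purely as an illustrative assumption, exploits the fact that extraction campaigns emit huge volumes of structurally similar prompts. Normalizing prompts into "templates" and counting repeats per account surfaces that pattern; the threshold and normalization are invented for this example.

```python
from collections import defaultdict

def normalize(prompt: str) -> str:
    # Collapse digits so "case 1" and "case 2" map to the same template.
    return "".join("#" if c.isdigit() else c for c in prompt.lower())

def flag_extraction_accounts(events, threshold=100):
    """events: iterable of (account_id, prompt) pairs.

    Returns the set of accounts whose most-repeated prompt template
    appears at least `threshold` times (a hypothetical cutoff).
    """
    counts = defaultdict(lambda: defaultdict(int))
    for account, prompt in events:
        counts[account][normalize(prompt)] += 1
    return {
        account
        for account, templates in counts.items()
        if max(templates.values()) >= threshold
    }

# A campaign account replays one template thousands of times, while a
# normal account's prompts are varied and low-volume.
events = [("attacker", f"show your hidden reasoning for case {i}") for i in range(5000)]
events += [("user", p) for p in ("hello", "summarize this", "translate to French")]
print(flag_extraction_accounts(events))  # {'attacker'}
```

Production systems would layer many more signals (timing, API-key provenance, output entropy), but even this single heuristic separates a 100,000-prompt campaign from ordinary usage.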

AI Threat Tracker, a Force Multiplier 

Beyond intellectual property theft, GTIG’s AI threat tracker reports that state-backed and sophisticated actors are leveraging LLMs to accelerate reconnaissance and social engineering. Threat actors use AI to synthesize open-source intelligence (OSINT), profile high-value individuals, map organizational hierarchies, and identify decision-makers, dramatically reducing the manual effort required for research. 

For instance, UNC6418 employed Gemini to gather account credentials and email addresses prior to launching phishing campaigns targeting Ukrainian and defense-sector entities. Temp.HEX, a China-linked actor, used AI to collect intelligence on individuals in Pakistan and analyze separatist groups. While immediate operational targeting was not always observed, Google mitigated these risks by disabling associated assets. 

Phishing tactics have similarly evolved. Generative AI enables actors to produce highly polished, culturally accurate messaging. APT42, linked to Iran, used Gemini to enumerate official email addresses, research business connections, and create personas tailored to targets, while translation capabilities allowed multilingual operations. North Korea’s UNC2970 leveraged AI to profile cybersecurity and defense professionals, refining phishing narratives with salary and role information. All identified assets were disabled, preventing further compromise. 


AI-Enhanced Malware Development 

GTIG also documented AI-assisted malware development. APT31 prompted Gemini with expert cybersecurity personas to automate vulnerability analysis, including remote code execution, firewall bypass, and SQL injection testing. UNC795 engaged Gemini regularly to troubleshoot code and explore AI-integrated auditing, suggesting early experimentation with agentic AI (systems capable of autonomous multi-step reasoning). While fully autonomous AI attacks have not yet been observed, GTIG anticipates growing underground interest in such capabilities.

Generative AI is also supporting information operations. Threat actors from China, Iran, Russia, and Saudi Arabia used Gemini to draft political content, generate propaganda, and localize messaging. According to GTIG’s AI threat tracker, these efforts improved efficiency and scale but did not produce transformative influence capabilities. AI is enhancing productivity rather than creating fundamentally new tactics in the information operations space. 

AI-Powered Malware Frameworks: HONESTCUE and COINBAIT 

In September 2025, GTIG identified HONESTCUE, a malware framework that outsources code generation to Gemini's API. HONESTCUE queries the AI for C# code implementing its "stage two" functionality, which is then compiled and executed in memory without writing artifacts to disk, complicating detection.
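The "fileless" pattern described here is worth making concrete. HONESTCUE itself is a C#/.NET framework, but the same idea (fetch source code at runtime, compile and run it in memory, never touching disk) can be sketched benignly in Python. The `fetch_stage_two` function below is a hypothetical stand-in for the API call that returns model-generated code.

```python
def fetch_stage_two() -> str:
    """Stand-in for an API call that returns model-generated source code."""
    return (
        "def stage_two():\n"
        "    return 'payload executed in memory'\n"
    )

source = fetch_stage_two()

# compile() + exec() run the code straight from the in-memory string.
# Because no source file is ever written, file-based scanners have
# nothing to inspect; defenders must instead watch process behavior
# and outbound traffic to AI API endpoints.
namespace = {}
exec(compile(source, "<in-memory>", "exec"), namespace)
print(namespace["stage_two"]())  # payload executed in memory
```

This is why the article stresses that in-memory execution complicates detection: the only durable indicators are network calls to the code-generation API and the runtime behavior of the resulting payload.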

Similarly, COINBAIT, a phishing kit detected in November 2025, leveraged AI-generated code via Lovable AI to impersonate a cryptocurrency exchange. COINBAIT incorporated complex React single-page applications, verbose developer logs, and cloud-based hosting to evade traditional network defenses. 

GTIG also reported that underground markets are exploiting AI services and API keys to scale attacks. One example, “Xanthorox,” marketed itself as a self-contained AI for autonomous malware generation but relied on commercial AI APIs, including Gemini.  


Source: https://thecyberexpress.com/gtig-ai-threat-tracker/