Generative AI in Social Engineering & Phishing in 2025
Generative AI has made phishing attacks markedly more efficient and personalised. In 2025, attackers use deepfake voices, flawless emails, and real-time impersonation to bypass security defences. Phishing clicks surged 466 per cent; notable cases include a deepfake romance scam and an impersonation of a senior U.S. official's voice that triggered a global alert. Spear-phishing emails generated by large language models are as effective as human-crafted ones. Defence requires combining behavioural anomaly detection, multi-factor authentication, and employee training.

2025-09-03 | www.darknet.org.uk

Generative AI has turned phishing into a high-speed, highly personalised threat. In 2025, attackers are no longer limited by poor grammar or generic templates. They now wield voice deepfakes, perfect emails, and live impersonation to bypass even vigilant SOCs with ease. For defenders, understanding how this wave unfolds and how to disrupt it is vital now.

Trend Overview

Generative AI enables content so polished that spam filters and human analysts struggle to catch it. A Kaspersky report shows over 142 million phishing link clicks in Q2 2025, a rise attributed to GenAI's ability to mimic trusted senders and craft convincing messages in real time.

Automated phishing is also exploding. Sift's Q2 2025 Digital Trust Index reports that breached personal data is up 186 per cent and that phishing incidents surged 466 per cent in Q1 2025, driven by automated kits that write and send lures without human aid. The shift indicates that attackers using generative AI have dramatically increased both the scale and the believability of phishing campaigns.

Campaign Analysis

Deepfake Romance Scam – Hollywood Voice Cloning, $430K Loss

A woman in Southern California was scammed out of over $430,000 in a romance fraud built on a deepfake of actor Steve Burton. The fraudsters moved her from Facebook Messenger to WhatsApp, using hyperrealistic video and voice impersonation to build trust and manipulate her into selling her home ("L.A. Woman Loses Life Savings After Scammers Use AI to Pose as 'General Hospital' Star"). The case shows how AI deepfakes can bypass emotional and rational filters, weaponising empathy at scale.

Voice Deepfake of U.S. Official Sparks Global Alert

In mid-2025, an AI-generated audio impersonation of the U.S. Secretary of State triggered a global security alert. The fake voice was used in communications with senior officials, raising alarms about state-level deception and geopolitical manipulation ("AI voice deepfake of US Secretary of State triggers global security alert"). The incident illustrates that generative AI is no longer limited to financial scams; it now threatens national trust and diplomacy.

LLM Spear-Phishing as Effective as Human Attackers

A 2024 academic study tested spear-phishing emails generated by a large language model (LLM) against human-crafted emails and a control group. The AI-generated emails achieved a 54 per cent click-through rate, on par with human attackers (54 per cent) and far above the baseline (12 per cent) ("Evaluating Large Language Models' Capability to Launch Fully Automated Spear Phishing Campaigns"). This shows that LLM automation now rivals human creativity in social-engineering effectiveness.

Detection Vectors and TTPs

Generative AI phishing tactics often blend text, voice, and video, exploiting multiple vectors at once. Spear-phishing emails written by LLMs usually bypass traditional keyword- or style-based detection. Security teams must shift toward behavioural anomaly detection: flagging messages that deviate from a user's regular communication patterns, scoring sender reputation, and spotting anomalies in interaction patterns.
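The behavioural approach can be sketched in code. The following is a minimal, illustrative example only: the `Message` features, weights, and thresholds are assumptions chosen for demonstration, not a production detector.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    hour_sent: int          # hour of day, 0-23
    word_count: int
    has_urgent_language: bool

def anomaly_score(msg: Message, history: list[Message]) -> float:
    """Score how far a message deviates from the sender's historical pattern.

    Higher scores mean more anomalous. Features and weights are illustrative.
    """
    prior = [m for m in history if m.sender == msg.sender]
    if not prior:
        return 1.0  # unseen sender: treated as maximally suspicious

    # Feature 1: sent outside the sender's usual hours
    usual_hours = {m.hour_sent for m in prior}
    time_dev = 0.0 if msg.hour_sent in usual_hours else 0.4

    # Feature 2: length deviation relative to the sender's historical mean
    mean_len = sum(m.word_count for m in prior) / len(prior)
    len_dev = min(abs(msg.word_count - mean_len) / max(mean_len, 1), 1.0) * 0.3

    # Feature 3: urgency language from a sender who rarely uses it
    urgency_rate = sum(m.has_urgent_language for m in prior) / len(prior)
    urgency_dev = 0.3 if msg.has_urgent_language and urgency_rate < 0.1 else 0.0

    return time_dev + len_dev + urgency_dev
```

A real system would learn features and weights from mail telemetry rather than hard-code them, but the principle is the same: the baseline is the user's own behaviour, not a keyword list.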

Voice deepfakes exploit telephone-based social engineering, where a caller's supporting details (names, scripts, internal references) may check out while tone and timing do not. Detection strategies need to include voice biometrics, challenge-response methods, and secondary verification, especially for high-value or emotionally charged requests.
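A challenge-response check against pre-recorded deepfake audio can be sketched as follows. The word pool, three-word phrase length, and 30-second expiry are illustrative assumptions; the point is that a random, short-lived phrase cannot be known to a recording made in advance.

```python
import secrets
import time

# Illustrative word pool; a real deployment would use a larger dictionary.
WORDS = ["amber", "falcon", "quartz", "meadow", "cinder", "harbor", "velvet", "tundra"]

CHALLENGE_TTL_SECONDS = 30  # assumption: challenges expire quickly

def issue_challenge() -> tuple[str, float]:
    """Generate a random three-word phrase the live caller must repeat."""
    phrase = " ".join(secrets.choice(WORDS) for _ in range(3))
    return phrase, time.monotonic()

def verify_response(challenge: str, issued_at: float, response: str) -> bool:
    """Accept only an exact, timely repetition of the challenge phrase."""
    if time.monotonic() - issued_at > CHALLENGE_TTL_SECONDS:
        return False  # stale: a pre-recorded clip could not contain the phrase
    return response.strip().lower() == challenge.lower()
```

In practice the spoken response would pass through speech-to-text before comparison, and the check would be combined with voice biometrics rather than used alone.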

Industry Response and Law Enforcement

In July 2025, Kaspersky announced it had blocked over 142 million phishing link clicks in Q2, a surge driven by GenAI use ("AI-powered phishing attacks are on the rise and getting smarter"). Blocking at that scale shows that detection infrastructure must scale in proportion to automated attack volume.

Meanwhile, companies like Pindrop report a 475 per cent increase in synthetic voice attacks in insurance and a 149 per cent rise in fraudulent banking calls, all powered by deepfake audio generation (Pindrop's 2025 Voice Intelligence & Security Report). These findings have prompted new vendor partnerships and early-stage integration of voice-deepfake detection into identity verification services.

CISO Playbook

  • Deploy behavioural detection that flags anomalous email/content requests outside standard communication patterns.
  • Require MFA for high-risk actions and verify requests through alternative channels (e.g., video or out-of-band confirmation) before fulfilling.
  • In contact centres, integrate voice biometric systems and random voice-response challenges to detect deepfake audio.
  • Train staff to recognise generative phishing indicators: unusual context, overly polished tone, or inconsistencies in caller ID.
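The playbook's out-of-band verification step can be sketched as a simple gating structure. The `RequestGate` class and the `HIGH_RISK_ACTIONS` set below are hypothetical illustrations, not a real product API: high-risk actions are held in a pending state until someone confirms them on a second channel.

```python
from enum import Enum, auto

class Status(Enum):
    PENDING = auto()    # awaiting out-of-band confirmation
    VERIFIED = auto()
    REJECTED = auto()

# Illustrative set of actions that must never proceed on a single channel.
HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "vendor_bank_change"}

class RequestGate:
    """Hold high-risk requests until confirmed on a second channel."""

    def __init__(self) -> None:
        self._pending: dict[str, str] = {}  # request_id -> action

    def submit(self, request_id: str, action: str) -> Status:
        if action not in HIGH_RISK_ACTIONS:
            return Status.VERIFIED  # low-risk: proceeds without extra checks
        self._pending[request_id] = action
        return Status.PENDING

    def confirm_out_of_band(self, request_id: str, confirmed: bool) -> Status:
        """Resolve a pending request after e.g. a video call or callback."""
        if request_id not in self._pending:
            raise KeyError(f"unknown request: {request_id}")
        del self._pending[request_id]
        return Status.VERIFIED if confirmed else Status.REJECTED
```

The design choice here is deliberate: the gate makes the second channel mandatory in code, so a convincing email or voice alone can never complete a high-risk action.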

Closing Perspective

Generative AI has profoundly changed phishing and social engineering. It is no longer a matter of ‘if’ defenders can detect manipulated content; it is about whether they can detect behaviour that breaks trust. The most effective defence will come from systems and teams that monitor for anomalies across channels, not just suspicious content.

Always verify suspicious communications through layered checks, and follow established legal protocols when responding to suspected fraud.


Source: https://www.darknet.org.uk/2025/09/generative-ai-in-social-engineering-phishing-in-2025/