In an AI World, Every Attack is a Social Engineering Attack
2025-11-5 10:15:11 Author: securityboulevard.com

An on-call IT worker receives an urgent call from the CEO requesting an MFA reset. The IT worker grants access – only to find out later that the voice they heard was not the CEO's. The fraudulent access was used to launch a ransomware attack that caused tens of millions of dollars in damages.

An open-source contribution from a developer with three years of productive and helpful contributions includes an innocuous-looking configuration script missed by other maintainers. The script installs a DLL that grants attackers remote code execution on developer machines, infecting thousands of them before discovery.


Today, these scenarios require dedicated and organized attackers who methodically pick and research targets, exploiting public information to credibly impersonate or prioritize potential victims. These types of focused attacks are far more effective than older, “spray and pray” phishing schemes, but are expensive in terms of personnel and time. High-cost, high-reward, non-scalable. 

AI is About to Change This 

With the emergence of large language models (LLMs) and Generative AI, tasks that previously required significant investments in human capital and training are about to become completely automatable and turnkey. The same script kiddies who helped scale botnets, DDoS (distributed denial of service), and phishing attacks are about to gain access to social engineering as a service. 

Rather than a profit-motivated or state-sponsored hacking group picking one or two targets at a time, in a world of cheaply scalable, multi-channel attacks, this style of attack is going to radically expand. Instead of a single IT worker or executive assistant, attackers will now target all of them. Sophisticated attacks that become cheap will move “down-market,” to target multiple small and medium-sized businesses. For state actors pursuing strategic aims, why carefully infect one open-source project when bot contributors with synthetic identities can try to build trust in hundreds of thousands of them? 

When every phishing attack at scale is targeted and multichannel, every human interaction that isn’t face-to-face becomes a threat surface. When the cost gets cheap enough, the attack surface broadens horizontally and vertically, exposing far more companies, institutions, and individuals to attacks. We are already starting to see the beginnings of this trend, with attackers using AI to automate the identification of vulnerable companies and individuals.

Neither our institutions nor our personal relationships are prepared for this. The inherent trust of human-to-human interaction over voice calls, text messages, video conferences, social media, etc., is so ingrained into our business processes today that it will take years, if not decades, for our institutions to meaningfully respond to such a threat.

As Terrifying as This Sounds, It Is, In Fact, Significantly Worse

The same AI that is being used today to generate fraudulent content and influence discussions on the internet is also capable of generating synthetic accounts that are increasingly indistinguishable from real, human accounts.  It is now becoming economical to completely automate the process of operating millions of accounts for years to emulate human behavior and build trust. 

Therefore, in a few years, looking at an account’s activity will no longer be a valid signal of whether that account is human or machine. Even if we stop naively trusting voice or video calls, the follow-up methods for filtering out bots or false identities we rely on today will also fail us.  

This is a far more immediate and substantial threat than quantum computing. AI botnets capable of credible social engineering attacks at scale are a certainty within months or years. Quantum-resistant encryption is meaningless if people and organizations can’t distinguish staff or customers from AI attackers.  

For identity security and social engineering, LLMs and Generative AI have effectively passed the Turing test. Therefore, any security paradigm relying on user behavior to distinguish humans from bots will fail. AIs are too good at credibly imitating and replaying human interactions online. The only answer is to provide people with a stronger form of identity, specifically a cryptographic solution without stored secrets. 

Avoiding stored secrets – eliminating the complexity and replay threats inherent to key management – is the only path forward if identity is to authenticate you, and not a proxy of you. Stored secrets will inevitably be compromised, so we must instead choose identity systems in which assertions of identity cannot be stolen or replayed.

AI is forcing our hand. Unless and until we can empower humans to securely possess their own cryptographic keys, we face a world where AI advantages the attackers far more than the rest of us. 


This article was co-written by Cory Ondrejka, Creator of Second Life and a consumer tech executive who has held senior technical roles at the world’s largest companies.


Article source: https://securityboulevard.com/2025/11/in-an-ai-world-every-attack-is-a-social-engineering-attack/