A Barracuda Networks analysis of unsolicited and malicious emails sent between February 2022 and April 2025 indicates that 14% of the business email compromise (BEC) attacks identified were created using a large language model (LLM).
Conducted in collaboration with a group of researchers from Columbia University and the University of Chicago, the analysis also finds that just over half (51%) of all the spam messages identified were written using an LLM.
Asaf Cidon, an associate professor at Columbia University, said that in addition to simply increasing productivity, cyberattackers appear to be leveraging generative AI to more rapidly create variants of these attacks that they hope will be more difficult to detect.
Cybercriminals also appear to be using AI to refine their emails and possibly their English rather than to change the tactics of their attacks. They are also testing variations of wording to see which are more effective in bypassing defenses and encouraging more targets to click links in much the same way A/B testing is conducted by marketers, the report finds.
The analysis also shows that while the emails generated using LLMs tend to be more formal and grammatically correct, they typically convey the same level of urgency generally associated with BEC attacks.
As a result, it is becoming more difficult for end users to detect these attacks, an issue that is likely to be further exacerbated with the rise of deepfakes that make use of AI to, for example, create audio files that impersonate executives, said Cidon. In fact, it won’t be long before most BEC attacks are crafted using LLMs, he added.
As such, cybersecurity teams will increasingly need to depend on tools and platforms that use metadata to thwart these attacks before they find their way into an inbox by identifying the domain used to send these messages, he added. While cybercriminals are getting more adept at creating new domains and websites to launch their attacks, advances in AI are also making it easier for cybersecurity defenders to identify them, he noted.
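The metadata-based screening described above can be illustrated with a minimal sketch: comparing the domain in an email's visible From: header against the domain in its envelope Return-Path, a mismatch being one common signal of a spoofed sender. The sample message, domain names, and function names below are hypothetical, and a production tool would combine many more signals (SPF, DKIM, DMARC, domain age) than this single check.

```python
# Minimal sketch: flag messages whose displayed From: domain does not
# match the envelope sender's domain. All names here are illustrative.
from email import message_from_string
from email.utils import parseaddr


def sender_domains(raw_message: str) -> tuple[str, str]:
    """Return (from_domain, return_path_domain) parsed from raw headers."""
    msg = message_from_string(raw_message)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, return_addr = parseaddr(msg.get("Return-Path", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    return_domain = return_addr.rsplit("@", 1)[-1].lower()
    return from_domain, return_domain


def looks_spoofed(raw_message: str) -> bool:
    """Flag a mismatch between the displayed and envelope sender domains."""
    from_domain, return_domain = sender_domains(raw_message)
    return bool(from_domain and return_domain and from_domain != return_domain)


sample = (
    "From: CEO <ceo@example-corp.com>\n"
    "Return-Path: <bounce@freshly-registered.example>\n"
    "Subject: Urgent wire transfer\n"
    "\n"
    "Please process this payment today.\n"
)
print(looks_spoofed(sample))  # True: displayed and envelope domains differ
```

This kind of check operates entirely on metadata, before any analysis of the message body, which is why it holds up even as LLMs make the body text itself harder to distinguish from legitimate mail.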
In effect, organizations of all sizes are now caught up in an AI cybersecurity arms race. Cybercriminals are clearly becoming more adept at using LLMs to create more attacks than ever. As the cost of crafting these attacks continues to drop to near zero, cybercriminals can afford to craft more of them simply because a few successful attacks can justify much of the effort, noted Cidon. Every malicious email that makes it into an inbox simply increases the chances one of those attacks will succeed, he added.
It’s not clear to what degree AI is changing the economics of cybersecurity for attackers and defenders, but the scale at which the battle is being fought is fundamentally changing. What remains to be seen is to what degree some organizations will soon find themselves overwhelmed by these attacks simply because, whether out of ignorance or lack of budget, they were unable to adjust to this new reality.