AI is the most significant technological disruption of the decade, and threat actors are already using it to scale their illicit operations. As headlines and fiction blur the line between present reality and theoretical threats, security professionals need threat intelligence that provides clarity and visibility into real-world criminal use of AI.
In our recent webinar, How Threat Actors Use AI: Separating Fact from Fiction, Flashpoint analysts cut through the noise, revealing what our teams are actively observing in threat actor communities, from specialized malicious models to AI-generated attack plans.
Missed the live session? No problem. Here are the most critical takeaways on how AI is currently being weaponized.
The most immediate, high-impact AI threat is not complex code generation but the exploitation of the human element. Beyond producing believable phishing lures, AI is now being used to fabricate a convincing reality that exploits human trust and psychology, specifically our reliance on sight and voice for authentication.
Flashpoint analysts detailed how a finance worker in Hong Kong was duped into authorizing a $25 million wire transfer. The victim had joined a video call with “senior executives” of his company whose video feeds and voices were deepfaked, rendering the deception virtually indistinguishable from a legitimate meeting.
AI is also enabling sophisticated threat actors, such as North Korean (DPRK) operatives, to fraudulently secure and maintain high-value remote IT jobs. Using generative AI, DPRK agents create fake companies and personas to exploit trust in the hiring process; the front companies lend their credentials legitimacy by providing reference checks and portfolio sites. These operations have grown in scale, primarily to facilitate financial gain and intelligence gathering.
Unsurprisingly, financial motivation drives the vast majority of AI-assisted cybercrime, with Flashpoint observing threat actors discussing massive, scalable fraud campaigns designed to bypass existing protection models.
According to data from the FBI and Deloitte, AI-related fraud against financial institutions could create up to $40 billion in annual exposure. This acceleration is driven primarily by three factors detailed in the webinar.
Flashpoint analysts provided a unique look into the dark web’s response to ChatGPT: the rise of the AI-as-a-Service (AaaS) model, which has dramatically lowered the technical barrier for new and existing criminals.
AaaS has led to the proliferation of malicious imitation chatbots, dubbed Dark GPTs (such as WormGPT and FraudGPT), across Telegram, Tor, and open exchanges. While Flashpoint has identified many of these as scams, the ones that work significantly simplify complex crimes: they can write command syntax for vulnerability exploits and recommend how to use other illicit tools.
Ultimately, AI is on track to automate every phase of the cyberattack lifecycle, including strategic decision-making and cost calculation. Flashpoint demonstrated this capability in the webinar, showing how jailbroken AI models can be prompted to create complete, multi-phase attack plans tailored to specific targets. This includes reconnaissance, infiltration, and execution strategies for financial fraud.
Furthermore, Flashpoint demonstrated how AI performs sophisticated cost-efficiency checks, calculating return on investment (ROI) for threat actors. In one example, setting up a complex point-of-sale (POS) fraud scheme cost under $400 while yielding thousands of dollars per day from high-volume merchants, meaning the scheme pays for itself in well under a day, as the quick calculation below illustrates.
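To make those economics concrete, here is a minimal Python sketch of the break-even and ROI math. The $400 setup cost comes from the demonstration; the $2,000 daily yield is an illustrative assumption standing in for “thousands of dollars per day,” not a figure reported by Flashpoint.

```python
# Minimal sketch of the fraud-economics math described above.
# SETUP_COST_USD reflects the "under $400" figure from the webinar;
# DAILY_YIELD_USD is an illustrative assumption for "thousands per day".
SETUP_COST_USD = 400
DAILY_YIELD_USD = 2_000

def days_to_break_even(setup_cost: float, daily_yield: float) -> float:
    """Days of operation needed to recover the initial outlay."""
    return setup_cost / daily_yield

def roi(setup_cost: float, daily_yield: float, days: float) -> float:
    """Return on investment after a given number of days of operation."""
    return (daily_yield * days - setup_cost) / setup_cost

print(f"Break-even: {days_to_break_even(SETUP_COST_USD, DAILY_YIELD_USD):.2f} days")
print(f"30-day ROI: {roi(SETUP_COST_USD, DAILY_YIELD_USD, 30):.0%}")
```

Even at conservative yields, the initial outlay is recovered within hours, which is precisely the kind of calculation the models were shown performing for threat actors.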
For maximum anonymity and customization, threat actors are also leveraging offline models: open-source AI models that run locally. These models let attackers edit system prompts and fine-tune the AI for specific malicious ends, such as malware creation, without fear of commercial guardrails.
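As a minimal sketch of why this matters, the snippet below loads an open-weights chat model with the Hugging Face transformers library and supplies an operator-defined system prompt. The model ID and prompt strings are placeholder assumptions, not observed criminal tooling; the point is simply that, run locally, both the system prompt and the weights are entirely under the operator’s control.

```python
# Minimal sketch: running an open-weights model locally means the operator,
# not a hosted provider, controls the system prompt and the model weights.
# The model ID and prompt strings below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # any open-weights chat model stored locally
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# On a hosted service, the system prompt (and any safety layer around it)
# is fixed by the provider; locally it is just another editable string.
messages = [
    {"role": "system", "content": "<operator-defined system prompt>"},
    {"role": "user", "content": "<operator query>"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Because nothing in this loop touches a provider’s API, moderation, logging, and usage policies never apply, which is exactly the appeal Flashpoint describes.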
AI is not a theoretical future threat; it is actively being used right now to increase the scale, sophistication, and psychological impact of attacks across all critical domains. Watch the on-demand webinar recording for a deeper dive into these key takeaways and the specific threat actor conversations we tracked, and to learn how to prepare your defenses against AI-powered TTPs.