Jamie Levy, director of adversary tactics at Huntress, highlights a rare and revealing incident: a cybercriminal downloaded Huntress’ software, inadvertently giving defenders a front-row seat to how attackers are experimenting with artificial intelligence.
For years, the industry has speculated that threat actors were using AI—but speculation is not proof. This time, there was evidence. By examining the attacker’s activity, the Huntress team confirmed that AI models were being used to automate decision-making, generate scripts, and fine-tune attack chains with far greater efficiency than before.
Levy explains that adversaries are leveraging AI tools much the same way defenders are: analyzing environments, identifying weaknesses, and optimizing their next moves. What’s striking, she says, is how AI lowers the barrier to entry: attackers no longer need deep technical expertise to execute complex operations. Instead, they can prompt an AI model to generate or refine malicious code, making attacks faster and harder to predict.
The conversation also touches on the broader implications for defenders. As attackers use AI to become more adaptive, cybersecurity teams will need to move beyond static defenses and embrace their own AI-driven automation to keep pace. Tools that once offered insight into attacker behavior are now being turned against them, creating a feedback loop between offense and defense.
Levy stresses that the lesson isn’t panic, but preparation. AI is now part of the adversary toolkit, and its use will only accelerate. The challenge for defenders is to leverage the same technology responsibly: anticipating how it can be used maliciously, closing those gaps before attackers exploit them, and maintaining human oversight in an increasingly automated arms race.