Novee has introduced AI Red Teaming for LLM Applications, an autonomous security testing capability built into its AI penetration testing platform. The product is designed to find vulnerabilities in AI-powered applications before attackers do, addressing a category of risk that traditional pentesting tools were never built to handle.
As enterprises deploy more AI-enabled software, from customer-facing chatbots to internal copilots and autonomous agents, security teams are contending with a different class of threats: prompt injection, jailbreak attempts, data exfiltration, and agent manipulation. Static scanners and web application testing tools don’t detect these issues, and manual testing can’t keep pace with the speed at which attackers operate.
Novee’s agent takes a different approach. It autonomously simulates adversarial attack scenarios, chains techniques together, and evaluates how an AI application behaves under pressure. Security teams can point it at any LLM-powered application, regardless of the underlying model provider, whether that’s OpenAI, Anthropic, or an open-source model. The agent produces a vulnerability assessment with remediation guidance and can plug into existing CI/CD pipelines.
“I’ve spent twenty years on the offensive side of cyber, inside government operations, protecting critical infrastructure, and now building AI systems that think like real attackers,” said Ido Geffen, CEO and co-founder of Novee. “What we see consistently is that attackers compress timelines dramatically. The window between vulnerability and exploitation can shrink to minutes. Defending against that requires continuous testing, not periodic assessments.”
The product comes directly out of Novee’s own research. The team recently disclosed a vulnerability in Cursor that allowed attackers to influence a coding agent’s context window and achieve full remote code execution on a developer’s workstation. Additional findings are currently under responsible disclosure with other vendors, and the results of that research feed directly into the agent’s training.
“AI applications introduce an entirely new attack surface, but most organizations are still testing them with tools designed for web applications and infrastructure,” said Gon Chalamish, co-founder and CPO of Novee. “Attackers are already adapting their techniques for AI systems. Security teams need a way to test those systems the same way adversaries attack them.”
Novee’s AI Red Teaming for LLM Applications is currently available in beta. The company was founded by national-level offensive security leaders and has raised $51.5 million within four months of its launch from YL Ventures, Canaan Partners, and Zeev Ventures. Novee is presenting at RSA Conference 2026 in San Francisco.