Novee Brings Autonomous Red Teaming to LLM Applications, Built From Its Own Vulnerability Research
嗯,用户让我总结这篇文章的内容,控制在一百个字以内,而且不需要特定的开头。首先,我需要通读整篇文章,抓住主要信息。 文章主要讲的是Novee公司推出了一种新的AI红队测试工具,用于检测大型语言模型应用的安全漏洞。传统工具无法应对这些新威胁,而Novee的解决方案可以自主模拟攻击,帮助企业在攻击者之前发现漏洞,并提供修复建议。 接下来,我需要将这些信息浓缩到100字以内。要确保涵盖产品名称、功能、目标以及解决的问题。同时,语言要简洁明了,避免冗长。 可能的结构是:公司名称 + 产品名称 + 功能描述 + 解决的问题 + 优势。这样既全面又简洁。 最后检查字数是否符合要求,并确保没有使用“文章内容总结”等开头。 </think> Novee推出AI红队测试工具,专为大型语言模型应用设计,能自主模拟攻击、检测漏洞并提供修复建议,帮助企业防范新兴威胁如提示注入和数据泄露。 2026-3-25 16:11:52 Author: securityboulevard.com(查看原文) 阅读量:1 收藏

Avatar photo

Novee has introduced AI Red Teaming for LLM Applications, an autonomous security testing capability built into its AI penetration testing platform. The product is designed to find vulnerabilities in AI-powered applications before attackers do, addressing a category of risk that traditional pentesting tools were never built to handle.

As enterprises deploy more AI-enabled software, from customer-facing chatbots to internal copilots and autonomous agents, security teams are contending with a different class of threats: prompt injection, jailbreak attempts, data exfiltration, and agent manipulation. Static scanners and web application testing tools don’t detect these issues, and manual testing can’t keep pace with the speed at which attackers operate.

Novee’s agent takes a different approach. It autonomously simulates adversarial attack scenarios, chains techniques together, and evaluates how an AI application behaves under pressure. Security teams can point it at any LLM-powered application, regardless of the underlying model provider, whether that’s OpenAI, Anthropic, or an open-source model. The agent produces a vulnerability assessment with remediation guidance and can plug into existing CI/CD pipelines.

“I’ve spent twenty years on the offensive side of cyber, inside government operations, protecting critical infrastructure, and now building AI systems that think like real attackers,” said Ido Geffen, CEO and co-founder of Novee. “What we see consistently is that attackers compress timelines dramatically. The window between vulnerability and exploitation can shrink to minutes. Defending against that requires continuous testing, not periodic assessments.”

The product comes directly out of Novee’s own research. The team recently disclosed a vulnerability in Cursor that allowed attackers to influence a coding agent’s context window and achieve full remote code execution on a developer’s workstation. Additional findings are currently under responsible disclosure with other vendors, and the results of that research feed directly into the agent’s training.

“AI applications introduce an entirely new attack surface, but most organizations are still testing them with tools designed for web applications and infrastructure,” said Gon Chalamish, co-founder and CPO of Novee. “Attackers are already adapting their techniques for AI systems. Security teams need a way to test those systems the same way adversaries attack them.”

Novee’s AI Red Teaming for LLM Applications is currently available in beta. The company was founded by national-level offensive security leaders and has raised $51.5 million within four months of its launch from YL Ventures, Canaan Partners, and Zeev Ventures. Novee is presenting at RSA Conference 2026 in San Francisco.


文章来源: https://securityboulevard.com/2026/03/novee-brings-autonomous-red-teaming-to-llm-applications-built-from-its-own-vulnerability-research/
如有侵权请联系:admin#unsafe.sh