SentinelOne used RSAC 2026 to push deeper into AI-native security, announcing four new offerings that extend its platform from threat detection into the governance and testing of AI systems themselves.
The first is Prompt AI Agent Security, a real-time discovery and governance control plane built for AI agents and agentic workflows. It monitors and enforces policy on agent interactions at machine speed, covers Model Context Protocol (MCP) servers, and can auto-remediate unauthorized agentic behavior before it causes damage. As enterprises deploy more autonomous agents across their infrastructure, the tool is designed to give security teams visibility and control that traditional approaches cannot provide.
The second is Prompt AI Red Teaming, which lets organizations test homegrown AI applications by simulating real attack techniques: prompt injections, jailbreaks, privilege escalation, and data poisoning. Critically, it runs continuous evaluation as models evolve, not just at deployment time.
The third announcement is the general availability of Purple AI Auto Investigation. Previously in limited release, the feature now gives analysts one-click agentic investigations that autonomously gather cross-stack evidence, synthesize threat data, and construct attack timelines. SentinelOne says the tool “shrinks security investigations that took hours and days into minutes and seconds.” Adoption suggests the pitch is landing: Purple AI Auto Investigation was included in more than 50% of all SentinelOne licenses sold in Q4 FY2026.
Rounding out the announcements: AI data pipelines inside Singularity AI SIEM that reduce data noise by up to 80% before ingestion. For security operations teams drowning in alert volume, that kind of pre-ingestion filtering is a meaningful operational lever.
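Pre-ingestion filtering of this kind generally combines severity thresholds with deduplication. The sketch below illustrates that general technique under stated assumptions; it is not Singularity AI SIEM's pipeline, and the field names and threshold are hypothetical.

```python
# Minimal pre-ingestion filter: drop low-severity events and duplicate
# (source, rule) pairs before they reach SIEM storage.

def filter_events(events, min_severity=4):
    """Keep the first occurrence of each (source, rule) at or above the threshold."""
    seen = set()
    kept = []
    for e in events:
        key = (e["source"], e["rule"])
        if e["severity"] < min_severity or key in seen:
            continue                      # noise or duplicate: never ingested
        seen.add(key)
        kept.append(e)
    return kept

raw = [
    {"source": "fw-1",  "rule": "port-scan", "severity": 2},
    {"source": "edr-7", "rule": "cred-dump", "severity": 9},
    {"source": "edr-7", "rule": "cred-dump", "severity": 9},  # duplicate
]
assert filter_events(raw) == [{"source": "edr-7", "rule": "cred-dump", "severity": 9}]
```

Because the filtering happens before ingestion, it reduces both storage cost and the alert volume analysts must triage, which is why the claimed 80% reduction matters operationally.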
Together, the four products reflect a broader strategic move by SentinelOne to own the security layer not just around traditional endpoints and cloud workloads, but around the AI systems that are increasingly running alongside them.