
SAN FRANCISCO — RSAC 2026 opens here Monday at Moscone Center, with upwards of 40,000 cybersecurity professionals, executives, and policy leaders, myself among them, filing in to take stock of an industry under acute pressure.
Related: RSAC 2026’s full agenda
The dominant undercurrent is already unmistakable: AI hasn’t just arrived in cybersecurity. It has split the field in two.
For the past year, the industry has been simultaneously fighting two wars. One is about using AI to transform defense — rebuilding threat detection, threat response, and security operations from the ground up with AI at the center.
The other war is newer and in some ways more disorienting: figuring out how to secure AI systems themselves — even as attackers are learning to turn those same systems against the companies racing to deploy them.
These two wars demand entirely new weapons and fundamentally different thinking. They are both accelerating — and as the conference opens, it is far from clear that defenders are keeping pace with either.
The shot heard round the SOC
In mid-September 2025, something happened that the industry had long theorized but never quite confronted head-on. Anthropic detected and disrupted what it subsequently documented as the first large-scale cyberattack executed without substantial human intervention.
A Chinese state-sponsored group had manipulated Anthropic’s Claude Code tool into attempting infiltration of roughly 30 global targets — financial institutions, technology companies, chemical manufacturers, government agencies. The AI did 80 to 90 percent of the work: scanning infrastructure, writing exploit code, harvesting credentials, organizing stolen data. Human operators showed up only at a handful of strategic decision points per attack cycle.
Anthropic was candid about what the incident meant. “The barriers to performing sophisticated cyberattacks have dropped substantially,” the company wrote, “and we predict that they’ll continue to do so.”
Less noticed but equally significant: the attackers had gained access by jailbreaking Claude, breaking it into small, seemingly innocent subtasks so that the model executed malicious operations without ever being shown the full picture. The AI wasn’t compromised by a vulnerability in the traditional sense. It was deceived — systematically, at scale, at machine speed.
Speed that no human team can match
The September incident wasn’t an outlier. It was a confirmation.
Unit 42 has tracked mean time to exfiltrate data collapsing from nine days in 2021 to two days in 2023 to roughly 30 minutes by 2025. A February 2026 Malwarebytes report cited a 2025 MIT study in which an AI model using the Model Context Protocol achieved full domain dominance on a corporate network in under an hour — with no human intervention — evading endpoint detection in real time by adapting its tactics on the fly. Malwarebytes called MCP-based attack frameworks a “defining capability” of criminal operations in 2026.
The defense side is being forced to match that pace. Several vendors announcing at RSAC this week are targeting exactly this problem — reducing threat investigations that once took analysts hours down to seconds, cutting mean-time-to-resolution by as much as 90 percent.
That is the operational reality walking through Moscone Center’s doors this week. Attacks are no longer constrained by how fast a human attacker can think, pivot, or type. They are constrained only by compute.
Wave 1 and Wave 2
And yet, this is precisely why the other battle — using AI to transform defense — carries genuine urgency. For three decades, defenders were structurally outmatched. The attack surface expanded faster than human-scale teams could ever respond. The SOC analyst could only work so many hours, parse so many alerts, correlate so many data points. The asymmetry was baked in.
AI-native security architecture offers the first credible counter to that asymmetry. Not AI features bolted onto platforms built a decade ago, but systems designed from the ground up around continuous, autonomous detection and response — systems that can operate at the same speed and scale as the threat. Call it Wave 1: AI deployed to rebuild the defensive stack.
There is good news on Wave 1. “A large portion of what is required is understood today,” said Jamison Utter, vice president at A10 Networks, in a conversation last week.
Cloud security, Kubernetes security, network firewalling, API protection — the tools exist to secure the known infrastructure layer, and the industry knows how to use them. The blocking and tackling, Utter said, is manageable.
Traditional SIEMs are leaving enterprises increasingly exposed as queues keep growing, investigations take longer to correlate and enrich context, and security talent shortages compound the pressure.
Wave 2 is harder and less settled. It is the security of AI itself — hardening models against prompt injection, governing the behavior of autonomous agents, building data-integrity controls that ensure what’s feeding enterprise AI can actually be trusted.
What makes Wave 2 structurally different from anything the industry has faced before is not complexity or scale. It is the nature of the attack surface itself. “Never before was language itself an attack surface,” Utter said. The semantic and non-deterministic character of large language models means adversaries no longer need to craft a malformed packet or inject a SQL string. They can probe an AI system through metaphor, through images, by switching languages mid-conversation — exploiting the very flexibility that makes these systems valuable.
The existing defensive stack wasn’t designed for any of that. “Every other tool we have today — firewalls, NDRs, WAFs, API securities — none of them solve the semantic problem,” Utter said, “because that’s not what they were designed to do.” The companies working the Wave 2 front are younger, smaller, and moving fast. Most enterprises haven’t caught up to what they’re solving.
George Gerchow, a security veteran who has watched successive architectural shifts leave visibility gaps in their wake, frames the pattern plainly.
“Anytime there’s a paradigm shift in technology, it always starts with visibility, or at least it should,” he said. “AI has just exacerbated the problem — it’s really hard to tell what’s going on in that world right now.”
Gerchow, CSO at Bedrock Data, pointed to the specific threat vector driving that gap — rogue AI agents calling on resources and accessing sensitive data with no meaningful oversight. “Having visibility into what they’re truly going to do, what sensitive data they’re going to access, has become nearly impossible,” he said.
Gunter Ollmann, CTO of Cobalt and a three-decade practitioner of offensive security, puts a number on that gap. Cobalt’s own pentesting data shows that organizations are resolving API and cloud vulnerabilities at rates above 70 percent — but when it comes to serious genAI flaws identified during testing, only about one in five gets fixed.
The pace of AI deployment, Ollmann has observed, is outrunning the security discipline needed to validate it. At RSA this week, Cobalt is announcing new AI-driven pentesting capabilities designed to automate reconnaissance and vulnerability discovery at the speed the threat environment now demands.
That distinction — architectural versus cosmetic — is the line I’ll be drawing all week. A lot of vendors on this floor will have an AI story. Fewer will have an AI-native architecture. Fewer still will be able to explain precisely why the legacy model cannot get from here to there — not as a diplomatic talking point, but as a technical and economic reality.
A narrow window
There is one other thing I am carrying into this week. The window matters.
Defenders who move first and farthest from the legacy model have a real advantage right now — in detection speed, in response capability, in the ability to process the kind of data volumes that modern environments generate. But attackers are adopting the same tools. The offensive use of agentic AI is not a future concern. It is a current operational fact, documented and published by the company that built the model that was turned against it.
Utter put the core dynamic in four words: “It’s machines fighting machines.” AI guardrail systems — purpose-built language models trained on attack data — inspecting inbound and outbound LLM traffic in real time, at carrier scale. That is what Wave 2 defense looks like in practice. The race is already on.
The gap between those who have made the architectural shift and those still running legacy-with-AI-features will not widen indefinitely in defenders’ favor. At some point, the tools equalize. What does not equalize is institutional readiness — the trained analysts, the mature playbooks, the governance frameworks, the hard-won organizational trust in automated systems making real-time decisions.
That institutional readiness takes years to build. Which means the time to start is now, and the window is not permanently open.
This week at RSAC, I will be looking for the practitioners and founders who understand both sides of the split — who can name what is broken in the old model specifically, who have made an actual bet on the new one, and who are clear-eyed about how much time is left to make it matter.
Stay tuned. I’ll keep watch — and keep reporting.
Acohido
Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.
(Editor’s note: I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)
March 21st, 2026 | My Take | Top Stories