RSAC 2026 Marked a Turning Point for AppSec. The Reason – Agentic Security
RSA Conference 2026 has just wrapped in San Francisco.
If you’ve been to enough of these events, you know the pattern: they’re valuable for innovation, connection, and hearing where the industry is headed, but after a couple of months they tend to blend into the collective memory of past events, with little to distinguish them.
But then there’s the 1%.
The rare moment you recognize immediately when something shifts.
Not a gradual step forward, but a leap. When what felt experimental or theoretical suddenly becomes real.
RSAC 2026 felt like one of those moments.
What set this year apart was the emergence of Agentic AppSec – not as an idea or an experiment, but as an operational reality already being adopted and executed. It reflects a growing recognition that AI-driven development is fundamentally reshaping the software lifecycle into an Agentic Development Lifecycle (ADLC), and that security models must evolve to support it.
What Defined This Turning Point in RSAC 2026
To understand why RSAC 2026 stood out, it helps to take a closer look at the themes that consistently emerged across the event.
From Assistive AI → Autonomous (Agentic) Security
The biggest shift: Agents have grown to be more than ‘assistants’. Security is no longer just assisted by AI – AI agents increasingly execute it.
Agents are moving from copilots to decision-makers who can investigate, triage, and act. The industry is transitioning from human-paced workflows to machine-speed security operations.
“Security for AI” and “Security by AI” Converge
A major theme across sessions:
Organizations must secure AI systems (LLMs, agents, MCPs), while simultaneously using AI to secure software and pipelines.
AppSec is now responsible for both sides of the equation:
- Protecting AI-generated code and AI components
- Using agents to secure the SDLC, and increasingly, the ADLC
The Rise of the Agentic Development Lifecycle (ADLC)
AI is reshaping how software is written, reviewed, and deployed. Security must adapt to a lifecycle where agents generate, modify, and ship code.
AppSec implication:
- Security shifts from left → everywhere
- From reactive → embedded into autonomous workflows
Explosion of AI Supply Chain Risk
RSAC highlighted growing concern around the security risks introduced by new supply chain components and dependencies, such as LLMs, agents, MCP servers, plugins, and AI SDKs.
There is a clear need for visibility (AI-BOM), provenance, and trust in AI components.
AppSec implication:
- SBOM is evolving into AI-BOM
- You now secure not just code dependencies, but AI dependencies
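As a concrete illustration of what an AI-BOM entry might look like: the CycloneDX SBOM format added a machine-learning-model component type in spec version 1.5, which makes it a natural vehicle for AI-BOM inventories. The sketch below builds a minimal document in that shape; the component names and versions are made up for illustration, not taken from any real inventory.

```python
import json

# A minimal AI-BOM sketch using the CycloneDX document shape
# (spec 1.5 introduced the "machine-learning-model" component type).
# All names and versions below are hypothetical.
ai_bom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        # The model itself is an inventory item...
        {"type": "machine-learning-model", "name": "example-llm", "version": "3.1"},
        # ...and so are the agents, MCP servers, and SDKs around it.
        {"type": "application", "name": "example-mcp-server", "version": "0.4.2"},
        {"type": "library", "name": "example-ai-sdk", "version": "1.8.0"},
    ],
}

print(json.dumps(ai_bom, indent=2))
```

The point is that AI dependencies become first-class inventory entries alongside code dependencies, so the same provenance and risk questions can be asked of both.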
AI-Native Security Vendors vs. Legacy Players
There’s a clear market shift:
The rise of AI-native security companies is challenging traditional vendors. Winning platforms are being rebuilt from the ground up as AI-first, not AI-enhanced.
AppSec implication:
- Expect consolidation around platforms that embed agents deeply
- Not bolt-on AI features
Trust, Governance, and Identity Become Foundational
As agents act autonomously, the question becomes:
Who authorized the agent? What can it do?
Identity and governance are now core security primitives, not add-ons.
AppSec implication:
Security must enforce:
- Agent identity
- Policy boundaries
- Auditability of decisions
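The three requirements above can be sketched in a few lines of code. Everything here (the agent names, the `POLICIES` table, the `AuditLog` shape) is a hypothetical illustration of the pattern, not any vendor's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy table: which actions each agent identity may take.
POLICIES = {
    "remediation-agent": {"open_pr", "comment"},
    "triage-agent": {"comment"},
}

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, agent: str, action: str, allowed: bool) -> None:
        # Every decision is timestamped and kept, so it can be audited later.
        self.entries.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": agent,
            "action": action,
            "allowed": allowed,
        })

def authorize(agent: str, action: str, log: AuditLog) -> bool:
    """Check the agent's identity against its policy boundary and log the decision."""
    allowed = action in POLICIES.get(agent, set())
    log.record(agent, action, allowed)
    return allowed

log = AuditLog()
authorize("remediation-agent", "open_pr", log)  # within the agent's policy
authorize("triage-agent", "merge_pr", log)      # outside the agent's policy
```

The design choice worth noting: authorization and auditing happen in the same code path, so there is no way for an agent action to be decided without leaving a record.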
Taken together, these themes highlight a clear gap: traditional AppSec approaches were not designed for an agentic development lifecycle.
That gap, and how to close it, raised the central question of the event: how do you secure an ecosystem that is driven by agents as much as by humans, if not more? At RSAC 2026, we introduced new capabilities designed to address exactly that.

Securing the Agentic Development Lifecycle
At RSA this year, Checkmarx unveiled a new set of innovations designed to secure the ADLC:
Expansion of the Checkmarx Assist family of agents
Building on Checkmarx Developer Assist, we introduced two new agents, Triage Assist and Remediation Assist, designed to secure the critical post-commit phase. These agents help teams quickly prioritize real risks and fix them efficiently within pull requests (PRs), reducing noise and accelerating secure code delivery.
Introducing Checkmarx AI Supply Chain Security
As organizations increasingly build with AI components, an entirely new layer is introduced into the supply chain, requiring dedicated security to address its unique challenges and risks.
Checkmarx AI Supply Chain Security provides full visibility and risk assessment across the AI stack. With a centralized inventory and AI-BOM covering MCP servers, LLMs, AI agents, SDKs, and more, teams can move fast with AI, without losing control over security.
SAST AI and DAST for AI
Checkmarx enhanced its two core security engines to support AI-powered SAST scanning across virtually any programming language, helping organizations future-proof their technology adoption. In parallel, we strengthened our DAST engine to deliver runtime protection aligned with the realities of AI-driven and “vibe coding” development.
Risk Orchestration within ASPM
Checkmarx also announced a new and enhanced risk management and visibility solution across applications, projects, and repositories to improve decision-making, reduce noise, and highlight critical vulnerabilities.

Closing Notes
The idea that “AppSec is becoming agentic” goes beyond a shift in tooling; it reflects a fundamentally different way of working with and understanding application security.
AppSec is changing its DNA.
That is why, compared to 2025, this year’s event was overwhelmingly focused on AI and Agentic Application Security, with a clear emphasis on how the tooling landscape must evolve to keep pace with the speed of AI-driven development. The shift is no longer about “AI in AppSec,” but about AppSec itself becoming an entirely different paradigm – agentic, autonomous, and continuous by design.
Tags:
ADLC
Agentic AI
Agentic AppSec
conferences
RSAC
RSAC 2026