Agentic AI represents a tectonic shift in the attack surface of modern enterprises. Moving beyond the request-response paradigm of Large Language Models (LLMs), agentic systems possess Distributed Autonomy: the ability to plan multi-step operations, execute tools via external APIs, manage long-term state through memory layers, and perform Retrieval-Augmented Generation (RAG) to ground their reasoning in external data.
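The capabilities above can be sketched as a minimal plan-act-observe loop. This is an illustrative toy, not any real framework's API: the `Tool` and `Agent` names, the tool registry, and the memory list are all assumptions made for the sketch, chosen to show why each tool call and memory write is an action outside the request-response boundary.

```python
# Hypothetical sketch of an agentic loop: act via a tool, then persist the
# observation to memory. Every name here is illustrative, not a real library.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]  # an external action the agent can invoke

@dataclass
class Agent:
    tools: dict[str, Tool]
    memory: list[str] = field(default_factory=list)  # long-lived state across steps

    def step(self, tool_name: str, arg: str) -> str:
        # The tool call executes outside the model interface; its result is
        # written back to memory, where a poisoned value can persist and
        # influence every later step -- the crux of the agentic attack surface.
        result = self.tools[tool_name].run(arg)
        self.memory.append(f"{tool_name}({arg}) -> {result}")
        return result

agent = Agent(tools={"echo": Tool("echo", lambda s: s.upper())})
print(agent.step("echo", "scan logs"))  # SCAN LOGS
print(len(agent.memory))                # 1
```

Note that the memory append happens with no human in the loop, which is precisely the assumption traditional controls make and agentic systems break.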
Traditional security assumptions — which rely on human-mediated input and deterministic logic — fail in the face of Emergent Behavior. In Multi-Agent Systems (MAS), complex outcomes arise from the interactions between autonomous entities, often bypassing the safety guardrails of the underlying foundation models.
The OWASP Agentic Security Initiative (ASI) was established to address these specific “agency” risks. While the OWASP Top 10 for LLM Applications focuses on the risks of the model interface, the ASI taxonomy focuses on the risks of the system’s actions. In an MAS environment, a single vulnerability can cascade well beyond its entry point (its “Blast Radius”): a compromise of one agent’s memory or toolset propagates through inter-agent interactions into systemic failure across the entire infrastructure.