As enterprise AI rapidly matures, we’re entering a new phase, one powered by agentic AI. These intelligent agents are more autonomous, capable of making decisions, taking actions, and adapting dynamically to new environments. This evolution introduces new complexity in how we build AI systems as well as in how we secure them.
Agentic AI doesn’t live in a vacuum. It spans virtual machines, containers, serverless functions, and SaaS applications. That makes holistic cloud security more important than ever. In this modern paradigm, extending proven network security principles from Kubernetes to serverless environments is a critical next step in securing agentic AI.
The cloud has undergone a dramatic transformation in just a few years, evolving from virtual machines to containers orchestrated by Kubernetes, and now to serverless functions.
This progression is layered. Enterprises now run hybrid environments with all three, often integrating third-party SaaS. And agentic AI takes advantage of all of them.
Here’s the challenge: these different platforms have inconsistent security controls. Visibility, enforcement, and policy management vary across layers, creating blind spots that attackers can exploit.
To protect agentic AI, we need a unified security approach that can span and scale across all cloud layers, just like the workloads themselves.
Serverless is a natural fit for agentic AI. Functions spin up on demand, respond to events, scale automatically, and disappear when the work is done, which matches the short-lived, event-driven tasks that autonomous agents generate.
In other words, serverless is where the intelligence of AI meets the efficiency of the cloud. But the more dynamic and distributed the compute model, the more complex the security posture becomes.
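To make the compute model concrete, here is a minimal sketch of an agent task running as an event-driven serverless function. The handler signature follows the common AWS Lambda convention, and run_agent_step is a hypothetical stand-in for whatever agent framework would actually plan and act.

```python
import json

def run_agent_step(task: dict) -> dict:
    """Hypothetical placeholder for an agent framework call
    (planning, tool invocation, model inference, and so on)."""
    return {"task_id": task.get("id"), "status": "completed"}

def handler(event, context):
    """Lambda-style entry point: each agent task arrives as an event,
    runs in an ephemeral function instance, and returns a result."""
    task = json.loads(event.get("body", "{}"))
    result = run_agent_step(task)
    return {"statusCode": 200, "body": json.dumps(result)}
```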
One of the core principles of modern Kubernetes security is the ability to enforce zero trust from within the network, not just at the edge.
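In Kubernetes, that principle is typically expressed as a default-deny NetworkPolicy with explicit allow rules layered on top. The following sketch uses the official Kubernetes Python client to apply a default-deny ingress policy to a hypothetical agents namespace; the namespace and policy names are illustrative.

```python
from kubernetes import client, config

def apply_default_deny(namespace: str = "agents") -> None:
    """Apply a default-deny ingress NetworkPolicy so only traffic that other
    policies explicitly allow can reach pods in the namespace."""
    config.load_kube_config()  # use config.load_incluster_config() inside a pod

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="default-deny-ingress"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(),  # empty selector matches all pods
            policy_types=["Ingress"],               # no ingress rules = deny all ingress
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(namespace, policy)

if __name__ == "__main__":
    apply_default_deny()
```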
But here’s the reality: VMs use one set of security tools. Kubernetes uses another. Serverless? Even more fragmented.
This fragmentation leads to policy drift, weak visibility, and inconsistent enforcement. That’s a problem when AI agents are moving laterally across environments, invoking APIs, triggering functions, and ingesting data from multiple sources.
To truly secure agentic AI, we must extend Kubernetes-style security principles to serverless: identity-based microsegmentation, default-deny policies for east-west traffic, and consistent, centrally managed enforcement that follows each function wherever it runs.
This approach creates an embedded enforcement layer that travels with the workloads—not a bolt-on tool, but integrated security that adapts to modern cloud architectures.
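As a rough illustration of enforcement that travels with the workload, the sketch below wraps a serverless handler with an egress allow-list check. The ALLOWED_HOSTS set, the enforce_egress decorator, and the handler are hypothetical, standing in for policy that a central control plane would normally distribute to every function.

```python
import functools
from urllib.parse import urlparse

# Hypothetical policy; in practice this would be pushed from a central policy engine.
ALLOWED_HOSTS = {"api.internal.example.com", "models.example.com"}

def check_egress(url: str) -> None:
    """Raise if the destination host is not on the workload's allow-list."""
    host = urlparse(url).hostname
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"Egress to {host} is not permitted by policy")

def enforce_egress(fn):
    """Decorator that hands the handler a policy-checked egress helper."""
    @functools.wraps(fn)
    def wrapper(event, context):
        return fn(event, context, egress_check=check_egress)
    return wrapper

@enforce_egress
def handler(event, context, egress_check):
    target = event.get("callback_url", "")
    egress_check(target)  # blocks unexpected lateral or external calls
    # ... the agent's outbound call would happen here ...
    return {"statusCode": 200}
```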
The serverless platforms most commonly used in agentic AI architectures, such as AWS Lambda, Azure Functions, and Google Cloud Run, each have unique runtime models, permission structures, and network configurations. All of them, however, face similar challenges: ephemeral compute, inconsistent traffic controls, and minimal east-west protection.
Security must be embedded in the design phase of agentic AI, not retrofitted later. Key best practices include enforcing least-privilege identities for every agent and function, applying default-deny policies to function-to-function and function-to-API traffic, restricting egress to approved destinations, and maintaining runtime visibility across every environment an agent touches.
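For instance, a least-privilege identity for a single agent function might look like the sketch below, which uses boto3 to create a narrowly scoped IAM policy and attach it to the function's execution role; the role name, bucket, function ARN, and actions are hypothetical placeholders for whatever the agent actually needs.

```python
import json
import boto3

# Hypothetical scope: this agent function may only read one bucket
# and invoke one downstream function, nothing else.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::agent-input-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": ["lambda:InvokeFunction"],
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:agent-tool",
        },
    ],
}

iam = boto3.client("iam")
created = iam.create_policy(
    PolicyName="agent-task-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_role_policy(
    RoleName="agent-task-role",  # hypothetical execution role for the function
    PolicyArn=created["Policy"]["Arn"],
)
```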
By extending network security capabilities from Kubernetes into serverless environments, we enable AI agents to operate autonomously without opening the door to lateral movement, privilege escalation, or data exfiltration.
The next evolution of AI is more autonomous, distributed, and cloud native. To keep up, our approach to security must evolve, too.
By extending proven Kubernetes security principles to serverless technologies, we lay the foundation for comprehensive cloud-native security that enables agentic AI to thrive securely, at scale, and across clouds.