Top 5 Things CISOs Need to Do Today to Secure AI Agents


By Itamar Apelblat, Co-Founder and CEO, Token Security

Agentic AI represents a once-in-a-generation shift in how organizations operate. AI agents are not copilots. They are not better chatbots.

They are autonomous actors that plan, decide, and act. Increasingly, they will write code, move data, execute transactions, provision infrastructure, and interact with customers, often without a human in the loop. They will also operate continuously, across systems, at machine speed.

This transformation is already unlocking enormous business value. But it will only succeed if it is secured properly. And today, most organizations are not prepared.

The prevailing approach to AI security focuses on guardrails such as prompt filtering, output controls, and behavior monitoring. That thinking is flawed. Guardrails attempt to constrain behavior after access has already been granted. But once an AI agent has credentials and connectivity, a single misstep can cause data exfiltration, destructive actions, or cascading failures across interconnected systems.

If you want to secure AI agents without slowing innovation, you need to rethink the control plane. Identity, not prompts, not networks, not vendor assurances, is the only scalable foundation for securing and governing autonomous systems.

For a deeper explanation of why identity is becoming the foundation for AI security, see Securing Agentic AI: Why Everything Starts with Identity.

Here are the five most important actions CISOs should take today to ensure AI agent security:

1. Treat AI Agents as First-Class Identities

The moment an AI agent connects to production systems, APIs, cloud roles, SaaS platforms, or infrastructure, it stops being an experiment and becomes an identity.

Every AI agent uses identities, often many of them: API tokens, OAuth grants, service accounts, cloud roles, secrets, and access keys. Yet in most organizations, these identities are invisible, unmanaged, and poorly governed.

You must mandate that every AI agent is treated as a first-class digital identity:

  • It must have a clear owner
  • It must be authenticated
  • Its permissions must be explicitly defined
  • Its activity must be logged and monitored

If you don’t know which identities your agents are using, you don’t control them.
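As an illustrative sketch only (the record shape, field names, and checks are assumptions, not any specific product's API), the requirements above can be modeled as a minimal identity record that fails a governance check unless every mandatory field is populated:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Minimal record treating an AI agent as a first-class identity."""
    agent_id: str
    owner: str                    # a named, accountable human owner
    credential_ref: str           # pointer to the secret used to authenticate
    permissions: tuple[str, ...]  # explicitly enumerated scopes

def is_governed(agent: AgentIdentity) -> bool:
    """Governed = has an accountable owner, a credential to authenticate
    with, and an explicitly declared permission set. (The fourth
    requirement, activity logging, would hook in at the access layer.)"""
    return (bool(agent.owner) and bool(agent.credential_ref)
            and isinstance(agent.permissions, tuple))

# Example: an agent with no owner fails the check.
orphan = AgentIdentity("ticket-bot", owner="",
                       credential_ref="vault://kv/ticket-bot",
                       permissions=("tickets:read",))
print(is_governed(orphan))  # False
```

The point of the sketch is that governance is a gate, not an afterthought: an agent record missing any of these fields should be rejected before it ever receives credentials.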

2. Shift from Guardrails to Access Control

Guardrails assume that AI can be safely constrained by rules. But AI agents are non-deterministic and adaptive. With an unlimited number of possible prompts and interactions, bypass is not a question of if it will happen, but when.

Even if prompt controls worked 99% of the time, 1% of infinity is still infinity.

Security must move down the stack to where real control exists: access. You need to ask these questions:

  • What systems can this agent reach?
  • What data can it read?
  • What actions can it execute?
  • Under what conditions?
  • For how long?

Once access is tightly scoped, behavior becomes far less dangerous. Identity-based access control is the containment layer for autonomous software. Network controls are too coarse. Prompt filters are too weak. AI platform assurances are not enough.

Identity is the only control plane that spans every system an agent touches.
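A minimal sketch of what answering those five questions looks like in code, assuming a hypothetical grant structure (the names and shape here are illustrative, not a real product API): each grant scopes a system, an action set, and an expiry, and every request is denied unless an unexpired grant covers it.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class AccessGrant:
    """One tightly scoped grant: which system, which actions, until when."""
    system: str              # e.g. "crm"
    actions: frozenset[str]  # e.g. {"read"}
    expires_at: datetime     # time-bound by default

def is_allowed(grants, system: str, action: str, now=None) -> bool:
    """Deny by default; allow only if an unexpired grant covers the request."""
    now = now or datetime.now(timezone.utc)
    return any(g.system == system and action in g.actions and now < g.expires_at
               for g in grants)

grants = [AccessGrant("crm", frozenset({"read"}),
                      datetime.now(timezone.utc) + timedelta(hours=1))]
print(is_allowed(grants, "crm", "read"))      # True
print(is_allowed(grants, "crm", "export"))    # False: action not granted
print(is_allowed(grants, "billing", "read"))  # False: system not reachable
```

Because the default answer is "no", a misbehaving agent can only do what its grants already permit, regardless of what a prompt tells it to do.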

3. Eliminate Shadow AI by Gaining Identity Visibility

Shadow AI is not primarily a tooling problem. It is an identity problem. Developers, IT admins, and business users are already creating AI agents that connect to business-critical systems, leverage APIs, retrieve data, and trigger workflows.

These agents don’t announce themselves. They simply start acting. When security teams lack visibility into these identities, Zero Trust collapses. Unknown agents become trusted by default because their credentials are valid.

You must prioritize:

  • Continuous discovery of machine and non-human identities.
  • Identification of agent-related tokens, service accounts, and OAuth grants.
  • Mapping which agents have access to which systems.

If you can’t see it, you can’t secure it. And in the AI era, what you can’t see is often autonomous.
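The discovery step above can be sketched as a simple audit over a credential inventory. This is a toy illustration with stubbed data; in practice the rows would come from cloud IAM, SaaS admin APIs, and secret stores, and the flagging logic would be far richer:

```python
# Hypothetical inventory rows: (credential_id, kind, claimed_owner_or_empty)
inventory = [
    ("tok-123", "api_token", "payments-agent"),
    ("sa-build", "service_account", ""),   # no known owner -> shadow identity
    ("oauth-77", "oauth_grant", "support-agent"),
    ("key-old", "access_key", ""),         # no known owner -> shadow identity
]

def find_shadow_identities(rows):
    """Return credential ids that no known agent or owner claims.
    Valid-but-unclaimed credentials are exactly the ones Zero Trust
    cannot reason about."""
    return [cred_id for cred_id, _kind, owner in rows if not owner]

print(find_shadow_identities(inventory))  # ['sa-build', 'key-old']
```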

4. Secure Based on Intent, Not Just Static Permissions

AI agents are goal-oriented. Two identical agents with identical permissions can behave very differently depending on their objective. This introduces a missing dimension in traditional access models: intent.

To secure AI agents effectively, organizations must answer:

  • What is this agent meant to accomplish?
  • What actions are required to achieve that goal?
  • Which actions are outside its purpose?

An agent created to summarize support tickets should not be able to export the full customer database. An infrastructure optimization agent should not be able to modify IAM policies. Intent defines acceptable behavior.

This breaks the dangerous assumption that agents can simply inherit human permissions. An agent acting “on behalf of” a highly privileged engineer should not automatically gain every permission that engineer has.

Security for AI agents is not about predicting behavior. It is about enforcing intent through tightly scoped identity and access controls.
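One way to picture intent enforcement is as a second gate layered on top of credential scopes: an action must be both technically permitted by the credential and inside the agent's declared purpose. The policy table and names below are hypothetical, purely to illustrate the idea:

```python
# Hypothetical intent policies: the actions each agent's purpose requires.
INTENT = {
    "ticket-summarizer": {"tickets:read", "summaries:write"},
    "infra-optimizer": {"metrics:read", "instances:resize"},
}

def allowed_by_intent(agent: str, action: str, credential_scopes: set) -> bool:
    """The credential must permit the action AND the action must fall
    inside the agent's declared purpose; either alone is insufficient."""
    return action in credential_scopes and action in INTENT.get(agent, set())

# A summarizer inheriting a broad engineer credential that could export
# the customer database is still blocked: exporting is outside its intent.
scopes = {"tickets:read", "customers:export", "summaries:write"}
print(allowed_by_intent("ticket-summarizer", "tickets:read", scopes))      # True
print(allowed_by_intent("ticket-summarizer", "customers:export", scopes))  # False
```

This is what "enforcing intent" means concretely: the inherited permission exists, but it is never exercisable because it lies outside the agent's purpose.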

5. Implement Full AI Agent Lifecycle Governance

Security failures rarely happen at the moment of creation. They happen over time. Access accumulates. Ownership becomes unclear. Credentials persist. Agents are modified, repurposed, and eventually abandoned, often silently. AI agents compress this lifecycle dramatically. What used to unfold over months can now happen in hours, or even faster.

You must ensure lifecycle governance for every agent:

  • Who owns it today?
  • What access does it currently have?
  • Is that access still aligned to its intent?
  • When should secrets be rotated, access reviewed, or the agent decommissioned?

Without continuous lifecycle control, risk compounds invisibly. If you cannot answer these questions at any given moment, you do not control your AI agents.
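Turning those questions into a recurring automated review might look like the following sketch. The record format and the 90-day rotation threshold are assumptions for illustration; real inputs would come from an identity inventory and the thresholds from policy:

```python
from datetime import datetime, timedelta, timezone

NOW = datetime.now(timezone.utc)

# Hypothetical lifecycle records for two agents.
agents = [
    {"id": "reporter", "owner": "dana",
     "secret_rotated": NOW - timedelta(days=10)},
    {"id": "old-bot", "owner": "",
     "secret_rotated": NOW - timedelta(days=200)},
]

MAX_SECRET_AGE = timedelta(days=90)

def lifecycle_findings(records, now=None):
    """Flag agents with no current owner or an overdue secret rotation."""
    now = now or datetime.now(timezone.utc)
    findings = []
    for r in records:
        if not r["owner"]:
            findings.append((r["id"], "no accountable owner"))
        if now - r["secret_rotated"] > MAX_SECRET_AGE:
            findings.append((r["id"], "secret rotation overdue"))
    return findings

print(lifecycle_findings(agents))
# [('old-bot', 'no accountable owner'), ('old-bot', 'secret rotation overdue')]
```

Run continuously rather than as a one-off audit, a check like this is what keeps risk from compounding invisibly.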

New frameworks for AI agent identity lifecycle governance are emerging to address exactly this challenge. For more information, download Token's new AI Agent Identity Lifecycle Management ebook.

Secure AI Is Scalable AI

Agentic AI is inevitable, and it is overwhelmingly positive for business. The value lies in autonomous access that allows agents to act across systems at scale and machine speed. But autonomy without identity control is chaos.

Organizations that bolt AI onto legacy, human-centric identity models will either overprivilege agents or slow innovation to a halt. Organizations that ignore identity will eventually lose control. The path forward is not to slow down AI. It is to secure it properly.

Identity is the only scalable control plane for agentic AI. Lifecycle governance is non-negotiable. And security must enable, not obstruct, innovation.

The companies that win in the coming decade will be those that leverage AI to transform their business while remaining secure. The key to doing that is identity.

If you’d like to see how Token Security is tackling agentic AI identity at scale, book a demo with our technical team.

Sponsored and written by Token Security.


Source: https://www.bleepingcomputer.com/news/security/top-5-things-cisos-need-to-do-today-to-secure-ai-agents/