Identity-First AI Security: Why CISOs Must Add Intent to the Equation
Summary: The article discusses AI's shift in the enterprise from passive assistant to operator executing complex tasks, and the security challenges this creates: traditional IAM cannot effectively govern agents' dynamic behavior. The author proposes "identity-first security" and "intent-driven permissions," stressing that each AI agent should be treated as a distinct identity whose access is conditioned on intent and context.

2026-2-24 15:15:17 | Source: www.bleepingcomputer.com


Author: Itamar Apelblat, CEO and Co-Founder, Token Security

Not long ago, AI deployments inside the enterprise meant copilots drafting emails or summarizing documents. Today, AI agents are provisioning infrastructure, answering customer support tickets, triaging alerts, approving transactions, writing production code, and so much more. They are no longer passive assistants. They are operators within the enterprise.

For CISOs, this shift creates a familiar but amplified problem: access.

Every AI agent authenticates to systems and services. It uses API keys, OAuth tokens, cloud roles, or service accounts. It reads data, writes configurations, and calls downstream tools. In other words, it behaves exactly like an identity, because it is one.

Yet in many organizations, AI agents are not governed as first-class identities. They inherit the privileges of their creators. They operate under over-scoped service accounts. They are granted broad access just to make sure things work. Once deployed, they often evolve faster than the controls around them.

This is the emerging blind spot in AI security.

The first step toward closing it is what we call identity-first security for AI: recognizing that every autonomous agent must be governed, audited, and attested just like a human user or machine workload. That means unique identities, defined roles, clear ownership, lifecycle management, access control, and auditability.

But here’s the hard truth: identity alone is no longer sufficient.

Traditional identity and access management (IAM) answers a straightforward question: Who is requesting access? In a human-driven world, that was often enough. Users had roles and job functions. Services had defined scopes. Workflows were relatively predictable.

AI agents change that equation.

They are dynamic by design. They interpret inputs, plan actions, and call tools based on context. An AI agent that begins with the mission to generate a quarterly report might, if prompted or misdirected, attempt to access systems unrelated to reporting. An infrastructure agent designed to remediate vulnerabilities might pivot to modifying configurations in ways that exceed its original scope.

When that happens, identity-based controls won't necessarily stop it.

Traditional IAM assumes determinism. A role is granted because a user or service performs a defined function. The scope of action is predictable.

AI agents break that assumption. Their objective may be fixed, but the path they take to achieve it is fluid. They reason, chain tools together, and explore alternative actions.

Static roles were never designed for actors that decide how to act in real time. If the agent’s role allows the action, access is granted, even if the action no longer aligns with the reason the agent was deployed in the first place.

This is where intent-based permissioning becomes essential.

If identity answers who, intent answers why.

Intent-based permissions evaluate whether an agent’s declared mission and runtime context justify activating its privileges at that moment. Access is no longer just a static mapping between identity and role. It becomes conditional on purpose.

Consider an AI agent responsible for deploying code. In a traditional model, it may have standing permissions to modify infrastructure. In an intent-aware model, those privileges activate only when the deployment is tied to an approved pipeline event and change request. If the same agent attempts to modify production systems outside that context, those privileges simply do not activate.

The identity hasn’t changed, but the intent, and therefore the authorization, has.
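The deployment example above can be sketched as a conditional authorization check. The policy, field names, and context shape here are hypothetical, chosen only to mirror the scenario in the text:

```python
def authorize(identity_role: str, action: str, context: dict) -> bool:
    """Intent-aware check: the role alone is not enough; the runtime context
    must justify activating the privilege (illustrative policy, not a real API)."""
    if identity_role != "deploy-agent" or action != "modify_infrastructure":
        return False
    # Privileges activate only when the request is tied to an approved
    # pipeline event and a change request, per the example in the text.
    return bool(context.get("pipeline_event_approved")) and \
        bool(context.get("change_request_id"))

# Same identity, different context, different outcome:
authorize("deploy-agent", "modify_infrastructure",
          {"pipeline_event_approved": True, "change_request_id": "CR-1234"})  # True
authorize("deploy-agent", "modify_infrastructure", {})                        # False
```

Nothing about the agent's identity changes between the two calls; only the context does, and with it the authorization decision.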

This combination addresses two of the most common failure modes we’re seeing in AI deployments.

First, privilege inheritance. Developers often test agents using their own elevated credentials. Those privileges persist in production environments, creating unnecessary exposure. Treating agents as distinct identities can help eliminate this bleed-through.

Second, mission drift. AI agents can pivot mid-run based on prompts, integrations, or adversarial input. Intent-based controls prevent that pivot from turning into unauthorized access.

For CISOs, the value isn’t just tighter control. It’s governance that scales.

AI agents interact with thousands of APIs, SaaS platforms, and cloud resources. Trying to manage risk by enumerating every permissible action quickly becomes unmanageable. Policy sprawl increases complexity, and complexity erodes assurance.

An intent-based model simplifies oversight. Governance shifts from managing thousands of discrete action rules to managing defined identity profiles and approved intent boundaries.
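One way to picture that shift: instead of reviewing thousands of discrete allow-rules, governance reviews a small intent profile per identity. The profile format below is invented for illustration:

```python
# Illustrative intent profiles: one small reviewable object per agent identity,
# in place of thousands of per-action rules.
INTENT_PROFILES = {
    "support-agent": {
        "mission": "resolve customer support tickets",
        "allowed_scopes": {"tickets:read", "tickets:write", "kb:read"},
    },
    "deploy-agent": {
        "mission": "ship approved builds to production",
        "allowed_scopes": {"pipeline:read", "infra:write"},
    },
}

def within_intent(agent_role: str, requested_scope: str) -> bool:
    """Deny any request outside the agent's approved intent boundary,
    regardless of what its underlying service account could technically do."""
    profile = INTENT_PROFILES.get(agent_role)
    return profile is not None and requested_scope in profile["allowed_scopes"]
```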

Policy reviews focus on whether an agent’s mission is appropriate, not whether every individual API call is accounted for in isolation.

Audit trails become more meaningful as well. When an incident occurs, security teams can determine not only which agent performed an action, but what intent profile was active and whether the action aligned with its approved mission.
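An audit entry that captures intent alongside identity might look like the sketch below; the field names are illustrative assumptions, not a defined log format:

```python
import json
from datetime import datetime, timezone

def audit_entry(agent_id: str, action: str,
                intent_profile: str, mission_aligned: bool) -> str:
    """Record not only which agent acted, but which intent profile was active
    and whether the action aligned with its approved mission (illustrative)."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "intent_profile": intent_profile,
        "mission_aligned": mission_aligned,
    })
```

With entries like this, an incident investigation can answer "which agent, under which mission, and was the action in scope" from a single record.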

That level of traceability is increasingly critical for regulatory scrutiny and board-level accountability.

The broader issue is this: AI agents are accelerating faster than traditional access control models were designed to handle. They operate at machine speed, adapt to context, and orchestrate across systems in ways that blur the lines between application, user, and automation.

CISOs cannot afford to treat them as just another workload.

The shift to agentic AI systems requires a shift in security thinking. Every AI agent must be treated as an accountable identity. And that identity must be constrained not only by static roles, but by declared purpose and operational context.

The path forward is clear. Inventory your AI agents. Assign them unique, lifecycle-managed identities. Define and document their approved missions. And enforce controls that activate privileges only when identity, intent, and context align.

Autonomy without governance is a massive risk. Identity without intent is incomplete.

In the agentic era, understanding who is acting is necessary. Ensuring they are acting for the right reason is what makes agentic AI secure.

If you’re securing agentic AI we’d love to show you a technical demo of Token and hear more about what you’re working on.

Sponsored and written by Token Security.


Source: https://www.bleepingcomputer.com/news/security/identity-first-ai-security-why-cisos-must-add-intent-to-the-equation/