Bridging the AI Agent Authority Gap: Continuous Observability as the Decision Engine

The AI Agent Authority Gap - From Ungoverned Actors to Governed Delegation

As discussed in our previous article, AI agents are exposing a structural gap in enterprise security, but the problem is often framed too narrowly.

The issue is not simply that agents are new actors. It is that agents are delegated actors. They do not emerge with independent authority. They are triggered, invoked, provisioned, or empowered by existing enterprise identities: human users, machine identities, bots, service accounts, and other non-human actors.

That makes Agent-AI fundamentally different from both people and software, while still being inseparable from both.

This is why the AI Agent Authority Gap is really a delegation gap. Enterprises are trying to govern an emerging actor without first governing the identities that delegate authority to it.

Traditional IAM was built to answer a narrower question: who has access. But once AI agents are introduced, the real question becomes: what authority is being delegated, by whom, under what conditions, for what purpose, and across what scope? 
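
To make the delegation question concrete, here is a minimal sketch, in Python, of the fields that question implies. It is illustrative only: the class and field names are invented for this article and do not represent any specific product's schema.

```python
# Hypothetical record of a single delegation event. Each field maps to one
# of the questions above: what authority, by whom, under what conditions,
# for what purpose, and across what scope.
from dataclasses import dataclass

@dataclass
class DelegationRecord:
    delegator: str        # by whom: human user, service account, bot
    agent: str            # the AI agent receiving the authority
    authority: list[str]  # what is being delegated: permissions, tools
    conditions: dict      # under what conditions: posture, time, network
    purpose: str          # for what purpose: the declared intent
    scope: list[str]      # across what scope: apps, data, workflows

record = DelegationRecord(
    delegator="svc-payments-01",
    agent="invoice-triage-agent",
    authority=["read:invoices", "create:tickets"],
    conditions={"mfa_verified": True, "business_hours": True},
    purpose="triage overdue invoices",
    scope=["erp-app", "ticketing-app"],
)
```

Traditional IAM stores only a fraction of this record, typically the delegator and a static permission set; the rest is exactly what the delegation gap leaves unanswered.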

First Things First: Governing the Delegation Chain Before Agent AI 

The crucial point is sequencing. An enterprise cannot safely govern Agent-AI unless it first governs, as much as possible, the traditional actors that serve as its delegation source.

Human identities and traditional machine identities are already fragmented across applications, APIs, embedded credentials, unmanaged service accounts, and application-specific identity logic. This is the identity dark matter Orchid describes: authority that exists, operates, and often accumulates risk outside the view of managed IAM. If that dark matter remains unobserved, then the agent inherits an already broken authority model. The result is predictable: the agent becomes an efficient amplifier of hidden access, hidden permissions, and hidden execution paths.

So the bridge to safe Agent-AI adoption is not to start with the agent in isolation. It is first to reduce identity dark matter across the traditional actor estate, so that hidden authority is not delegated to agents or abused in the name of efficiency. That means illuminating all human and traditional machine identities across the application environment, understanding how they authenticate, where credentials are embedded, how workflows actually execute, and where unmanaged authority sits. Orchid’s continuous observability model is the essential foundation for safe Agent-AI implementation because it establishes a verified baseline of real identity behavior across managed and unmanaged environments, rather than relying on incomplete static policy assumptions.
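
As a hedged illustration of that "illuminate first" step, the snippet below shows the core comparison involved: identities actually observed authenticating across the estate versus the managed IAM inventory. The identity names and log fields are invented for the example; this is not Orchid's implementation, only the shape of the analysis.

```python
# Identities known to (and managed by) the IAM system.
managed_identities = {"alice@corp", "svc-payments-01", "ci-deploy-bot"}

# Identities actually observed authenticating across applications,
# e.g. reconstructed from auth logs and embedded-credential scans.
observed_events = [
    {"identity": "alice@corp", "app": "erp-app", "auth": "sso"},
    {"identity": "svc-legacy-ftp", "app": "file-gateway", "auth": "embedded_password"},
    {"identity": "ci-deploy-bot", "app": "artifact-repo", "auth": "api_key"},
    {"identity": "tmp-admin-2019", "app": "erp-app", "auth": "static_token"},
]

observed_identities = {event["identity"] for event in observed_events}

# "Identity dark matter": authority operating outside managed IAM.
dark_matter = observed_identities - managed_identities
print(sorted(dark_matter))  # ['svc-legacy-ftp', 'tmp-admin-2019']
```

Any identity in that difference set is authority an agent could inherit without anyone having consciously granted it.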

From Observability to Authority: Dynamic Governance for Agent AI

Once that traditional actor layer is observed, analyzed, and optimized, its output becomes the input for a real-time Agent-AI Delegation Authority layer. This is where Orchid’s model becomes more powerful than conventional IAM. Its telemetry is not just visibility or insight. It becomes a continuous feed into an authority engine that evaluates the authority profile of the delegator, the context of the target application, the intent behind the requested action, and the effective scope of execution. In other words, the agent should not be governed only by its own nominal permissions. It should be governed continuously by the posture and intent of the actor delegating authority to it, plus the context of what the agent is trying to do.
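
A minimal sketch of what such an authority engine might evaluate appears below. The signal names, weights, and scoring are assumptions made for illustration, not Orchid's actual engine; the point is simply that delegator posture, application context, intent, and scope all enter the decision.

```python
def delegation_risk(delegator_posture: float,
                    app_sensitivity: float,
                    intent_matches_purpose: bool,
                    scope_breadth: float) -> float:
    """Combine the four evaluation signals into a 0..1 risk score.

    delegator_posture: 0 = tightly governed delegator, 1 = weak posture
    app_sensitivity:   0 = low-impact application, 1 = crown-jewel system
    intent_matches_purpose: does the requested action fit declared intent?
    scope_breadth:     0 = one narrow workflow, 1 = broad estate-wide scope
    """
    risk = 0.4 * delegator_posture + 0.3 * app_sensitivity + 0.3 * scope_breadth
    if not intent_matches_purpose:
        # Off-purpose actions are heavily penalized regardless of posture.
        risk = min(1.0, risk + 0.5)
    return risk

# Same requested action, different delegators: the score diverges.
print(delegation_risk(0.1, 0.5, True, 0.2))  # governed delegator, ~0.25
print(delegation_risk(0.9, 0.5, True, 0.2))  # weak-posture delegator, ~0.57
```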

That creates a much stronger model for control. Think about it. A human delegator with weak posture, risky behavior, or excessive hidden access should not yield the same Agent-AI authority as a tightly governed delegator operating in a constrained workflow. Likewise, a machine or service account with broad but poorly understood access should not be allowed to trigger an agent with unconstrained downstream actionability.

Orchid’s role in this model is to continuously assess the delegator, the delegated actor, and the application path between them, then enforce authority accordingly. That is what turns observability into governance.

This is also why the destination state is not just better individual auditing of human, machine, and agent AI actors. It is dynamic sequential delegation control. Orchid can map each agent identity to the applications it touches, the workflows it can invoke, the intent patterns it exhibits, and the scope of its intended actions. It can then use the live observability feed to determine, in real time, whether that agent should be allowed to act, allowed only to recommend, constrained to a limited tool set, or stopped entirely. That is the ultimate meaning of closing the authority gap: not just knowing what an agent can access, but continuously determining what it is allowed to decide and execute at machine speed.
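
To illustrate those four dispositions, here is a hedged sketch of the final mapping step. The thresholds are arbitrary assumptions for the example; the input could be a live score such as the one from the delegation_risk sketch above, recomputed on every action the agent attempts.

```python
from enum import Enum

class Disposition(Enum):
    ACT = "allowed to act"
    CONSTRAINED = "constrained to a limited tool set"
    RECOMMEND_ONLY = "allowed only to recommend"
    STOPPED = "stopped entirely"

def decide(risk: float) -> Disposition:
    # Re-evaluated per action, so authority tracks the live observability
    # feed rather than a static grant issued at provisioning time.
    if risk < 0.25:
        return Disposition.ACT
    if risk < 0.50:
        return Disposition.CONSTRAINED
    if risk < 0.75:
        return Disposition.RECOMMEND_ONLY
    return Disposition.STOPPED

print(decide(0.10))  # Disposition.ACT
print(decide(0.80))  # Disposition.STOPPED
```

The decisive property is not the thresholds but the cadence: the decision is made continuously, at machine speed, rather than once at grant time.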

Closing Reminders

AI agents are not just a new identity type. They are a delegated identity type. Their authority originates from traditional enterprise actors: humans, bots, service accounts, and machine identities. That means the problem of Agent-AI governance does not begin with the agent. It begins with the delegation source. If enterprises cannot observe and govern the human and traditional machine identities that trigger agent actions, then they cannot safely govern the agent either. Orchid’s model makes that sequencing explicit: first reduce identity dark matter across the traditional actor estate, then use continuous observability, analysis, and audit of those delegators as the live input into a real-time Agent-AI Delegation Authority layer. In that model, the agent is governed not only by its nominal permissions but by the posture, intent, context, and scope of the actor delegating authority to it. That is the missing bridge between traditional IAM and safe Agent-AI adoption.

This article is a contributed piece from one of our valued partners.


Source: https://thehackernews.com/2026/04/bridging-ai-agent-authority-gap.html