OAuth is a broadly accepted standard. It’s used all over the internet. But as the usage of LLM agents continues to expand, OAuth isn’t going to be enough.
In fact, relying on OAuth will be dangerous. We won’t be able to set permissions at an appropriate granularity, giving LLMs access to far too much. More data breaches are likely to occur as attackers compromise OAuth tokens. And keeping a record of the actions an LLM agent was authorized to take will be unnecessarily complex.
We need a new approach.
OAuth is a standard for authorization (and incorporates authentication with OpenID Connect, or OIDC). It’s primarily used for access delegation, or allowing one application to access another one with a certain scope — e.g., when you use Sign in with Google on a website, it gets access to certain parts of your Google account. It’s commonly used when integrating third-party applications into SaaS platforms as well (for example, giving Martech apps access to Salesforce).
OAuth defines the scope of access by encoding permissions data (e.g., this user is an admin) on a token, which is issued to a client. When the client tries to take an action, it provides that token in the request, and the application uses it to decide whether the action is allowed (e.g., yes, this user is permitted to delete this folder because they’re an admin).
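To make the flow concrete, here is a minimal sketch of how a resource server might read scopes off a bearer token and gate an action on them. The claim names and scope strings are illustrative, and a real server would verify the token's signature before trusting any claims:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature.
    (A real resource server must verify the signature first.)"""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_allowed(claims: dict, required_scope: str) -> bool:
    """Permit the action only if the token carries the required scope."""
    return required_scope in claims.get("scope", "").split()

# Hypothetical token: an admin client granted two folder scopes
claims = {"sub": "user-42", "role": "admin", "scope": "folders:read folders:delete"}
token = ".".join(
    base64.urlsafe_b64encode(json.dumps(seg).encode()).decode().rstrip("=")
    for seg in ({"alg": "none"}, claims)
) + "."

assert decode_jwt_payload(token) == claims
assert is_allowed(claims, "folders:delete")     # admin may delete the folder
assert not is_allowed(claims, "billing:write")  # scope was never granted
```

The key structural point is that the permissions travel inside the token itself: once issued, the resource server decides from the token alone, without consulting the authorization server again.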
Getting authorization wrong can lead to disaster. Getting authorization wrong for agents can lead to disaster orders of magnitude faster.
Breaches are an obvious example of where authorization can go wrong. In August 2025, attackers compromised OAuth tokens held by Drift, a chatbot from Salesloft that other companies embed on their websites. The attackers used these tokens to access the Salesforce instances of those other companies (Salesloft’s customers) — including Cloudflare, Dynatrace, PagerDuty, and others — and exfiltrate data belonging to their customers, continuing the attack chain. This breach continues to have repercussions: data stolen in the breach was recently leaked.
There have been many other breaches following the same pattern: get OAuth tokens, then get into third-party SaaS platforms. That’s the vector that lets attackers expand to more victims — and it’s the same shape of problem that agents present. In fact, it’s a bigger problem with agents: rather than integrating into one platform, an agent might connect to dozens of services and data sources.
The agentic risk isn’t limited to attackers compromising tokens and breaching systems. There’s also misuse: a normal user interacting with an LLM frontend can obtain information they shouldn’t have, whether accidentally or through prompt jailbreaking. You can try to handle this by sanitizing prompts, but that will never give you full confidence that you’ve blocked access. Everything at the prompt layer is interpreted rather than enforced.
Only limiting the LLM’s access at an authorization enforcement layer will address this.
So why not use OAuth? It’s a standard meant for access delegation, which is exactly what we need to do with agents. But it falls short.
The model of embedding permissions on a token that is then reused numerous times creates three problems: permissions that are too coarse, tokens that can’t be updated or revoked, and no record of how the token is used.
An authorization policy expresses the logic for who can do what. With OAuth, permissions are reflected in scopes. An authorization server manages the policy logic for what scopes can be requested, those scopes are embedded onto a token for a given client, and the token is passed to a resource server, which enforces access based on those scopes.
OAuth can handle coarse, role-level permissions (RBAC), but isn’t practical for fine-grained permissions at a resource level (“this user can read document a, document c, document d …”). The size quickly expands beyond what a token can handle. Fine-grained permissions are exactly what you need for an LLM agent, where you might want to scope permissions at resource or even field level (e.g., an agent that prioritizes accounts for a sales rep should have access to fields for company size, but not customer phone numbers).
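A quick back-of-the-envelope sketch shows why resource-level grants don’t fit in scopes. The document IDs, agent ID, and scope naming below are assumptions for illustration; the point is the size comparison between token-borne scopes and a server-side grant lookup:

```python
# Scope-based: every readable resource becomes a scope string on the token.
readable_docs = [f"doc-{i}" for i in range(10_000)]
scopes = " ".join(f"documents:read:{d}" for d in readable_docs)
assert len(scopes) > 100_000  # ~230 KB of scope text -- far beyond practical token size

# Policy-based alternative: the grant lives server-side, checked per request;
# the token (or credential) only needs to identify the agent.
grants = {("agent-7", "read"): set(readable_docs)}

def can(agent: str, action: str, resource: str) -> bool:
    """Consult the server-side grant store instead of token scopes."""
    return resource in grants.get((agent, action), set())

assert can("agent-7", "read", "doc-42")
assert not can("agent-7", "write", "doc-42")
```

Field-level scoping (company size yes, phone numbers no) compounds the problem: the number of grants multiplies by the number of fields, which only a server-side lookup can absorb.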
It’s also not feasible to model complex permissions rules. For example, a rule like “an agent may read an account’s records only if the sales rep it acts on behalf of owns that account” depends on relationships between users and resources, which a static scope string can’t express.
A token is static, reflecting permissions at the time it was issued. Making frequent or dynamic changes to authorization is difficult or impossible with a token approach, since you often can’t revoke an existing token to update it with new permissions. For example, you might want to monitor agent access in real time and revoke or reduce permissions if an agent is acting dangerously. Or you might want to implement the reverse: a one-off escalation of agent privileges for a particular task (e.g., an agent doesn’t have permanent write access, but can update a record with human-in-the-loop consent).
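Both directions — instant revocation and one-off escalation — fall out naturally if the grant lives server-side and is checked on every request. A minimal sketch, with all names (the grant store, agent and record IDs) assumed for illustration:

```python
import time

class GrantStore:
    """Revocable, time-bound grants checked per request,
    instead of permissions frozen into a long-lived token."""

    def __init__(self):
        self._grants = {}  # (agent, action, resource) -> expiry timestamp

    def grant(self, agent, action, resource, ttl_seconds):
        """One-off escalation, e.g. after human-in-the-loop consent."""
        self._grants[(agent, action, resource)] = time.time() + ttl_seconds

    def revoke(self, agent, action, resource):
        """Immediate revocation -- not possible for an already-issued token."""
        self._grants.pop((agent, action, resource), None)

    def is_allowed(self, agent, action, resource):
        expiry = self._grants.get((agent, action, resource))
        return expiry is not None and time.time() < expiry

store = GrantStore()
store.grant("agent-7", "update", "record-99", ttl_seconds=300)  # human approved
assert store.is_allowed("agent-7", "update", "record-99")

store.revoke("agent-7", "update", "record-99")  # agent looks dangerous: cut it off
assert not store.is_allowed("agent-7", "update", "record-99")
```

The TTL means escalations expire on their own even if no one remembers to revoke them, which is the opposite failure mode of a leaked long-lived token.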
The static nature of tokens and the difficulty of revocation also make them dangerous. Tokens leak; breaches happen. The agentic change in this attack vector is its magnitude: an agent might have tokens for multiple different services, all of which an attacker could potentially access.
In many cases, you want to maintain records of all agent data access and actions: what an agent did or attempted to do and why it was allowed or not allowed to do it (the request, relevant policy details, and the enforcement decision).
Why is recording agentic actions important? And why would you need that record in your authorization system, if you might be tracking the end-to-end agentic workflow elsewhere? Records of the authorization decisions are important for three reasons: detecting misbehavior as it happens, reconstructing exactly what was accessed after a breach, and demonstrating to auditors that agents operated within policy.
Whatever the motivation, this isn’t something OAuth has a way of recording. Once the token is passed back, the authorization server doesn’t see what an agent is doing with it. If you want a record of agent actions in an OAuth world, you need to maintain it out of band — e.g., in your application, where you’re making enforcement decisions.
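As a sketch of what an out-of-band decision record might look like — the field names here are assumptions, not a standard schema — each entry captures the request, the policy that produced the decision, and the enforcement outcome:

```python
import time

def log_decision(agent, action, resource, policy, allowed, log):
    """Append one authorization decision: who asked to do what,
    which rule applied, and whether it was enforced as allowed."""
    log.append({
        "ts": time.time(),
        "agent": agent,
        "action": action,
        "resource": resource,
        "policy": policy,    # the rule that produced the decision
        "allowed": allowed,  # the enforcement outcome
    })

audit_log = []
log_decision("agent-7", "read", "account-123", "rep-owned-accounts", True, audit_log)
log_decision("agent-7", "read", "phone-numbers", "pii-denied-to-agents", False, audit_log)

assert [entry["allowed"] for entry in audit_log] == [True, False]
```

Because the denial is recorded alongside the policy that triggered it, the log answers both “what did the agent do?” and “why was it allowed or blocked?” — the two questions OAuth’s fire-and-forget token model can’t answer.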
What’s the solution?
One answer is implementing better authorization in the underlying resources that agents access. What we need is something with a different structure from the token-based OAuth method: a real-time policy engine, consulted on every action, that can handle resource-level and complex policy modeling, that logs everything agents attempt to do (including with on-behalf-of tracing), and that fires alerts and supports human-in-the-loop least-privilege enforcement if and when agents act incorrectly.
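The pieces above fit together into a small sketch of that shape: a policy engine consulted per action, recording each decision with on-behalf-of tracing and firing an alert on denials. Everything here — class names, rule shapes, the alert hook — is illustrative, not a real product API:

```python
class PolicyEngine:
    """Consulted on every agent action: evaluate rules, log the
    decision with on-behalf-of tracing, alert on denials."""

    def __init__(self, rules, on_alert):
        self.rules = rules        # callables: request dict -> bool
        self.on_alert = on_alert  # e.g. page a human, quarantine the agent
        self.decisions = []       # the audit trail

    def authorize(self, agent, user, action, resource):
        request = {"agent": agent, "on_behalf_of": user,
                   "action": action, "resource": resource}
        allowed = any(rule(request) for rule in self.rules)
        self.decisions.append({**request, "allowed": allowed})
        if not allowed:
            self.on_alert(request)
        return allowed

alerts = []
engine = PolicyEngine(
    # Illustrative rule: agents may only read account records
    rules=[lambda r: r["action"] == "read" and r["resource"].startswith("accounts/")],
    on_alert=alerts.append,
)

assert engine.authorize("agent-7", "rep-alice", "read", "accounts/123")
assert not engine.authorize("agent-7", "rep-alice", "delete", "accounts/123")
assert len(alerts) == 1  # the denied delete fired an alert
```

The design choice that matters is that authorization, logging, and alerting happen in one place, on every request — nothing is frozen into a credential the enforcement layer never sees again.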
But standards take time to change, and it’s still incumbent on organizations to ensure that agents’ actions are under control. Separate from better authorization in the data sources and tools that agents use, agent authorization can be addressed at the tool access layer (e.g., MCP servers, agent frameworks). Any security-minded organization should be recording agent actions, running anomaly detection to catch misbehavior, dynamically reducing permissions or quarantining rogue agents, and maintaining an audit trail. The goal is automating the principle of least privilege: agents should be able to access only the tools they need for the task at hand.
Without a new approach to agentic authorization, we should expect to see more disasters as agents proliferate. If we can get ahead of the authorization problem, we can realize the promise of AI agents without the risks.