Organizations are racing to implement autonomous artificial intelligence (AI) agents across their operations, but a sweeping new study reveals they’re doing so without adequate security frameworks, creating what researchers call “the unsecured frontier of autonomous operations.”
The research, released Tuesday by Enterprise Management Associates (EMA), surveyed 271 IT, security, and identity and access management (IAM) professionals and found that agentic AI has moved from emerging technology to operational reality.
Among companies with 500 or more employees, only 2% reported no plans to adopt the technology, with most organizations already deploying AI agents for both employee-facing and customer-facing tasks.
However, rapid adoption has exposed a critical vulnerability: Existing identity management systems aren’t equipped to handle autonomous agents, and organizations lack the policies needed to secure them.
“When it comes to agentic AI identity, most organizations are woefully unprepared for inherent security risks and operational challenges of managing those identities,” said Ken Buckler, EMA’s research director covering information security, risk, and compliance management, who authored the study.
The findings reveal troubling gaps in organizational readiness. For instance, 79% of organizations without written policies governing agentic AI have already deployed these agents anyway — a cart-before-horse approach that Buckler describes as creating “a significant industry blind spot.”
The research also uncovered widespread dissatisfaction with current IAM solutions. Some 41% of organizations reported security or reliability concerns with their existing IAM providers, a particularly alarming figure since agents operate independently and require robust identity controls. Cost unpredictability emerged as another major challenge, cited by 56% of medium-sized organizations and 45% of large enterprises.
Many organizations mistakenly believe their current IAM infrastructure can simply absorb the additional load of managing autonomous agent identities. The survey data strongly contradicts this assumption, revealing what researchers characterize as “systemic unpreparedness” across the industry.
The study, titled “Agentic AI Identities: The Unsecured Frontier of Autonomous Operations,” calls for a fundamental shift in how organizations approach identity management. It argues AI agents must be treated as “first-class digital identities” managed with equal or greater rigor than human users. Without this transformation, organizations risk embedding systemic vulnerabilities into their operational infrastructure.
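The study does not publish an implementation, but the idea of treating an agent as a "first-class digital identity" can be sketched in a few lines: the agent gets its own registered identity with a named accountable owner, an explicit least-privilege scope set, and short-lived credentials, rather than borrowing a human account. The class and field names below are illustrative assumptions, not anything prescribed by EMA or Ory.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: an AI agent registered as its own identity,
# with scoped, expiring credentials instead of a shared human login.
@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    owner: str                  # accountable human or team
    scopes: frozenset           # explicit, least-privilege permissions
    expires_at: datetime        # credentials expire and must be rotated

    def is_authorized(self, scope: str) -> bool:
        """Deny by default: scope must be granted and the credential unexpired."""
        return scope in self.scopes and datetime.now(timezone.utc) < self.expires_at

agent = AgentIdentity(
    agent_id="support-bot-01",
    owner="customer-success-team",
    scopes=frozenset({"tickets:read", "tickets:comment"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)

print(agent.is_authorized("tickets:read"))    # granted scope -> True
print(agent.is_authorized("tickets:delete"))  # never granted -> False
```

The deny-by-default check is the point: an unlisted permission is refused even if the agent asks for it, which is the "equal or greater rigor than human users" the study calls for.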
The research, conducted in partnership with Ory, a modern IAM company specializing in customer and agentic AI identity management, provides actionable guidance for bridging the gap between enthusiastic adoption and effective governance.
The move toward AI accountability comes as new security threats emerge. DryRun Security CEO James Wickett cautions that 2026 will see attackers shift from prompt injection to what he calls "agency abuse": manipulating AI agents that hold excessive permissions into causing real-world damage.

“You tell it to clean up a deployment, and it might literally delete a production environment because it doesn’t understand intent the way a human does,” Wickett explains. He predicts attackers will launder malicious intent through seemingly routine requests, forcing AI agents to comply because they believe they’re performing legitimate tasks.
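One common mitigation for the scenario Wickett describes is to gate destructive operations behind an explicit allowlist and human confirmation, so a vague instruction like "clean up a deployment" cannot silently escalate into deleting production. The action names, protected targets, and `execute` function below are hypothetical illustrations, not a real API.

```python
# Hypothetical guardrail sketch: destructive actions against protected
# targets are refused unless a human has explicitly confirmed them.
DESTRUCTIVE_ACTIONS = {"delete_environment", "drop_database"}
PROTECTED_TARGETS = {"production"}

def execute(action: str, target: str, human_confirmed: bool = False) -> str:
    """Run an agent-requested action, gating destructive ones on protected targets."""
    if (action in DESTRUCTIVE_ACTIONS
            and target in PROTECTED_TARGETS
            and not human_confirmed):
        return f"refused: {action} on {target} requires human confirmation"
    return f"executed: {action} on {target}"

print(execute("delete_environment", "production"))        # refused: needs sign-off
print(execute("delete_environment", "staging"))           # executed: not protected
print(execute("delete_environment", "production", True))  # executed: human confirmed
```

The check intentionally does not try to infer the requester's intent; it treats the action itself as the trust boundary, which is exactly the control a laundered "routine request" would otherwise bypass.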