CSA and Aembit Survey: 68% of Organizations Can’t Distinguish AI Agent Actions from Human Activity
2026-03-24 14:07:28 · Author: securityboulevard.com


AI agents are already deployed broadly across enterprise environments. The problem is that organizations can’t tell what they’re doing.

That’s the core finding of a new survey report released at RSAC 2026 by the Cloud Security Alliance, commissioned by Aembit. The “Identity and Access Gaps in the Age of Autonomous AI” report surveyed 228 IT and security professionals in January 2026 and found that identity governance for AI agents is, in most organizations, essentially improvised.

The headline number: 68% of organizations cannot clearly distinguish between human and AI agent activity, even as 73% expect AI agents to become vital to their operations within the next year. Eighty-five percent say AI agents are already running in production environments, across task automation (67%), research (52%), developer assistance (50%), and security monitoring (50%). In other words, these agents are doing real work inside real systems with real access, and most organizations lack the controls to attribute their actions.

The identity situation is particularly fragmented. Fifty-two percent of organizations use workload identities for agents, 43% rely on shared service accounts, and 31% allow agents to operate under human user identities. Nearly three-quarters (74%) say agents often receive more access than necessary. Seventy-nine percent believe agents create new access pathways that are difficult to monitor. Only 22% report that access frameworks are applied “very consistently” to AI agents.

Ownership is scattered too: 28% say security teams lead responsibility for AI agents, followed by development and engineering (21%) and IT (19%). Only 9% point to IAM teams.

“AI agents are inheriting human permissions, operating under shared accounts, and expanding the attack surface in ways that existing IAM tools weren’t designed to handle,” said David Goldschlag, co-founder and CEO of Aembit. “Agentic autonomy without identity-level access controls is a risk organizations can’t afford to ignore.”

Hillary Baron, AVP of Research at CSA, added that existing IAM approaches “were not designed for autonomous agents and are showing strain as deployments scale.”

The full report is available from the Cloud Security Alliance. Aembit is a non-human identity and access management platform backed by $45 million in total funding.


Source: https://securityboulevard.com/2026/03/csa-and-aembit-survey-68-of-organizations-cant-distinguish-ai-agent-actions-from-human-activity/