Shadow AI isn’t a fringe behavior; it’s the norm. My team recently analyzed AI usage patterns across multiple industries and found signs of unapproved AI activity in more than 80% of the 100+ customer organizations sampled. Whether it’s sales teams dropping customer data into ChatGPT, HR uploading resumes into Claude, or executives experimenting with AI planning tools, employees are driving adoption from the bottom up. The productivity gains can be real. But the hidden exposures are just as real—and at least for now, remain under the radar.
On the positive side, Shadow AI reflects genuine demand. Workers see immediate value in using generative AI to draft emails, analyze data, or brainstorm strategy. These tools help employees move faster, unblock workflows, and generate insights that would otherwise take hours or days. But this DIY adoption also creates fragmented, inconsistent, and uncontrolled environments. Unlike approved enterprise tools, Shadow AI lacks standardized training, monitoring, and integration. Two people in the same department could be using completely different models with no shared data governance. What feels like productivity today can quickly become operational chaos tomorrow.
One of the most alarming findings from our research is that traditional security controls are blind to much of this activity. Data loss prevention, CASB, and network monitoring tools often fail to pick up Shadow AI usage: browser traffic is encrypted, employees work from unmanaged devices, and developer workflows sit outside the monitored perimeter entirely.
With Anthropic’s Model Context Protocol (MCP) gaining traction, some developers are unknowingly storing API keys, tokens, or credentials in configuration files that can end up in shared repositories—an easy path to data leakage. This invisibility compounds the problem. Organizations aren’t just dealing with risky usage; they often don’t even know it’s happening.
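To make that failure mode concrete, here is a minimal sketch of a pre-commit secret check. The claude_desktop_config.json shape it assumes (an "mcpServers" map whose entries may carry an "env" block of variables) reflects common practice, and the token patterns are an illustrative sample, not a vetted ruleset.

```python
#!/usr/bin/env python3
"""Minimal sketch: flag likely secrets in MCP-style config files before commit.

Assumptions (illustrative): the config follows the common
claude_desktop_config.json shape, where each entry under "mcpServers"
may carry an "env" map of variables. The patterns are a small sample,
not a vetted ruleset.
"""
import json
import re
import sys
from pathlib import Path

# Heuristic patterns for a few well-known credential formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),           # OpenAI-style API keys
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key IDs
    re.compile(r"ghp_[A-Za-z0-9]{36}"),           # GitHub personal access tokens
    re.compile(r"xox[baprs]-[A-Za-z0-9-]{10,}"),  # Slack tokens
]

def scan_config(path: Path) -> list[str]:
    """Return a finding for each env value that looks like a hardcoded secret."""
    findings = []
    data = json.loads(path.read_text())
    for name, server in data.get("mcpServers", {}).items():
        # Credentials most often end up inline in each server's "env" block.
        for key, value in server.get("env", {}).items():
            if any(p.search(str(value)) for p in SECRET_PATTERNS):
                findings.append(
                    f"{path}: server '{name}' env var '{key}' looks like a hardcoded secret"
                )
    return findings

if __name__ == "__main__":
    hits = [h for arg in sys.argv[1:] for h in scan_config(Path(arg))]
    print("\n".join(hits) or "no obvious secrets found")
    sys.exit(1 if hits else 0)
```

Wired into a pre-commit hook or CI job, a check like this fails the commit before a credential ever reaches a shared repository.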
And it’s not just unmanaged API keys (or other secrets and non-human identities) that can leak. The whole point of the protocol is to connect data and context to an LLM: when a developer sets up an MCP server, everything describing what they want the model to do is shared with the model itself. Consider the early adopter use cases for agentic AI (e-commerce and healthcare come to mind), and the potential for leaked PII to be weaponized starts to look less like a possibility and more like an inevitability.
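One containment option is a redaction shim between the data source and the model. The sketch below is an illustration, not an MCP feature: it scrubs a few formulaic PII formats from context before forwarding, and its obvious gaps (names, addresses, free text) are exactly why production systems pair this idea with a real DLP engine.

```python
"""Minimal sketch: scrub easily patterned PII from context before an LLM sees it.

This redaction shim is an illustration, not an MCP feature. The patterns
catch only formulaic formats; free-text identifiers like names slip through.
"""
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each match with a typed placeholder, e.g. [REDACTED:email]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

# Run this over any context payload before forwarding it to the model.
context = "Patient Jane Roe, jane.roe@example.com, SSN 123-45-6789, cell 555-123-4567."
print(redact(context))
# Patient Jane Roe, [REDACTED:email], SSN [REDACTED:ssn], cell [REDACTED:phone].
```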
At XM Cyber, we observed that even when corporate policies explicitly banned generative AI, including in highly regulated industries like healthcare, financial services, and consulting, about one in 10 employees still bypassed the restrictions. That may sound small, but in a workforce of 10,000, that’s 1,000 employees actively sidestepping governance. This isn’t simply a compliance violation. It’s a trust issue. If employees don’t believe official channels can meet their needs, they will find workarounds. The result: sensitive data flowing into unvetted systems, with no audit trail, no record of exposure, and no recourse when regulators ask tough questions.
MIT research has found that 95% of enterprise AI pilots fail to deliver measurable ROI, a finding that highlights the tension between top-down initiatives and bottom-up adoption. Large enterprises move slowly: they evaluate vendors, negotiate contracts, and worry about compliance frameworks. By the time a pilot is rolled out, employees may have already adopted their own tools. Grassroots adoption proves there is genuine value to be had, but without structure, it’s nearly impossible to capture that value at scale. Exposure management is about bridging this gap: aligning the energy of employee innovation with the rigor of enterprise governance.
Banning Shadow AI outright rarely works. Employees will continue to chase productivity wherever they can find it. Instead, CISOs should treat Shadow AI as an exposure to be managed—just like any other attack surface risk. That means identifying where and how it’s happening, quantifying the potential blast radius, and then building policies and controls that guide usage without stifling it.
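One concrete starting point for the identification step: even when browser traffic is encrypted, Shadow AI leaves footprints at the DNS and proxy layer. The sketch below assumes logs exported as CSV with a "domain" column and uses a small illustrative endpoint list, not a maintained SaaS inventory.

```python
"""Minimal sketch: surface likely Shadow AI traffic from exported DNS/proxy logs.

Assumptions: logs arrive as CSV with the queried hostname in a "domain"
column; the endpoint list is a small illustrative sample rather than a
maintained SaaS inventory.
"""
import csv
import sys
from collections import Counter

# Illustrative sample of well-known generative-AI endpoints.
GENAI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com", "api.mistral.ai",
}

def tally(log_path: str, domain_field: str = "domain") -> Counter:
    """Count hits per AI endpoint, including subdomains of the listed hosts."""
    hits = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            domain = (row.get(domain_field) or "").strip().lower()
            if domain in GENAI_DOMAINS or any(
                domain.endswith("." + d) for d in GENAI_DOMAINS
            ):
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in tally(sys.argv[1]).most_common():
        print(f"{count:6d}  {domain}")
```

The point is not the exact list but the baseline: once you can see which endpoints light up, and from which teams, you can size the blast radius instead of guessing at it.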
Here are a few creative steps CISOs can take to mitigate the spread of Shadow AI:
Employees realize they need to embrace AI to remain relevant and will keep experimenting with tools that make their jobs easier. The challenge for security leaders is not to fight this trend but to govern it without suffocating it. With the right approach, one that emphasizes visibility, containment, and education, CISOs can turn Shadow AI from an opaque risk into a manageable asset.