Like me, this news probably shocked almost all AI enthusiasts. The GenAI gold rush has apparently turned into a reckoning. And the fallout may be the next cyberattack.
A recent MIT report reveals an unexpected twist in the AI market, making waves across boardrooms and leadership circles. The report, based on analysis of over 300 AI deployments, interviews with 52 organizations, and surveys from 153 senior leaders, reveals an uncomfortable truth.
Despite $30–40 billion in enterprise investment in GenAI, up to 95% of organizations are getting zero return. No, that is not a typo. Ninety-five percent.
The findings are sobering and, frankly, confirm what many of us in cybersecurity and digital transformation have been saying for years. Rushing the capability doesn’t guarantee you’ll capture the value. While large enterprises are running the most AI pilots, investing the most resources, and assembling the biggest teams, they’re reporting the lowest pilot-to-scale conversion rates. By contrast, mid-market companies moved more decisively, with top performers reporting average timelines of just 90 days from pilot to full implementation.
The malaise seems similar to the cybersecurity industry.
While the cybersecurity market approaches half a trillion dollars in 2025, attacks continue to rise rather than decline. While AI budgets explode, business impact remains elusive. And I’m convinced the real issue is the same in both domains.
Are You Breach Ready? Uncover hidden lateral attack risks in just 5 days. Get a free Breach Readiness and Impact Assessment with a visual roadmap of what to fix first.
An overreliance on technology to solve problems without investing in the foundational capabilities required to manage and adapt to it.
While the world debates how to improve value and make AI projects more successful, I’ve been thinking about the breach exposure risks posed by abandoned AI projects.
It is no secret that increased digitalization and adoption of artificial intelligence have exponentially expanded the attack surface that threat actors can exploit. And fewer than 1% of organizations have adopted microsegmentation capabilities that can anticipate, withstand, and evolve from cyberattacks.
This means most organizations remain grossly unprepared and far from breach ready.
The MIT report mentions that “most organizations fall on the wrong side of the GenAI Divide: adoption is high, but disruption is low. Seven of nine sectors show little structural change. Enterprises are piloting GenAI tools, but very few reach deployment. Generic tools like ChatGPT are widely used, but custom solutions stall due to integration complexity and a lack of fit with existing workflows.”
Also Read: “Would You Like to Play a Game?” The AI-Accelerated Cyber Battlefield is Here Now
AI systems are not the same as traditional IT systems. They are data-hungry, often requiring access to multiple sensitive datasets; highly interconnected, spanning clouds, SaaS platforms, APIs, and internal systems; and continuously evolving, with changing models, features, and dependencies.
This poses even larger problems in Digital Industrial Systems (OT/ICS/CPS/IIoT/IoMD). These environments often rely on older, disparate machinery, making it difficult to aggregate data and leading to poor training sets. Because AI systems often do not understand the “common sense” or real-world physical constraints of a factory floor, they can be inaccurate, generate excessive false alerts, and quickly lose operator trust. More importantly, Digital Industrial Systems prioritize safety and reliability, and “up to 95%” accuracy from an AI system is simply unacceptable.
Despite this, most AI projects were architected using legacy security assumptions: trusted internal networks, broad east-west access, and perimeter-centric defenses. When business confidence waned, projects were paused or abandoned. However, pilots whose anomalies were initially tolerated in the name of speed quietly became persistent deployments, and temporary exceptions hardened into architecture.
Access Forrester Wave
Report | Discover why ColorTokens was rated ‘Superior’ in OT, IoT, and Healthcare Security.
Abandoned AI projects and pilots also create unforeseen and often undetectable vulnerabilities. These can be exploited through AI-driven attacks that evade traditional cybersecurity tools, including prompt injection (via website content or emails), training data poisoning, subtle adversarial inputs (such as imperceptible noise added to data), model inversion and extraction, or even LLM jailbreaking to bypass safety controls.
From a breach-readiness standpoint, abandoned AI systems are more dangerous than actively managed ones — not only because they leave behind an “uncontained” blast radius due to AI workloads being placed in flat network segments with unrestricted lateral connectivity. Without microsegmentation, a compromised AI workload is not a single isolated incident. It becomes an entry point into the enterprise.
Nonproductive or abandoned AI pilots do not reduce this blast radius; they freeze it in place.
AI pipelines rely on service accounts, tokens, and API keys to function autonomously. When projects stop, these identities persist. Over time, they become invisible, unrotated, and highly attractive to attackers seeking low-noise access. Training datasets, feature stores, embeddings, and intermediate artifacts often contain regulated, proprietary, or mission-critical data. These artifacts are rarely classified, encrypted, or lifecycle-managed. Abandoned systems leave this data exposed and undetected.
However, the biggest risks they create are Shadow AI and supply chain attack exposure. Many AI initiatives integrate external model providers or data sources through weakly governed interfaces. Once projects stall, vendor oversight erodes, creating latent supply chain risk that is difficult to detect and even harder to explain after a breach.
Also Read: Containing the Inevitable: What Cyber Leaders Must Prepare for in 2026
We need to act now.
If my point of view sounds alarming, consider recent Anthropic red-teaming research. In a recent evaluation of AI models’ cyber capabilities, current Claude models succeeded at multistage attacks on networks with dozens of hosts using only standard open-source tools, rather than the custom tooling required by previous generations. This demonstrates how quickly barriers to AI-driven cyber operations are falling and reinforces the importance of fundamentals like prompt patching of known vulnerabilities.
The bottom line: everyone needs to step up. Improve governance. Ensure all abandoned or unproductive AI projects are formally shut down and decommissioned. Most AI initiatives were designed to prevent breaches, not to survive them. The implicit assumption was that if controls were added later, risk would be manageable. In reality, AI systems amplify risk because they sit at the intersection of data, automation, and trust.
Breach readiness demands a different mindset: assume compromise, design for containment, and minimize blast radius by default. If you haven’t already, invest in foundational microsegmentation and run AI projects in isolated microsegments that are disconnected from production systems until least-privileged access is explicitly granted.
If AI expansion is increasing your exposure, let’s talk about containing risk and building true breach readiness.
The post 95% of AI Projects Are Unproductive and Not Breach Ready appeared first on ColorTokens.
*** This is a Security Bloggers Network syndicated blog from ColorTokens authored by Agnidipta Sarkar. Read the original post at: https://colortokens.com/blogs/breach-readiness-ai-attack-surface-microsegmentation/