95% of AI Projects Are Unproductive and Not Breach Ready
好的,我现在需要帮用户总结这篇文章的内容,控制在100字以内。首先,我得通读整篇文章,抓住主要观点。 文章开头提到MIT报告指出95%的企业AI项目没有产生回报,这让我意识到这是一个关键点。接着,文章讨论了大型企业虽然投入最多资源,但成功转化率最低,而中型企业表现更好。这说明资源投入不一定带来成功。 然后,文章将AI项目的困境与网络安全进行了类比,指出过度依赖技术而忽视基础能力的问题。这可能是一个重要的比较点,需要包含在总结中。 接下来,文章提到废弃的AI项目带来的安全风险,比如攻击面扩大、数据泄露和供应链攻击等。这些都是关键的安全隐患,必须在总结中提到。 最后,作者建议加强治理、关闭无用项目,并投资微分段技术来减少攻击影响。这些建议是解决问题的方法,也是总结的重要部分。 现在我需要将这些要点浓缩到100字以内。首先点出报告的主要发现:95%的AI项目未产生回报。然后提到大型企业的问题和中型企业的优势。接着类比网络安全的问题。最后强调废弃项目带来的安全风险和建议措施。 这样结构清晰,涵盖了文章的核心内容。 </think> MIT报告指出95%的企业AI项目未产生回报,大型企业虽投入最多但转化率最低,而中型企业表现更佳。AI项目的困境与网络安全类似,过度依赖技术而忽视基础能力导致问题频发。废弃的AI项目更易成为攻击目标,放大安全风险,需加强治理并投资微分段技术以减少攻击影响。 2026-2-3 09:47:18 Author: securityboulevard.com(查看原文) 阅读量:1 收藏

Like me, this news probably shocked almost all AI enthusiasts. The GenAI gold rush has apparently turned into a reckoning. And the fallout may be the next cyberattack.

A recent MIT report reveals an unexpected twist in the AI market, making waves across boardrooms and leadership circles. The report, based on analysis of over 300 AI deployments, interviews with 52 organizations, and surveys from 153 senior leaders, reveals an uncomfortable truth.

Despite $30–40 billion in enterprise investment in GenAI, up to 95% of organizations are getting zero return. No, that is not a typo. Ninety-five percent.

The findings are sobering and, frankly, confirm what many of us in cybersecurity and digital transformation have been saying for years. Rushing the capability doesn’t guarantee you’ll capture the value. While large enterprises are running the most AI pilots, investing the most resources, and assembling the biggest teams, they’re reporting the lowest pilot-to-scale conversion rates. By contrast, mid-market companies moved more decisively, with top performers reporting average timelines of just 90 days from pilot to full implementation.

The malaise seems similar to the cybersecurity industry.

While the cybersecurity market approaches half a trillion dollars in 2025, attacks continue to rise rather than decline. While AI budgets explode, business impact remains elusive. And I’m convinced the real issue is the same in both domains.

Are You Breach Ready? Uncover hidden lateral attack risks in just 5 days. Get a free Breach Readiness and Impact Assessment with a visual roadmap of what to fix first.

An overreliance on technology to solve problems without investing in the foundational capabilities required to manage and adapt to it.

While the world debates how to improve value and make AI projects more successful, I’ve been thinking about the breach exposure risks posed by abandoned AI projects.

It is no secret that increased digitalization and adoption of artificial intelligence have exponentially expanded the attack surface that threat actors can exploit. And fewer than 1% of organizations have adopted microsegmentation capabilities that can anticipate, withstand, and evolve from cyberattacks.

This means most organizations remain grossly unprepared and far from breach ready.

The MIT report mentions that “most organizations fall on the wrong side of the GenAI Divide: adoption is high, but disruption is low. Seven of nine sectors show little structural change. Enterprises are piloting GenAI tools, but very few reach deployment. Generic tools like ChatGPT are widely used, but custom solutions stall due to integration complexity and a lack of fit with existing workflows.”

Also Read: “Would You Like to Play a Game?” The AI-Accelerated Cyber Battlefield is Here Now

AI systems are not the same as traditional IT systems. They are data-hungry, often requiring access to multiple sensitive datasets; highly interconnected, spanning clouds, SaaS platforms, APIs, and internal systems; and continuously evolving, with changing models, features, and dependencies.

This poses even larger problems in Digital Industrial Systems (OT/ICS/CPS/IIoT/IoMD). These environments often rely on older, disparate machinery, making it difficult to aggregate data and leading to poor training sets. Because AI systems often do not understand the “common sense” or real-world physical constraints of a factory floor, they can be inaccurate, generate excessive false alerts, and quickly lose operator trust. More importantly, Digital Industrial Systems prioritize safety and reliability, and “up to 95%” accuracy from an AI system is simply unacceptable.

Despite this, most AI projects were architected using legacy security assumptions: trusted internal networks, broad east-west access, and perimeter-centric defenses. When business confidence waned, projects were paused or abandoned. However, pilots whose anomalies were initially tolerated in the name of speed quietly became persistent deployments, and temporary exceptions hardened into architecture.

Access Forrester Wave™ Report | Discover why ColorTokens was rated ‘Superior’ in OT, IoT, and Healthcare Security.

Abandoned AI projects and pilots also create unforeseen and often undetectable vulnerabilities. These can be exploited through AI-driven attacks that evade traditional cybersecurity tools, including prompt injection (via website content or emails), training data poisoning, subtle adversarial inputs (such as imperceptible noise added to data), model inversion and extraction, or even LLM jailbreaking to bypass safety controls.

From a breach-readiness standpoint, abandoned AI systems are more dangerous than actively managed ones — not only because they leave behind an “uncontained” blast radius due to AI workloads being placed in flat network segments with unrestricted lateral connectivity. Without microsegmentation, a compromised AI workload is not a single isolated incident. It becomes an entry point into the enterprise.

Nonproductive or abandoned AI pilots do not reduce this blast radius; they freeze it in place.

AI pipelines rely on service accounts, tokens, and API keys to function autonomously. When projects stop, these identities persist. Over time, they become invisible, unrotated, and highly attractive to attackers seeking low-noise access. Training datasets, feature stores, embeddings, and intermediate artifacts often contain regulated, proprietary, or mission-critical data. These artifacts are rarely classified, encrypted, or lifecycle-managed. Abandoned systems leave this data exposed and undetected.

However, the biggest risks they create are Shadow AI and supply chain attack exposure. Many AI initiatives integrate external model providers or data sources through weakly governed interfaces. Once projects stall, vendor oversight erodes, creating latent supply chain risk that is difficult to detect and even harder to explain after a breach.

Also Read: Containing the Inevitable: What Cyber Leaders Must Prepare for in 2026

We need to act now.

If my point of view sounds alarming, consider recent Anthropic red-teaming research. In a recent evaluation of AI models’ cyber capabilities, current Claude models succeeded at multistage attacks on networks with dozens of hosts using only standard open-source tools, rather than the custom tooling required by previous generations. This demonstrates how quickly barriers to AI-driven cyber operations are falling and reinforces the importance of fundamentals like prompt patching of known vulnerabilities.

The bottom line: everyone needs to step up. Improve governance. Ensure all abandoned or unproductive AI projects are formally shut down and decommissioned. Most AI initiatives were designed to prevent breaches, not to survive them. The implicit assumption was that if controls were added later, risk would be manageable. In reality, AI systems amplify risk because they sit at the intersection of data, automation, and trust.

Breach readiness demands a different mindset: assume compromise, design for containment, and minimize blast radius by default. If you haven’t already, invest in foundational microsegmentation and run AI projects in isolated microsegments that are disconnected from production systems until least-privileged access is explicitly granted.

If AI expansion is increasing your exposure, let’s talk about containing risk and building true breach readiness.

The post 95% of AI Projects Are Unproductive and Not Breach Ready appeared first on ColorTokens.

*** This is a Security Bloggers Network syndicated blog from ColorTokens authored by Agnidipta Sarkar. Read the original post at: https://colortokens.com/blogs/breach-readiness-ai-attack-surface-microsegmentation/


文章来源: https://securityboulevard.com/2026/02/95-of-ai-projects-are-unproductive-and-not-breach-ready/
如有侵权请联系:admin#unsafe.sh