GUEST ESSAY: Executives trust AI security even as security teams confront blind spots, new risks

By Daniel Bardenstein

In our recent report, Beyond the Black Box, we found a striking gap: 80% of executives believe their organizations have strong security coverage for AI systems. Only about 40% of AppSec practitioners agree.

Related: AI moves mainstream

That’s not just a perception problem. It’s a visibility problem.

The numbers back that up. Sixty-three percent of organizations report discovering “shadow AI” inside their environments — tools, models, or integrations adopted without formal oversight.

Executives tend to measure security by the presence of programs, policies, and governance structures. Practitioners measure it by what they can actually see, inspect, and test. When it comes to AI systems, those two measures rarely land on the same number.

The reason is straightforward: much of the AI supply chain is still invisible to the tools security teams rely on.

Breaking assumptions

Over the past decade, software security built real mechanisms for understanding dependencies. Package managers, dependency scanners, and software bills of materials (SBOMs) emerged because organizations learned they couldn’t secure what they couldn’t inventory. Modern AppSec programs now assume teams can identify the components their software depends on and track vulnerabilities within them.
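To make that concrete, here is a minimal sketch of the kind of component inventory SBOM tooling automates for traditional software: enumerating installed packages and versions so they can be matched against vulnerability advisories. It uses only the Python standard library and is an illustration of the idea, not a real SBOM generator.

```python
# Minimal illustration of the idea behind dependency inventories: enumerate
# installed packages and versions so they can be checked against advisories.
# Standard library only; a sketch of the concept, not a real SBOM generator.
from importlib.metadata import distributions

def inventory_python_packages() -> dict[str, str]:
    """Return {package_name: version} for every installed distribution."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in distributions()
        if dist.metadata["Name"]
    }

if __name__ == "__main__":
    for name, version in sorted(inventory_python_packages().items()):
        print(f"{name}=={version}")
```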

AI systems break that assumption.

A typical AI deployment doesn’t just include application code and open source libraries. It may depend on pretrained models, model weights, training datasets, machine learning frameworks, GPU acceleration libraries, and specialized tooling embedded inside development pipelines. Many of those components are inherited through environments, frameworks, or model repositories — not explicitly chosen through dependency management systems.

As a result, they often don’t appear where AppSec tools normally look. That blind spot is widening alongside rapid adoption: nearly 80% of organizations report broad use of commercial AI tools, while 56.7% are training open-weight models on internal datasets.
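One quick way to see the gap is to look for the artifacts a package-level scan never touches. The sketch below, offered as an illustration rather than a vetted tool, walks a directory tree for common weight-file extensions and fingerprints each artifact; the scan root and extension list are assumptions to adjust for your own pipelines.

```python
# Illustrative sketch: surface the artifacts a package-level scan never sees,
# such as model weight files inherited through caches or shared environments.
# The scan root and extension list are assumptions; adjust for your pipelines.
import hashlib
from pathlib import Path

WEIGHT_EXTENSIONS = {".safetensors", ".pt", ".bin", ".onnx", ".gguf"}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so multi-gigabyte weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def find_model_artifacts(root: Path):
    """Yield (path, sha256) for every weight-like file under a directory tree."""
    for path in root.rglob("*"):
        if path.is_file() and path.suffix in WEIGHT_EXTENSIONS:
            yield path, sha256_of(path)

if __name__ == "__main__":
    scan_root = Path.home() / ".cache"  # illustrative starting point
    for path, digest in find_model_artifacts(scan_root):
        print(f"{digest[:12]}  {path}")
```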


Leadership may assume existing security coverage extends to AI systems. Security teams know large parts of the stack remain opaque. The confidence gap in our report reflects that difference directly.

AI development also runs on a large amount of implicit trust. Teams routinely rely on widely used machine learning frameworks, model repositories, GPU toolchains, and preconfigured development environments. These components are typically treated as foundational infrastructure — not as software dependencies that need scrutiny. That reliance is growing: about 29% of organizations already report tuning their own models, layering in additional dependencies across training data, frameworks, and compute infrastructure.
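In practice, the pattern often looks like the snippet below, which uses the huggingface_hub client as one common example. The repository name and revision are hypothetical placeholders; the point is the contrast between pulling whatever a model repository currently serves and pinning an exact, inventoriable revision.

```python
# Illustration of the implicit-trust pattern, using the huggingface_hub client
# as one common example. The repository name and revision are hypothetical.
from huggingface_hub import hf_hub_download

# Common pattern: fetch whatever the model repository currently serves.
# The repo is trusted the way infrastructure is trusted -- implicitly.
weights = hf_hub_download(
    repo_id="example-org/example-model",   # hypothetical repository
    filename="model.safetensors",
)

# Treating the model as a dependency instead: pin an exact revision so the
# artifact can be inventoried, reviewed, and reproduced like any other component.
pinned_weights = hf_hub_download(
    repo_id="example-org/example-model",
    filename="model.safetensors",
    revision="a1b2c3d",                    # placeholder commit hash
)
```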

Risk buried in the stack

Security history is pretty consistent on this point: infrastructure layers are often where high-impact vulnerabilities surface. And organizations aren’t confident they’ve got a handle on even the compliance basics. Ninety-three percent say they have room for improvement in understanding licensing, IP, and usage obligations tied to AI models and datasets.

In many environments, security teams may not even know these components are present, let alone have visibility into vulnerabilities within them. When issues emerge at that layer, they can affect large portions of the AI pipeline without triggering a single traditional security control.

This is exactly what AppSec practitioners are reacting to when they report lower confidence in AI security coverage.

Executives, meanwhile, are often seeing different signals. Organizations may have launched AI governance initiatives, introduced policies covering AI systems, or incorporated AI risks into broader compliance frameworks. Those efforts reflect real awareness of the challenge.

But governance doesn’t automatically translate into artifact-level visibility. The security of an AI system ultimately depends on the components it relies on, and many organizations are still working out how to inventory and track those components.

Software supply chain security followed a similar path. For years, organizations assumed their software stacks were secure — until Log4j exposed how little visibility existed into underlying dependencies. Only then did practices like SBOM generation and dependency monitoring become standard.

AI ecosystems appear to be at an earlier point in that same arc.

The hunt for solutions

Organizations looking to close the gap should start with a few basic questions. Do we maintain an inventory of the models running in production? Can we identify the frameworks, runtimes, and infrastructure components those models depend on? Do we have a way to track vulnerabilities within those dependencies over time?

If the answers aren’t clear, the problem isn’t just AI security coverage. It’s that significant portions of the AI supply chain may still be invisible to the teams responsible for securing them.
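Answering those questions usually starts with a very simple artifact: a per-model record that names the deployed weights, their source, and the components they depend on. The sketch below shows one possible shape for such a record; the field names and values are illustrative, not a formal AI-BOM schema.

```python
# One possible shape for a per-model inventory record that answers the three
# questions above. Field names and values are illustrative, not a formal schema.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str                       # internal model identifier
    weights_sha256: str             # fingerprint of the deployed weights artifact
    source: str                     # model repository or internal registry reference
    frameworks: dict[str, str]      # framework name -> pinned version
    runtime: str                    # serving runtime or container image
    datasets: list[str] = field(default_factory=list)
    open_advisories: list[str] = field(default_factory=list)  # CVE/advisory IDs

inventory = [
    ModelRecord(
        name="ticket-classifier",                        # hypothetical model
        weights_sha256="3f5a9c...",                      # truncated for readability
        source="internal-registry/ticket-classifier:v4",
        frameworks={"torch": "2.3.1", "transformers": "4.44.0"},
        runtime="inference-api container image, 2025-11 build",
        datasets=["tickets-2025-q3-deidentified"],
    ),
]

# Tracking vulnerabilities over time then reduces to periodically diffing each
# record's frameworks and runtime against advisory feeds and updating the list.
flagged = [m.name for m in inventory if m.open_advisories]
print(f"{len(inventory)} models inventoried, {len(flagged)} with open advisories")
```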

The 80/40 split in our report reflects that reality. Executives see coverage. Practitioners see the parts of the AI stack that remain hidden.

Confidence in security programs matters. But confidence without visibility is fragile.

Before organizations can secure AI systems, they first need to understand the software supply chains those systems depend on.

About the essayist: Daniel Bardenstein is CEO and co-founder of Manifest, where he focuses on making software and AI supply chains more transparent and secure. Before Manifest, he served as Chief of Tech Strategy at CISA and led cybersecurity efforts at the Defense Digital Service, including Hack the Pentagon.

March 20th, 2026 | Guest Blog Post | Top Stories

*** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/guest-essay-executives-trust-ai-security-even-as-security-teams-confront-blind-spots-new-risks/

