Securing Your AI Supply Chain: Your AI Is Running, But You Don’t Know What It’s Doing

You passed your security audit. SAST came back clean. SCA found no critical vulnerabilities. Secrets scanning turned up nothing. Your release moved forward with confidence. 

Then, weeks later, leadership asks: “Are we using AI in any of our applications?” 

Honestly? No one knows. 

Somewhere in your codebase, invisible to every tool you have, an application is calling a hosted LLM service. An agent framework arrived through a dependency. Prompts are loading from runtime configuration. Embeddings are being sent to a vector store. 

None of it shows up in your SBOM. None of it is on anyone’s radar. 
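To see why this stays invisible, consider a minimal sketch (all endpoint, config-key, and model names below are hypothetical) in which the only trace of an LLM is a generic HTTP payload assembled from runtime configuration. No package in the dependency manifest says "AI", so SCA and SBOM tooling have nothing to match on:

```python
import json

def build_llm_request(config: dict, user_input: str) -> dict:
    """Assemble a chat-completion-style payload from runtime config.

    The endpoint, model name, and system prompt all arrive via config
    loaded at runtime, so no static scan of the dependency manifest
    reveals that this downstream service is an LLM.
    """
    return {
        "url": config["inference_endpoint"],  # e.g. a hosted LLM API
        "payload": {
            "model": config.get("model", "default"),
            "messages": [
                {"role": "system", "content": config.get("system_prompt", "")},
                {"role": "user", "content": user_input},
            ],
        },
    }

config = {
    "inference_endpoint": "https://api.example-llm.com/v1/chat",
    "model": "example-chat-model",
    "system_prompt": "You are a support assistant.",
}
req = build_llm_request(config, "Where is my order?")
print(json.dumps(req["payload"], indent=2))
```

Because the AI dependency lives in configuration rather than in code or a lockfile, discovering it requires scanning behavior and config, not just manifests.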

This isn’t a failure of your security team. It’s a structural gap. 

The Supply Chain is Changing (Again) 

For years, traditional AppSec protected a predictable set of things: application code, open-source packages, secrets, containers, and infrastructure. SAST, SCA, and vulnerability management were all built for that world. 

Then AI became a production dependency. 

More than 75% of enterprises are already embedding LLMs, AI SDKs, and AI services directly into their applications. But the security and governance programs designed to manage software haven’t caught up. 

Modern applications now depend on: 

  • Hosted AI services (LLM APIs) 
  • AI frameworks and SDKs 
  • Agent code and MCP servers 
  • Prompts and datasets 
  • Embeddings and vector stores 

These don’t behave like traditional dependencies: 

  • A model can be safe in testing and unsafe under real-world prompts 
  • A prompt can quietly change system behavior without changing application logic 
  • An MCP tool can expand execution capability beyond what developers intended 
  • A service provider can change data retention terms without a code change 

Traditional AppSec tools don’t detect these risks because they weren’t designed to. They can’t assess model poisoning, unverified weights, unsafe adapters, malicious MCP servers, or licensing violations.  
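One control that does carry over from classic supply chain security is artifact pinning: refusing to load model weights whose digest doesn't match an approved value. A minimal sketch (the file name is hypothetical, and the pinned digest here is simply the SHA-256 of the placeholder bytes used in the demo; real pins would come from a signed registry or an AI-BOM):

```python
import hashlib
import tempfile
from pathlib import Path

# Pinned digests for approved model artifacts. The value below is the
# SHA-256 of the placeholder bytes b"test" used in the demo; in practice
# pins would come from a signed registry or your AI-BOM.
APPROVED_WEIGHTS = {
    "sentiment-model-v2.bin":
        "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def verify_model(path: Path) -> bool:
    """Load-time gate: True only if the file's SHA-256 matches its pin."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = APPROVED_WEIGHTS.get(path.name)
    return expected is not None and digest == expected

weights = Path(tempfile.mkdtemp()) / "sentiment-model-v2.bin"
weights.write_bytes(b"test")   # stand-in for real weight bytes
print(verify_model(weights))   # True: digest matches the pin
weights.write_bytes(b"tampered")
print(verify_model(weights))   # False: artifact was modified
```

Pinning catches swapped or tampered artifacts, but note that it does nothing for risks that don't change the bytes, such as a provider silently changing its data retention terms.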

None of these are hypothetical. They’re showing up in real pipelines, real codebases, and real compliance conversations, often without anyone realizing it. 

At the same time, regulatory pressure is mounting. The EU AI Act, ISO 42001, and other frameworks are creating real accountability for AI governance. Yet most organizations lack even a basic AI asset inventory, let alone the ability to demonstrate compliance. 

The Hidden Threats in Your AI Dependencies 

Below are 10 prominent AI supply chain risks validated by OWASP LLM03:2025 (the industry standard) and our own Checkmarx Zero research team. 

These risks reflect where visibility gaps typically become security gaps in this new supply chain structure: 

Group A: Trust & Provenance 
Poisoned models, fake models, abandoned models, vulnerable AI packages: risks tied to where models actually come from and whether you can trust them. 

Group B: Modification & Fine-Tuning 
Malicious adapters, model merge exploits: risks introduced when models are altered without visibility. 

Group C: Deployment Risks 
Mobile and edge model attacks, where compromised models are embedded outside standard update mechanisms. 

Group D: MCP Supply Chain 
Tool poisoning, compromised dependencies, shadow MCP servers, and unauthorized integrations that expand what AI can actually do. 

Group E: Governance & Exposure 
Licensing violations, unclear terms of service, and privacy policy drift that quietly changes how your data is used. 

Each reflects a different failure mode: compromised artifacts, unmanaged modifications, invisible deployments, unauthorized connections, and untracked obligations. 

Where Does Your Organization Actually Stand? 

Most security teams assume they’re at least partially aware of their AI exposure. In practice, the answer is usually Stage 1: Unknown. There’s no inventory, no policy enforcement, and no audit trail, just scattered usage across repos and environments. 

Getting from Unknown to Governed isn’t a single leap. It’s a defined progression: from discovery, to control, to compliance-ready reporting. Understanding where you sit today is the prerequisite to knowing what to do next. 

Visibility First, Then Everything Else  

What connects all these risks is something simple: if you don’t know an AI component exists in your software, you can’t assess it, govern it, or protect against what it might do. 

This requires building what didn’t exist before: an AI-BOM, an inventory that captures what AI is running in your applications and what that implies for risk and compliance. 
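To make the idea concrete, here is what one AI-BOM entry might capture, sketched as a plain Python record. The fields are illustrative, not a standard; CycloneDX's ML-BOM profile defines a standardized format along these lines:

```python
from dataclasses import dataclass, field

@dataclass
class AIBomEntry:
    """One AI component in an application's AI-BOM (illustrative fields)."""
    name: str
    component_type: str  # e.g. "hosted-service", "model", "mcp-server"
    provider: str
    version: str
    data_sent: list = field(default_factory=list)  # categories of data shared
    license: str = "unknown"
    approved: bool = False   # passed internal review / on the allowlist

# Hypothetical inventory for one application.
inventory = [
    AIBomEntry("chat-completions", "hosted-service", "ExampleAI", "v1",
               data_sent=["user messages"], license="ToS 2025-01", approved=True),
    AIBomEntry("embeddings-model", "model", "example/embed", "0.3",
               data_sent=["document text"], approved=False),
]

# A simple governance check: flag anything not yet approved.
unapproved = [e.name for e in inventory if not e.approved]
print(unapproved)  # ['embeddings-model']
```

Even a record this small answers the questions leadership actually asks: what AI is in use, who provides it, what data it sees, and whether anyone signed off on it.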

Building one requires four capabilities: 

  1. Discover AI assets across code and configuration 
  2. Assess AI-specific risks (not just CVEs) 
  3. Control through policy enforcement and approved registries 
  4. Report compliance-ready documentation 
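Step 1, discovery, can start small. A sketch that scans Python source for imports of well-known AI SDKs; the package list here is a partial, illustrative allowlist, not an exhaustive signature set:

```python
import ast
from pathlib import Path

# Partial, illustrative list of packages that indicate AI usage.
AI_PACKAGES = {"openai", "anthropic", "langchain", "transformers", "chromadb"}

def find_ai_imports(source: str) -> set:
    """Return the AI-related top-level packages imported by one module."""
    hits = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        hits |= {n.split(".")[0] for n in names} & AI_PACKAGES
    return hits

def scan_repo(root: Path) -> dict:
    """Map each .py file under root to the AI packages it imports."""
    return {str(p): hits
            for p in root.rglob("*.py")
            if (hits := find_ai_imports(p.read_text(errors="ignore")))}

print(find_ai_imports("import openai\nfrom langchain.chains import LLMChain"))
# contains 'openai' and 'langchain'
```

Import scanning only catches explicitly imported SDKs; configuration-driven API calls and transitive dependencies need complementary detection, which is why discovery is a capability rather than a one-off script.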

AI is already embedded in your stack, whether you know it or not. The goal isn’t to slow adoption; it’s to bring the same AppSec discipline to AI dependencies that teams already apply to everything else they ship. 

That starts with visibility.  

Want to go deeper?  

We’ve put together a full breakdown of the threat landscape with all 10 risk categories, real-world examples, and the controls mapped to each. Beyond that, the guide walks through a practical AI Supply Chain Maturity Model so you can identify where your organization stands today, a side-by-side comparison of traditional SBOMs vs. AI-BOMs, and a two-floor security architecture showing what to preserve from your existing AppSec program and what to add on top of it. 

Read it now  

Tags:

ADLC

Agentic AI

Software Supply Chain

SSCS


Source: https://checkmarx.com/ai-llm-tools-in-application-security/securing-your-ai-supply-chain-your-ai-is-running-but-you-dont-know-what-its-doing/