SHARED INTEL Q&A: Why Data Bill of Materials (DBOM) is surfacing as a crucial tool to secure AI

By Byron V. Acohido

Enterprises hustling to embed AI across their operations came to an uncomfortable realization in 2025: they had lost track of the data powering those systems.

Related: The case for SBOM

Few paused to map where sensitive data lived or how it moved. That oversight is now surfacing in audits, breach incidents, and regulatory pressure.

According to field research from Bedrock Security, most IT and security leaders still lack foundational visibility into the datasets fueling training and inference.

“You can’t govern retroactively,” says Bruno Kurtic, co-founder and CEO of Bedrock Security. “You need controls before the model runs, not after.”

Last Watchdog caught up with Kurtic to understand what this shift means in practical terms. Here’s an edited version of that conversation.

LW: Why did 2025 mark a turning point for AI governance?

Kurtic: From 2023 through early 2025, companies moved fast. AI projects were spinning up across every business unit. Boards pushed, competitors pushed harder, and nobody wanted to fall behind.

That pace hit a wall by mid-2025. Teams started asking, “What data is actually feeding these systems?” More often than not, no one had a clear answer. Models were already in production. Agents were accessing data across cloud and on-prem environments. But there was no record of what went in or came out.

This wasn’t hypothetical anymore. It showed up in board discussions, audits, and regulatory reviews.

The bigger issue is structural. Data volume is growing much faster than security budgets. Most companies never fully mapped their sensitive data—let alone tracked what’s entering AI pipelines. That forced a shift in mindset. The question moved from “How fast can we go?” to “How do we move responsibly without losing control?”

LW: What’s the actual risk when organizations can’t see what’s feeding their models?

Kurtic: These risks are no longer theoretical. One biotech company we worked with discovered personal data in a training set. They didn’t catch it until after deployment. At that point, it was too late—the exposure was permanent.

It also creates a serious accountability problem. When a regulator asks, “What data was used?” companies need a clear answer. If they don’t have one, the incident becomes a governance failure.

That’s why governance has to start before the data enters the pipeline. Once it flows into an AI workflow, it’s incredibly difficult to undo. The only way to stay in control is to govern at the source.
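
To make “govern at the source” concrete, here is a minimal Python sketch of a pre-ingestion gate that classifies each record and blocks anything carrying an unapproved sensitivity label before it reaches a training or RAG pipeline. The patterns, labels, and function names are illustrative assumptions, not a description of any particular product.

```python
import re

# Hypothetical patterns and labels for illustration only; a real deployment
# would rely on a dedicated classification engine, not hand-written regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_record(record: str) -> set[str]:
    """Return the sensitivity labels detected in a single record."""
    return {label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(record)}

def gate_for_pipeline(records: list[str], allowed: frozenset[str] = frozenset()) -> list[str]:
    """Admit only records whose labels are explicitly allowed into the AI pipeline.

    Anything carrying an unapproved label is blocked before ingestion,
    which is the point at which it can still be governed.
    """
    admitted = []
    for record in records:
        violations = classify_record(record) - allowed
        if violations:
            # Block and log instead of silently ingesting sensitive data.
            print(f"BLOCKED at source: record carries {violations}")
            continue
        admitted.append(record)
    return admitted

if __name__ == "__main__":
    sample = [
        "Quarterly revenue grew 4 percent year over year.",
        "Contact jane.doe@example.com about claim 123-45-6789.",
    ]
    print(gate_for_pipeline(sample))  # only the first record is admitted
```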

LW: What is a Data Bill of Materials, and why is it showing up now?


Kurtic: A Data Bill of Materials—DBOM—is like an ingredient label for a model. It documents what went into training, fine-tuning, and inference. It tracks where the data came from, how it was classified, and how it was processed.

It’s showing up now because companies are shifting from experiments to production AI. That raises new questions. Can this model access PII? Are we violating policy? Can we spot drift into unauthorized use?

Without a DBOM, these are hard to answer. With one, they become part of normal operations.
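
As a rough illustration of what a DBOM might capture (a hypothetical sketch, not a published schema or any vendor’s format), the Python below records one entry per dataset, with its source, pipeline stage, classification labels, processing steps, and a content fingerprint, then serializes the whole bill to JSON. The model name, bucket path, and field names are invented for the example.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class DatasetEntry:
    """One 'ingredient' in the bill: a dataset that fed training, fine-tuning, or inference."""
    name: str
    source: str                 # where the data came from (system, bucket, vendor)
    stage: str                  # "training", "fine-tuning", or "inference"
    classification: list[str]   # sensitivity labels assigned before ingestion
    processing: list[str]       # transformations applied (e.g., de-identification)
    content_hash: str           # fingerprint of the exact snapshot that was used

@dataclass
class DataBillOfMaterials:
    model_name: str
    model_version: str
    generated_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    datasets: list[DatasetEntry] = field(default_factory=list)

    def add_dataset(self, entry: DatasetEntry) -> None:
        self.datasets.append(entry)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Usage: record what actually went into a model before it ships.
dbom = DataBillOfMaterials(model_name="claims-assistant", model_version="1.2.0")
snapshot = b"...bytes of the training snapshot..."
dbom.add_dataset(DatasetEntry(
    name="claims-2024",
    source="s3://warehouse/claims/2024/",
    stage="fine-tuning",
    classification=["internal", "pii-removed"],
    processing=["de-identification", "deduplication"],
    content_hash=hashlib.sha256(snapshot).hexdigest(),
))
print(dbom.to_json())
```

The content hash ties the bill to the exact snapshot that was used, which is what makes it possible to give a clear answer when someone asks what data fed the model.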

Regulators are pushing in this direction, but a lot of the momentum is internal. Companies are realizing they can’t govern what they can’t see.

LW: Where are companies still going wrong on AI governance?

Kurtic: A big mistake is treating governance like a checkbox. Policies alone don’t protect anything. You need real-time visibility—where data lives, who’s using it, how it moves. Otherwise, the policy becomes a liability.

There’s also too much tool dependency. Teams lean on SIEMs, DLPs, CNAPPs. But those tools weren’t built to understand data sensitivity. They generate alerts but miss context, which leads to noise and fatigue.

Ingestion is another blind spot. Teams are moving production data into development, spinning up agents, and integrating RAG systems at speed. Sensitive data slips in unnoticed. Shadow AI is real. Tools get used without approval, and data flows in ways no one is tracking.

The core misunderstanding is this: governance doesn’t start at analysis. It starts at collection.

LW: With regulations tightening, how prepared are enterprises for scrutiny?

Kurtic: In the U.S., the SEC is sharpening its expectations. It’s no longer enough to say, “We used AI.” Companies must show what data was used, how it was handled, and how it shaped decisions.

Most of today’s infrastructure isn’t built for that. Traditional DSPM tools can classify data but can’t trace it through AI workflows or validate controls in real time. That’s a major gap—and a growing regulatory risk.

LW: What happens when AI agents start operating autonomously across environments?

Kurtic: It changes the game. Humans operate at a pace that’s manageable. AI agents don’t. They can issue hundreds of queries a minute, span multiple platforms, and chain actions together with no oversight.

The speed is one issue. The autonomy is another. Agents don’t just consume data. They generate new data. And sometimes that includes hallucinations—false or fabricated information. If those outputs enter operational systems or official reports, the risk multiplies.

Security teams need to shift their thinking. It’s not about controlling the agent. It’s about controlling the data pathways they access. That’s how you align behavior with policy and reduce downstream harm.
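
A minimal sketch of that idea, with hypothetical agent names and labels: the policy check sits on the data pathway rather than inside the agent, so it holds no matter how many requests per minute the agent issues or how many platforms it spans.

```python
# Hypothetical policy table mapping agent identities to the data classifications
# they may read. In practice this would live in a policy engine, not in code.
AGENT_POLICY: dict[str, set[str]] = {
    "support-bot": {"public", "internal"},
    "finance-analyst-agent": {"public", "internal", "financial"},
}

def authorize_data_access(agent_id: str, dataset_labels: set[str]) -> bool:
    """Allow the request only if every label on the dataset is permitted for this agent."""
    allowed = AGENT_POLICY.get(agent_id, set())
    return dataset_labels <= allowed

# The check is enforced on the data pathway, so it applies uniformly
# whether the agent makes one request an hour or hundreds a minute.
print(authorize_data_access("support-bot", {"internal"}))         # True
print(authorize_data_access("support-bot", {"internal", "pii"}))  # False: pathway blocked
```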

LW: What’s the one priority leaders should focus on for 2026?

Kurtic: Make governance operational. Start at the data layer. Build systems that provide real-time visibility into where data lives, how it flows, and which agents touch it.

Once that’s in place, you can apply guardrails automatically. You can document what data influences your models. You move from reactive audits to live oversight.

That’s how you scale AI—and stay accountable.


Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.


(LW provides consulting services to the vendors we cover.)


