FIRESIDE CHAT: Leaked secrets are now the go-to attack vector — and AI is accelerating exposures

The post FIRESIDE CHAT: Leaked secrets are now the go-to attack vector — and AI is accelerating exposures appeared first on The Last Watchdog.

By Byron V. Acohido

A consequential shift is underway in how enterprise breaches begin. The leaked credential — once treated as a hygiene problem — has become the primary on-ramp.

Last August’s Salesloft campaign was the pattern in miniature. Stolen OAuth tokens from one chatbot vendor pulled Salesforce data from 760 enterprise instances — Cloudflare, Cisco, Palo Alto Networks, and TransUnion among them, according to Mandiant. Google’s Threat Intelligence Group reported the primary intent: credential harvesting, each stolen key the path into the next victim.

That is the shape of the modern enterprise breach, says Dwayne McDaniel, senior developer advocate at non-human identity security firm GitGuardian, whom I interviewed at RSAC 2026. Each leaked credential, he explained, is a key to a door behind which sit more keys.

Leaks spiking

GitGuardian scans every public GitHub commit — every new batch of developer code published to a shared repository — for hard-coded secrets: credentials typed directly into source code. Its latest report documented 28.6 million such exposures in 2025 alone — a 34 percent year-over-year jump, the largest in five years. Private repositories leaked at six times that rate.
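GitGuardian's detection engine is proprietary, but the core idea of secret scanning can be sketched in a few lines. The patterns and names below are illustrative assumptions, not GitGuardian's actual detectors; real scanners combine hundreds of vetted patterns with entropy analysis and live-credential validation:

```python
import re

# Illustrative credential shapes only -- real scanners use far more detectors.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_token": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "generic_api_key": re.compile(
        r"(?i)\b(api[_-]?key|secret)\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]"
    ),
}

def scan_commit(diff_text: str) -> list:
    """Return (detector_name, matched_text) pairs found in a commit diff."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(diff_text):
            findings.append((name, match.group(0)))
    return findings

# A hard-coded key in a committed file trips the scanner immediately.
print(scan_commit('API_KEY = "sk_live_0123456789abcdefghij"'))
```

The hard part, as the numbers above show, is not this detection step — it is getting the flagged credential revoked afterward.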

And 64 percent of the credentials leaked in 2022 remain active today. GitGuardian emails developers the moment an exposed credential hits GitHub. The alerts go out. The credentials are rarely revoked. This is not a detection problem. It is a remediation problem.

AI is steepening the curve. Eight of the ten fastest-growing leaked-secret categories in 2025 traced directly to AI infrastructure. OpenRouter API keys — used to wire large language models into applications — jumped 48x year over year. DeepSeek keys were up 23x.

What’s a developer?

Developers are no longer alone in producing that code. GitGuardian found that commits where Claude Code served as co-signer — meaning a developer let AI complete the cycle without review — contained secrets at a 33 percent rate. The baseline across all commits: 1.5 percent.

McDaniel’s own boss, a chief marketing officer, is now producing code. Executives and marketing leaders with no programming background are building production systems with live credentials embedded, because the tools make it that easy. “The developer becomes the AI,” McDaniel told me, “and you become the production manager.”
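The failure mode is simple to picture. Below is a hedged sketch of the anti-pattern AI assistants often emit — a live key typed into source — next to the minimal safer habit. The variable name and key value are invented for illustration:

```python
import os

# Anti-pattern: a live credential hard-coded into source. Once committed,
# it lives in repository history forever, even if later deleted.
#
#   OPENROUTER_API_KEY = "sk-or-v1-0000..."   # never do this

def get_api_key() -> str:
    """Read the key from the environment instead of hard-coding it."""
    key = os.environ.get("OPENROUTER_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENROUTER_API_KEY is not set; export it or use a secrets manager"
        )
    return key
```

An environment variable is only the floor, not the ceiling — a secrets manager with rotation is the stronger habit — but it keeps the credential out of the commit entirely, which is where the 28.6 million exposures above originate.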

The fix isn’t purely technical. Gartner’s 2026 IAM Summit flagged machine identity as among the least mature areas in enterprise security. Workload identity frameworks like SPIFFE, in production at Uber and State Farm, are replacing API keys one system at a time. The tools exist.
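What frameworks like SPIFFE change is credential lifetime: workloads receive short-lived, automatically rotated identities instead of static keys. The toy token below is not the SPIFFE protocol — the signing key, TTL, and format are invented for illustration — but it shows the expiry principle that makes a leaked credential die on its own:

```python
import base64
import hashlib
import hmac
import time

SIGNING_KEY = b"demo-signing-key"  # illustrative; real systems use a managed CA

def mint_token(workload_id: str, ttl_seconds: int = 300) -> str:
    """Issue a signed token that expires, unlike a static API key."""
    expires = str(int(time.time()) + ttl_seconds)
    payload = f"{workload_id}|{expires}".encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token: str):
    """Return the workload id if the token is authentic and unexpired."""
    try:
        payload_b64, sig = token.rsplit(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64)
    except Exception:
        return None
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    workload_id, expires = payload.decode().split("|")
    if time.time() > int(expires):
        return None  # a stolen copy is worthless within minutes
    return workload_id
```

Contrast this with the Salesloft pattern above: the stolen OAuth tokens stayed valid long enough to pivot across hundreds of tenants. Short-lived identity shrinks that window to minutes.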

Governance needed

What has to come first is the governance conversation. Some forty regulatory standards already name the requirement plainly, McDaniel notes: unauthorized access. Who can reach what, can you prove it, and what did you do about it? That framing doesn’t require new vocabulary. It requires the conversation to happen.

Organizations that treat AI-assisted code as finished work will ship faster this quarter. They will also ship the next class of exposures buried inside it.

Yet McDaniel is hopeful. Standards bodies have stopped treating credential abuse as tomorrow’s problem. The IETF, the CNCF, and the OpenID Foundation have active work on machine identity, workload authentication, and agentic AI governance. The tools are arriving. Whether governance arrives in time is the open question.

For a full drill down, please give a listen to the accompanying Fireside Chat podcast.

I’ll keep watch and keep reporting.

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: I used Claude and ChatGPT to assist with research compilation, source discovery, and early draft structuring. All interviews, analysis, fact-checking, and final writing are my own. I remain responsible for every claim and conclusion.)

April 27th, 2026 | RSAC | Top Stories

*** This is a Security Bloggers Network syndicated blog from The Last Watchdog authored by bacohido. Read the original post at: https://www.lastwatchdog.com/fireside-chat-leaked-secrets-are-now-the-go-to-attack-vector-and-ai-is-accelerating-exposures/

