112 or 22 to 2: Who Moved the Vulnerability Cheese?
2026-03-09 09:25:58 | securityboulevard.com

Here is a set of numbers worth thinking about. 112. 22. 2.

Anthropic recently ran its Claude model against the Firefox codebase, and the model flagged 112 possible bugs during a short audit exercise. After engineers reviewed the findings, 22 were confirmed as real vulnerabilities and only two were actually exploitable. Other reporting noted that the experiment surfaced more high-severity issues in weeks than researchers often report in months.

Those numbers tell an important story. The headline is not that AI can find bugs. The headline is what happens when the cost of finding them collapses.

Security professionals already understand the funnel. At the top are things that might be bugs. Suspicious patterns. Edge cases. Code paths that could fail under the right conditions. That is where automated analysis shines. AI can scan enormous codebases and flag thousands of issues that deserve a closer look. But the work does not stop there. Someone still has to determine whether the issue is real, whether it creates a vulnerability and whether an attacker can actually exploit it.

For years, vulnerability discovery was expensive. Skilled researchers spent hours or days digging through code, testing assumptions and building proof-of-concept exploits. The scarcity of that expertise shaped the entire vulnerability ecosystem. Bug bounty programs, security response teams and disclosure workflows all evolved around the idea that vulnerability discovery would be slow and relatively rare.

AI changes that equation. When a machine can review massive codebases in minutes and generate long lists of possible flaws, discovery stops being the scarce resource. The bottleneck moves somewhere else. Security teams are no longer struggling to find vulnerabilities. They are struggling to decide which ones matter.

The shift is already accelerating. A recent analysis from the Cloud Security Alliance describes how the industry is moving beyond simple AI assistants toward autonomous multi-agent penetration testing systems capable of performing reconnaissance, scanning, exploitation and validation with minimal human intervention. These systems can run through attack scenarios in minutes that would have taken human testers many hours or even days. 

Commercial systems are already demonstrating the economics of this shift. Some AI penetration testing platforms report completing security challenges in minutes that took experienced human testers dozens of hours.

The vulnerability lifecycle was not designed for that world.

Traditionally, the process followed a predictable rhythm. A researcher would study a system, identify a possible flaw, test it and eventually submit a report. Vendors would reproduce the issue, validate it, assign a severity score and work toward a fix. Discovery was the hardest and most time-consuming part of the process.

AI changes the first step of that lifecycle. Instead of one researcher combing through a codebase, automated systems can review thousands of files in minutes and generate large volumes of findings. That does not mean all of those findings are real vulnerabilities. The Firefox example shows exactly how the funnel works. One hundred and twelve possible bugs were narrowed to twenty-two real vulnerabilities and ultimately two exploitable ones.
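The funnel arithmetic is worth making explicit, because the ratios are what downstream teams actually live with. A quick check of the reported Firefox numbers:

```python
# The Firefox audit funnel, as reported: 112 flagged -> 22 confirmed -> 2 exploitable.
flagged, confirmed, exploitable = 112, 22, 2

confirmation_rate = confirmed / flagged        # fraction of AI findings that were real
exploitability_rate = exploitable / confirmed  # fraction of real bugs an attacker could use
signal_rate = exploitable / flagged            # fraction of raw findings that truly mattered

print(f"confirmed of flagged:      {confirmation_rate:.1%}")    # 19.6%
print(f"exploitable of confirmed:  {exploitability_rate:.1%}")  # 9.1%
print(f"exploitable of flagged:    {signal_rate:.1%}")          # 1.8%
```

Roughly one in five flagged issues was real, and fewer than one in fifty actually mattered to an attacker. Every finding in between still cost human review time.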

The top of the funnel just got a lot wider.

What happens next becomes the real challenge for security teams. The industry built its processes around a manageable flow of vulnerability reports. When AI makes discovery cheap and constant, the bottleneck moves downstream. Verification, prioritization and remediation become the hard parts of the lifecycle.

If this sounds theoretical, it is not. Many organizations are already dealing with the pressure. Bug bounty programs and security disclosure inboxes have been coping with rising volumes of vulnerability reports for years. Add AI-assisted discovery to the mix and the volume quickly becomes overwhelming. Security teams have a limited amount of time to review submissions, reproduce issues and determine whether they represent real risk.
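A back-of-the-envelope queue makes that pressure concrete. The rates below are hypothetical, chosen only to illustrate the dynamic: whenever findings arrive faster than a team can validate them, the backlog grows linearly and never recovers on its own.

```python
# Hypothetical rates, for illustration only: AI-assisted discovery submits
# findings faster than the team can triage them.
incoming_per_week = 100   # findings submitted (assumed)
reviewed_per_week = 25    # findings the team can validate (assumed)

backlog = 0
for week in range(1, 13):
    backlog += incoming_per_week - reviewed_per_week

print(backlog)  # 900 unreviewed findings after 12 weeks
```

With these assumed rates, a quarter of steady AI-assisted submissions leaves 900 unreviewed findings in the inbox, regardless of how good any individual finding is.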

That is why some vendors have quietly tightened their vulnerability reporting requirements. Others require full proof-of-concept exploits before accepting submissions. A few have stopped accepting unsolicited bug reports entirely. It is not because they do not care about security. It is because the signal-to-noise ratio has become difficult to manage.

None of this means the answer is to slow down vulnerability discovery. With AI scanning code, infrastructure and applications at machine speed, that is probably not even possible anymore. The volume of findings will only increase.

What we are seeing instead looks like a classic bottleneck problem. Eliyahu Goldratt described this dynamic in The Goal, his well-known book on the Theory of Constraints. In any complex system, there is always one step that limits the overall throughput. Improve that step and the constraint moves somewhere else.
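Goldratt's idea can be sketched in a few lines. The stage capacities below are invented for illustration, not measured, but they show the mechanic the rest of this article describes: throughput is capped by the slowest stage, and improving that stage just hands the constraint to the next one.

```python
# Hypothetical weekly capacities for each stage of the vulnerability
# lifecycle. The numbers are illustrative, not measured.
capacities = {
    "discovery": 5,
    "validation": 20,
    "prioritization": 30,
    "remediation": 40,
}

def bottleneck(caps):
    """The stage that limits overall throughput (Theory of Constraints)."""
    return min(caps, key=caps.get)

def throughput(caps):
    """End-to-end flow is capped by the slowest stage."""
    return min(caps.values())

print(bottleneck(capacities), throughput(capacities))  # discovery 5

capacities["discovery"] = 500   # AI makes discovery cheap and constant
print(bottleneck(capacities))   # validation -- the constraint has moved

capacities["validation"] = 200  # automate validation next
print(bottleneck(capacities))   # prioritization
```

Each automation step raises one stage's capacity by an order of magnitude, yet end-to-end throughput only ever rises to the level of the next-slowest stage.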

For years, the constraint in vulnerability management was discovery. AI just removed much of that constraint.

The next bottleneck is validation. Security teams must determine which findings represent real vulnerabilities, which are exploitable and which can safely be ignored. In other words, they must figure out which of the 112 possible bugs actually matter.

The logical response is not to slow the machines down. It is to apply the same intelligence to the next stage of the process. AI systems can attempt controlled exploitation, reproduce vulnerabilities automatically and generate proof-of-concept attack paths. Instead of simply finding potential flaws, they can help security teams quickly determine which findings represent real risk.

Once validation becomes more automated, the constraint will move again. The next bottleneck will likely be prioritization and remediation. Security teams already struggle with vulnerability backlogs that number in the thousands. Even when a vulnerability is confirmed, coordinating the fix across development, operations and security teams can take far longer than discovering the issue.

Seen through that lens, the Firefox example starts to look less like a novelty and more like a preview of where the industry is heading. One hundred and twelve possible bugs. Twenty-two confirmed vulnerabilities. Two that could actually be exploited.

Those numbers reveal how the economics of vulnerability discovery are changing.

When discovery becomes abundant, the constraint moves. First to validation. Then to prioritization. Eventually, to remediation and operational risk management. Each stage becomes the new bottleneck until better automation and tooling move the constraint again.

AI did not just find more vulnerabilities.

It moved the vulnerability cheese.



Source: https://securityboulevard.com/2026/03/112-or-22-to-2-who-moved-the-vulnerability-cheese/