George Kurtz, co-founder and CEO of CrowdStrike, is credited with inventing vulnerability management. In the 20+ years since the term was coined and the category created, the practice has come to consume considerable amounts of security teams’ time and budgets. Despite significant maturation of both the discipline and the tooling, defenders are still struggling to manage vulnerabilities by most objective measures.
Indeed, according to the 2025 Verizon Data Breach Investigations Report (DBIR), 20% of the nearly 10,000 breaches in their analysis were the result of vulnerability exploitation — putting vulnerabilities on par with credential abuse and ahead of phishing in terms of initial access vectors. In their separate corpus of data, Mandiant found exploitation to be the primary initial access method in a third of their incident response engagements, making it the leading vector.
The challenge for defenders is the need to look for and at more and more vulnerabilities, even as there is less and less clarity about which to remediate and how.
If software sustains businesses the way food sustains lives, then vulnerabilities might rightly be compared to foodborne illness, and cases are growing at an average of 22% per year. Over 40,000 CVEs were disclosed in 2024 alone, and even conservative projections suggest that we are on pace for nearly 50,000 CVEs in 2025. The sheer volume is a problem unto itself, but other events and circumstances make sifting through CVEs increasingly difficult.
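The arithmetic behind that projection is straightforward. Here is a rough back-of-the-envelope sketch, assuming (my assumption, not the source’s) that the 22% figure behaves like a compound annual growth rate applied to the 2024 total:

```python
# Back-of-the-envelope CVE volume projection.
# Assumes ~40,000 CVEs disclosed in 2024 and ~22% year-over-year growth,
# treated here as a compound annual growth rate (an illustrative assumption).
cves_2024 = 40_000
annual_growth = 0.22

cves_2025_estimate = cves_2024 * (1 + annual_growth)
print(f"Projected 2025 CVE count: ~{cves_2025_estimate:,.0f}")  # ~48,800, i.e. "nearly 50,000"
```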
In February 2024, the Linux Kernel project became a CVE Numbering Authority (CNA), and the maintainers’ policy on CVE assignment has raised concerns about whether it will be possible to meaningfully identify and prioritize Linux vulnerabilities. That matters given the kernel’s pivotal role in digital infrastructure, from the cloud to embedded systems. The problem is exacerbated by the ongoing vulnerability enrichment backlog at the National Vulnerability Database (NVD).
And, of course, the CVE program itself was in jeopardy of shutting down altogether, prompting multiple CVE “forks” intended to mitigate the potential damage. All of these factors make it harder to determine exactly which vulnerabilities to address, and they also create long-term challenges for the overall practice of vulnerability management.
There are already plenty of methodologies for prioritizing vulnerabilities, and a corresponding level of debate about their efficacy. Although the factors that feed the various vulnerability scoring systems are constantly being refined, the scores they produce are probabilities of varying quality, so the decisions based on them are, ultimately, bets.
While it’s possible to make safe(r) bets, adversaries have their own hands to play. Consider the recent example of CVE-2025-24054. It has a “moderate” base Common Vulnerability Scoring System (CVSS) score of 6.5, below the arbitrary threshold of seven that many organizations use as the cut-off for high-severity vulnerabilities. Microsoft’s own assessment was that the vulnerability was “less likely to be exploited”.
Even so, evidence of exploitation was uncovered just over a week after it was disclosed. Meanwhile, data in the DBIR shows that the median time to remediate known-exploited vulnerabilities is 38 days. The fundamental issue, then, is that remediation can be prioritized using the best available information, but it still lacks sufficient accuracy and takes too long to be effective.
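To make the gap concrete, here is a minimal, hypothetical triage sketch (the record layout, field names, and threshold are illustrative assumptions, not any particular vendor’s schema): a fixed CVSS cut-off of seven deprioritizes a vulnerability like CVE-2025-24054, while a rule that also weighs evidence of exploitation escalates it.

```python
# Hypothetical triage sketch: a fixed CVSS cut-off vs. a rule that also
# considers evidence of in-the-wild exploitation. Illustrative only.
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    cvss_base: float        # CVSS base score, 0.0-10.0
    known_exploited: bool   # evidence of exploitation in the wild

def prioritize_by_cvss_only(vuln: Vulnerability, threshold: float = 7.0) -> bool:
    """Naive policy: remediate only findings at or above the severity cut-off."""
    return vuln.cvss_base >= threshold

def prioritize_with_exploitation(vuln: Vulnerability, threshold: float = 7.0) -> bool:
    """Policy that also escalates anything with exploitation evidence."""
    return vuln.known_exploited or vuln.cvss_base >= threshold

# CVE-2025-24054: CVSS 6.5, exploited roughly a week after disclosure.
example = Vulnerability("CVE-2025-24054", cvss_base=6.5, known_exploited=True)

print(prioritize_by_cvss_only(example))        # False -> deprioritized
print(prioritize_with_exploitation(example))   # True  -> escalated
```

Even the second rule only helps after exploitation evidence surfaces, which is exactly why the 38-day median remediation window remains the binding constraint.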
Concern about the use of AI for vulnerability discovery and exploit development has been growing, and it may yet become a major issue, but adversaries already implicitly understand the bitter lesson: Modern fuzzers can be scaled both vertically and horizontally, so discovery of new vulnerabilities is a simple function of available compute power. AI is poised to make this process even more efficient and may make exploit development easier, but it’s hardly a prerequisite for success.
It also bears repeating that adversaries don’t submit CVEs. So while many new vulnerabilities may be discovered using modern techniques, they won’t become CVEs unless and until there is evidence of exploitation or the same vulnerability is reported by a researcher.
One area where less data exists — but the data that does exist is deeply concerning — is what might be called “vulnerability recidivism”. In June 2022, Google researcher Maddie Stone published a set of root cause analyses of zero-day exploits in the wild. That research demonstrated that 50% of the zero-day exploits in the first half of the year were variations of previously identified and patched vulnerabilities; nearly a quarter were variations from the previous year.
This illustrates an interesting half-life for vulnerabilities. Although there is no similar analysis for recent years, pairing that finding with publicly available exploit data reveals patterns of attackers revisiting the same attack surfaces time after time (think JavaScript type confusion in web browsers, or use-after-free vulnerabilities in OS subsystems like network filtering), and makes those patterns easier to understand.
There is no question that vulnerability scanning and patch management remain necessary, but they are clearly no longer sufficient, and are at or near a point of diminishing marginal returns. The numbers demonstrate that scanning and patching are not a viable path to breach prevention. To truly prevent breaches, we must focus on both short- and long-term efforts to reduce the exploitability of our digital infrastructure, including more aggressive adoption of secure-by-design principles, as well as new and better approaches to runtime security.