Why Venture Capital Is Betting Against Traditional SIEMs
Summary: This post looks at the challenge a new generation of security analytics platforms poses to traditional SIEM (security information and event management) systems, and rebuts, one by one, the five core arguments behind the claim that "SIEM is obsolete": centralized architectures cannot scale, volume-based pricing is unreasonable, detection rules are ineffective, missing context causes false positives, and AI-native platforms will solve detection once and for all. It argues that these claims do not survive scrutiny and stresses that, real pain points and room for innovation notwithstanding, existing SIEMs are far from defunct.

And why most of the arguments do not hold up under scrutiny

Over the past 18 to 24 months, venture capital has flowed into a fresh wave of SIEM challengers including Vega (which raised $65M in seed and Series A at a ~$400M valuation), Perpetual Systems, RunReveal, Iceguard, Sekoia, Cybersift, Ziggiz, and Abstract Security, all pitching themselves as the next generation of security analytics. What unites them is not just funding but a shared narrative that incumbent SIEMs are fundamentally broken: too costly, too siloed, too hard to scale, and too ineffective in the face of modern data volumes and AI-driven threats.

This post does not belabor each startup’s product. Instead it abstracts the shared assertions that justify recent funding and then stresses them to see which hold up under scrutiny. I am not defending incumbents. I am trying to separate real gaps from marketing (and funding) narratives.

The “SIEM is Broken” Narrative

A commonly cited industry report claimed that major SIEM tools cover only about 19% of MITRE ATT&CK techniques despite having access to data that could cover ~87%. That statistic is technically interesting but also deeply misleading: ATT&CK technique coverage is not an operational measure of detection quality or effectiveness; it primarily reflects rule inventory and tuning effort. Nevertheless, it has become a core justification for the “SIEM is obsolete” narrative. I wasn’t able to find the original report to validate what was tested and how, but I have seen SIEMs that cover far more of the matrix, backed by sizable detection teams that work on exactly these gaps.
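
To see why that metric is shallow, note what “technique coverage” actually computes. Here is a toy rendering in Python (my own construction, not the report’s methodology; the rule names and technique count are illustrative):

```python
# Toy "ATT&CK coverage" metric: the share of techniques that have at least
# one rule mapped to them. Note what it ignores: whether the rule is enabled,
# tuned, fires correctly, or has ever caught anything.

TOTAL_TECHNIQUES = 600  # rough order of magnitude for ATT&CK Enterprise incl. sub-techniques

rule_to_technique = {
    "brute-force-001": "T1110",
    "mimikatz-cmdline": "T1003",
    "mimikatz-memory": "T1003",  # two rules, still only one technique "covered"
}

covered = set(rule_to_technique.values())
print(f"coverage: {len(covered) / TOTAL_TECHNIQUES:.1%}")
# Buying or writing more mapped rules moves this number; making the existing
# rules *good* does not. That is rule inventory, not detection efficacy.
```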

The Five Core Claims Driving the Market Thesis

Across decks, interviews, and marketing copy, I picked five recurring themes that define what these companies think incumbents get wrong and what investors are underwriting as the path forward.

1. “Centralized SIEM architectures no longer scale”

The claim is that forcing security telemetry into a centralized repository is too expensive and too slow for modern enterprises generating terabytes of logs every day. The proposed fixes include federated queries, analyzing data where it lives, and decoupling detection from ingestion so you never have to move or duplicate all your data.

The challenge is that correlation, state, timelines, and real-time detection require locality. Distributed query engines excel at ad-hoc exploration but are not substitutes for continuous detection pipelines. Federated queries introduce latency, inconsistent performance, and complexity every time you write a detection. Normalization deferred to query time pushes complexity into every rule. You do not eliminate cost; you shift it to unpredictable query execution and compute costs that spike precisely when incidents occur. Centralizing data isn’t a flaw; it is a tradeoff that supports correlation engines, summary indexes, entity timelines, and stateful detections that distributed query models struggle to maintain in real time. In fact, if the SIEM were to store the data in the customer’s S3 bucket, you could keep costs somewhat under control.
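
To make the locality argument concrete, here is a minimal Python sketch of a stateful “many failures, then a success” detection; the event fields and thresholds are invented for illustration. The per-user sliding window is trivial to keep when events stream through one engine, and expensive to re-derive when every evaluation means a fresh federated query against remote stores:

```python
from collections import defaultdict, deque

WINDOW_SECS = 300        # correlation window in seconds
FAILURE_THRESHOLD = 10   # failed logins that make a later success suspicious

# Per-user state: timestamps of recent failed logins. This state has to live
# somewhere; a federated model would have to re-derive it on every evaluation.
failed_logins = defaultdict(deque)

def process_event(event):
    """Consume one auth event; return an alert string or None.

    Assumed (hypothetical) event shape:
    {"ts": 1700000000.0, "user": "alice", "action": "login_failure" or "login_success"}
    """
    ts, user = event["ts"], event["user"]
    window = failed_logins[user]

    # Expire failures that fell out of the correlation window.
    while window and ts - window[0] > WINDOW_SECS:
        window.popleft()

    if event["action"] == "login_failure":
        window.append(ts)
        return None
    if event["action"] == "login_success" and len(window) >= FAILURE_THRESHOLD:
        return f"possible brute force: {user} succeeded after {len(window)} recent failures"
    return None
```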

2. “SIEM pricing is broken because it charges by data volume”

A frequent refrain is that incumbent SIEMs penalize good security hygiene by tying pricing to ingestion volume, which becomes untenable as data grows. The proposed response is pricing models untethered from volume, open storage, and customer-controlled compute.

The challenge is that cost doesn’t vanish because you hide volume. Compute, memory, enrichment, retention, and query costs all remain. If pricing is detached from ingestion, it typically reappears as unpredictable query charges, usage tiers, or gated features. Volume is not an arbitrary metric; it correlates with the cost a vendor (or customer) incurs. Treating cost as orthogonal to data volume does not make it disappear; it just blinds you to a key cost driver. I have dealt with all the pricing models: by user, by device, by volume, and so on. In the end I needed to make my gross margins work; guess who pays for that?
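
A back-of-the-envelope comparison makes the point; every unit price and volume below is made up, and only the shape of the math matters. Whether you bill on ingest or on storage plus query compute, the bill scales with data volume:

```python
# Entirely hypothetical unit prices: hiding ingest volume behind
# query-based pricing does not remove the volume-driven cost.

DAILY_GB = 2_000          # telemetry generated per day (assumed)
RETENTION_DAYS = 90

# Model A: classic ingest-based pricing.
ingest_price_per_gb = 0.50
cost_a = DAILY_GB * ingest_price_per_gb * 30  # per month

# Model B: "free" ingest, pay for storage + query compute.
storage_price_per_gb_month = 0.03
scanned_gb_per_day = DAILY_GB * 50            # continuous detections re-scan data
query_price_per_gb_scanned = 0.005
cost_b = (DAILY_GB * RETENTION_DAYS * storage_price_per_gb_month
          + scanned_gb_per_day * query_price_per_gb_scanned * 30)

print(f"ingest-priced: ${cost_a:>10,.0f}/month")
print(f"query-priced:  ${cost_b:>10,.0f}/month")
# Both bills grow with data volume; model B just moves the volume sensitivity
# into scan patterns, which spike exactly when you are handling an incident.
```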

3. “SIEM detections are weak because they rely on bad rules”

New entrants commonly assert that traditional SIEM rules are noisy, static, and unable to keep up with modern threat techniques. Solutions offered include natural-language detections, detections-as-code, continuous evaluation, and AI-generated rules.

The challenge is that many of these still sit atop the same primitives. For example, SIGMA is widely used as a community detection language, but it is fundamentally limited: it is mostly single-event, cannot express event ordering or causality, has no native temporal abstractions or entity-centric modeling, and cannot natively express thresholds, rates, cardinality, or statistical baselines. Wrapping these limitations in AI or “natural language” does not change the underlying detection physics. You can improve the workflow and authoring experience, but you cannot invent a fundamentally new class of detection from the same primitives. And guess what: large vendors have pretty significant content teams – I mean detection engineering teams – often tied into their threat research labs. Don’t tell me that a startup has found a more cost-effective and higher-efficacy way to release detection rules. If that were the case, all these large vendors would be dumb to operate such large teams.
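
To see why the primitives are the constraint, consider what a SIGMA-style rule reduces to. The Python rendering below is my own illustration, not any vendor’s compiler output; the comments list the detection shapes the single-event primitive cannot express on its own:

```python
# A SIGMA-style rule ultimately boils down to a predicate over a single
# event: field selectors plus boolean logic.

def sigma_like_match(event: dict) -> bool:
    # selection: Image endswith '\mimikatz.exe' OR CommandLine contains 'sekurlsa'
    return (event.get("Image", "").lower().endswith("\\mimikatz.exe")
            or "sekurlsa" in event.get("CommandLine", "").lower())

# What the single-event primitive cannot say without engine extensions:
#   - ordering/causality:  "process A spawned B, THEN B touched lsass"
#   - temporal windows:    "N failures within 5 minutes, followed by a success"
#   - cardinality:         "one source IP hitting > 50 distinct accounts"
#   - baselines:           "auth volume far above this host's historical norm"
# Each of these needs state kept across events, keyed by entity and time --
# the same primitives whether the rule was written by hand, as code, or by an LLM.
```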

4. “SIEMs lack context, causing false positives”

The argument here is that existing SIEMs flood analysts with alert noise because they lack deep asset context, threat intelligence, or behavioral understanding. New entrants promise tightly integrated TI feeds, cloud context, or built-in behavior analytics.

Context integration has been a focus of incumbent platforms for years. The real hard problem is not accessing context but operationalizing it without drowning analysts. More feeds often mean more noise unless you have mature enrichment pipelines, entity resolution, and risk scoring built into rules that understand multi-stage attack sequences. Adding more sources does not automatically improve signal quality. The noise problem is as much about rule quality and use-case focus as it is about context availability. And the same argument I made in the previous item about detection content applies here to the quality of threat feeds.
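
Here is a minimal sketch (field names, feed entries, and scores are all invented) of what “operationalizing context” means in practice: enrichment feeds a risk score, and the score, not the raw feed hit, decides whether an analyst ever sees the alert:

```python
# Toy enrichment + risk scoring: a low-confidence TI match alone should not
# page anyone; context adjusts the score, and a threshold gates escalation.

ASSET_DB = {"10.0.5.12": {"criticality": "high", "owner": "payments"}}
TI_FEED = {"203.0.113.7": {"label": "commodity-scanner", "confidence": 0.3}}

def score_alert(alert: dict) -> float:
    score = alert["base_severity"]          # e.g. 0..5 from the detection rule
    asset = ASSET_DB.get(alert["dest_ip"])
    intel = TI_FEED.get(alert["src_ip"])
    if asset and asset["criticality"] == "high":
        score += 2                          # business context raises priority
    if intel:
        # A low-confidence feed hit barely moves the needle; naively alerting
        # on *any* TI match is exactly how feeds create noise.
        score += 3 * intel["confidence"]
    return score

alert = {"base_severity": 2, "src_ip": "203.0.113.7", "dest_ip": "10.0.5.12"}
print("escalate to analyst" if score_alert(alert) >= 5 else "log and suppress")
```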

5. “AI-native SIEMs will finally fix detection and response”

Perhaps the most seductive claim is that incumbent SIEMs were built for a pre-AI world and that new platforms built with agentic AI at every layer will finally crack automation, detection, and investigation.

The challenge is that AI does not eliminate the need for structured, high-quality, normalized data, nor for explainability and deterministic behavior in high-risk contexts. AI can accelerate workflows, assist with investigation, and suggest hypotheses, but it does not replace the need for precise, reproducible, and auditable detection logic. Most AI-native claims today are improvements in UX and speed, not architectural breakthroughs in detection theory.
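
As a sketch of that division of labor (the function names are hypothetical and the LLM call is stubbed out): deterministic logic produces the verdict, and the model only annotates it for the human:

```python
# Deterministic detection decides whether an alert exists; the model
# accelerates triage but never creates, suppresses, or rewrites the verdict.

def detect(event: dict) -> dict | None:
    """Deterministic and reproducible: same event in, same verdict out."""
    if event.get("action") == "disable_audit_logging":
        return {"rule": "audit-tamper-001", "event": event}
    return None

def summarize_for_analyst(alert: dict) -> str:
    # Placeholder for an LLM call; its output is advisory annotation only.
    return f"Rule {alert['rule']} fired; suggest checking who changed audit policy."

if (alert := detect({"action": "disable_audit_logging", "user": "svc-backup"})):
    print(summarize_for_analyst(alert))
```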

The Uncomfortable Conclusion

VC money is flowing because SIEM is operationally hard, expensive, and often unpopular with SOC teams. There is real pain and real gaps, especially around cost transparency, scaling, and usability. But declaring existing SIEMs obsolete because they are imperfect is not a thesis; it is a marketing slogan.

The core assumptions driving this funding wave deserve scrutiny: centralization is treated as a flaw rather than a tradeoff necessary for continuous detection, pricing complaints get conflated with architectural insights, detection quality is blamed on tooling rather than operational realities, and AI is overstated as a panacea.

On the flip side, here are a few directions that deserve a closer look:

  1. Some of the new entrant SIEMs may actually make a dent. They are rebuilding their entire pipeline and storage architectures on modern technologies rather than old paradigms, and they have the clear advantage of not dragging along millions of lines of tech debt. Using an agentic AI architecture could be quite interesting here.
  2. As the AI SOC emerges – and maybe becomes a reality – we will probably see more and more MCP servers exposing infrastructure information that can be leveraged, from alerts to context to response capabilities. But we will need to see how data schemas and the surrounding standards evolve.
  3. The one innovation that has already generated some returns for investors is the entire data pipeline world. Companies like Observo (where I had the privilege of being an advisor) have added something genuinely useful in front of SIEMs, and as I argue in one of my previous blog posts, data pipelines really need to become a capability baked into every SIEM out there. A toy sketch of the idea follows this list.
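
Here is that toy version of the idea; it is my own construction, not Observo’s product, and the event IDs are just examples of chatty sources. The pipeline drops known-noise events and rolls up repetitive ones before the SIEM ever bills them as ingest:

```python
# Minimal pre-ingest reduction: filter known-noise events and aggregate
# high-volume repetitive records into rollups before forwarding to the SIEM.

from collections import Counter

NOISY_EVENT_IDS = {4662, 5156}   # example chatty Windows event IDs

def reduce_stream(events: list[dict]) -> list[dict]:
    kept, flow_counts = [], Counter()
    for ev in events:
        if ev.get("event_id") in NOISY_EVENT_IDS:
            continue                                   # dropped pre-ingest
        if ev.get("type") == "netflow":
            flow_counts[(ev["src"], ev["dst"])] += 1   # aggregate, don't forward raw
        else:
            kept.append(ev)
    # Emit one rollup record per flow pair instead of thousands of raw events.
    kept += [{"type": "netflow_rollup", "src": s, "dst": d, "count": n}
             for (s, d), n in flow_counts.items()]
    return kept
```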

Thanks for the feedback, Jesse!
