SIEM Is Not Dead. It Just Stopped Moving Fast Enough.

I recently joined Tim Peacock and Anton Chuvakin on the Google Cloud Security Podcast to talk about SIEM, AI SOC, pricing, federated architecture, detection engineering, and why network telemetry is quietly becoming important again.

The short version is simple: SIEM is not dead. Calling it obsolete makes for good marketing, but it is not a serious thesis.

The recent wave of AI SOC startups, pipeline vendors, and new SIEM entrants is a response to real pain in the market. These companies are not proving SIEM is dead, and they are not replacing it. They are capitalizing on the gaps incumbent vendors left open for too long.

TL;DR

  • SIEM is not dead. Vendors just left too many gaps open.
  • AI SOC often exposes those gaps more than it replaces SIEM.
  • Alert reduction alone can hide false negatives.
  • The real fixes are better routing, detection, context, and workflows.
  • Network telemetry still matters more than the market narrative suggests.

The market is not replacing SIEM. It is rebuilding missing pieces.

These new vendors say they will reduce alert volume, improve detections, make investigations faster, lower storage costs, and simplify operations. None of that is new. Those were always core parts of the SIEM vision.

That is why so many of these new entrants exist. They found real gaps:

  • Pricing that became too hard to justify
  • Architectures that did not scale as well as they should
  • Detection stacks that still require too much manual work
  • Default content that creates too much noise
  • Workflows that remain painful for analysts and service providers

This is why I do not buy the “SIEM is over” narrative. If incumbents fix these gaps, many point solutions lose their edge quickly.

AI SOC is mostly a patch on downstream pain

The strongest short-term value in the AI SOC market is obvious: too many teams, especially MSSPs and down-market security providers, are drowning in alerts. A lot of environments are running with default content, light tuning, and limited budget for customization. Large enterprises can afford deep implementation and constant refinement. Many managed providers cannot.

If a product makes the SOC quieter without improving coverage, you may not have solved the problem. You may have just converted visible false positives into invisible false negatives.

If a startup is solving alert overload by learning that the same service-account misconfiguration fires every morning at 8am and can safely be deprioritized, that is useful. But it is still a patch on bad upstream logic, and it often hides a second problem: false negatives. Once teams see fewer alerts, they assume the system got smarter. Sometimes it did. Sometimes it just got quieter. The real fix belongs closer to the detection layer, the correlation logic, the content, and the configuration model.
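
To make that tradeoff concrete, here is a minimal sketch, in Python, of the kind of downstream suppression an AI SOC layer applies. The field names and threshold are hypothetical, not any vendor's implementation; the point is that the rule silences a recurring pattern without ever touching the upstream detection logic.

```python
from collections import Counter
from datetime import datetime

# Hypothetical suppression layer: it learns that a (rule, entity, hour)
# pattern recurs and deprioritizes it. Note what it does NOT do: it never
# fixes the misconfiguration or the detection logic upstream.

SUPPRESSION_THRESHOLD = 20  # invented: repeats before auto-deprioritizing

seen = Counter()

def triage(alert: dict) -> str:
    """Return 'page_analyst' or 'deprioritize' for an incoming alert."""
    key = (alert["rule_id"], alert["entity"],
           datetime.fromisoformat(alert["ts"]).hour)
    seen[key] += 1
    if seen[key] > SUPPRESSION_THRESHOLD:
        # Quieter, not smarter: a real attack that mimics this pattern
        # (same rule, same entity, same hour) is now an invisible false negative.
        return "deprioritize"
    return "page_analyst"

# The 8am service-account misconfiguration from the example above:
alert = {"rule_id": "svc-acct-misconfig", "entity": "svc-backup",
         "ts": "2026-03-19T08:00:00"}
print(triage(alert))  # 'page_analyst' until the pattern has repeated enough
```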

That is why I think a lot of the current AI SOC wave is temporary in its present form. Not temporary because the need goes away, but temporary because the best parts of that value will be absorbed elsewhere. Some of it should move back into the SIEM. Some of it should live in the detection engine. Some of it belongs in better onboarding, better rule tuning, better data handling, and better defaults.

There is still room for new winners here. But “we reduce alerts by 80%” is not a durable thesis by itself.

The architecture debate is not centralized versus federated. It is about access patterns.

In theory, pushing compute to where the data sits is attractive. In practice, the answer depends on access patterns.

Some data absolutely does not need to be centralized all the time. Endpoint system calls are a good example. You do not want to shovel every low-level signal into a central platform by default if you can process, summarize, or prioritize it earlier.

But the moment an analyst, agent, or investigation workflow needs context, enrichment, and cross-correlation, some centralization comes back. You need to connect what happened on the endpoint with what happened on the firewall, identity plane, SaaS layer, email stack, and elsewhere.

So the future is probably neither purely centralized nor purely federated. It is hybrid:

  • Keep some data local or near-source
  • Route and centralize the parts that matter
  • Pull deeper context only when needed
  • Optimize around how investigations actually happen

This is why I keep coming back to smart data routing. Most organizations do not need to send every piece of data to the same place forever. But they do need an architecture that knows when to summarize, when to correlate, and when to pull more detail back in.
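
As an illustration, here is a minimal routing-policy sketch. The tier names, signal categories, and thresholds are all hypothetical design choices, not any vendor's API; the sketch only shows the shape of "keep low-level data near the source, centralize what needs correlation, keep a pointer for deeper pulls."

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str      # e.g. "endpoint", "firewall", "identity"
    category: str    # e.g. "syscall", "auth", "alert"
    severity: int    # 0 (noise) .. 10 (critical)
    payload: dict

def route(event: Event) -> str:
    """Decide where an event lives. Hypothetical tiers:
    'local' - stays near the source, summarized periodically
    'lake'  - cheap central storage, queryable on demand
    'siem'  - hot tier with correlation and detection logic
    """
    if event.source == "endpoint" and event.category == "syscall":
        # Low-level endpoint signals stay local by default; only a
        # summary or a verdict moves upstream.
        return "local"
    if event.severity >= 7 or event.category == "auth":
        # Identity and high-severity signals need cross-correlation with
        # firewall, SaaS, and email telemetry, so centralize them.
        return "siem"
    return "lake"

# Investigation-time "pull more detail" is the reverse path: the central
# record keeps a reference (host, time window) so an analyst or agent can
# fetch the local syscall detail only when a case actually needs it.
print(route(Event("endpoint", "syscall", 2, {})))  # 'local'
print(route(Event("identity", "auth", 4, {})))     # 'siem'
```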

Data pipelines became the Trojan horse

Vendors in this space positioned themselves as optimization and routing tools. Send your data here, normalize it, trim low-value volume, route it to the right storage tier, keep costs down, and retain optionality. In many environments, that solved a real problem.

But the strategic consequence is bigger than cost control.

Once a pipeline vendor owns your ingestion layer and your integrations, it becomes an abstraction layer between you and the SIEM. That makes the SIEM less sticky. At first the pipeline vendor only routes data. Then it adds search. Then it runs lightweight detections. Then it supports simple rules. At some point it starts to look suspiciously like a simple SIEM.

If someone else owns the data path, they eventually get a shot at owning more of the security brain.

Pricing remains one of the category’s hardest unsolved problems

Almost everyone agrees that SIEM pricing has been a problem. Far fewer people agree on what the right answer is.

The vendor reality is straightforward: data volume drives cost. The customer reality is equally straightforward: they hate unpredictability.

That tension gets even worse in the service-provider world. MSSPs and MSPs often sell packaged services, per-user offerings, or per-device contracts. Their customers do not want a fluctuating bill because log volume spiked this month. So the thing that is economically clean for the vendor can be operationally ugly for the buyer.
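
As a toy illustration of that mismatch, here is a sketch of hypothetical MSSP economics: revenue is fixed per device, backend cost floats with log volume, and one noisy month quietly inverts the margin. All numbers are invented.

```python
# Hypothetical MSSP economics: the customer pays per device (predictable),
# while the backend bills per GB ingested (volume-driven). Numbers invented.

DEVICES = 500
PRICE_PER_DEVICE = 4.00     # what the MSSP charges its customer per month
BACKEND_COST_PER_GB = 0.60  # what the SIEM backend charges the MSSP

def monthly_margin(gb_ingested: float) -> float:
    revenue = DEVICES * PRICE_PER_DEVICE       # fixed: $2,000/month
    cost = gb_ingested * BACKEND_COST_PER_GB   # variable with log volume
    return revenue - cost

print(monthly_margin(2_000))  # normal month:  800.0 margin
print(monthly_margin(4_500))  # log spike:    -700.0, the MSSP eats the loss
```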

There is no perfect answer here. But the next generation of pricing models will need to do a better job of separating:

  • Predictable commercial packaging
  • Actual backend resource consumption
  • Incentives for better data quality rather than more raw ingestion

The market has already started experimenting. Bring-your-own-storage, bring-your-own-compute, lower-cost data lakes, and more selective routing are all responses to the same pressure. Pricing is one of the core forces reshaping the market.

Detection engineering still needs much more help from the platform

Rules still need adaptation by environment. Thresholds differ. Data quality differs. Sources differ. Customer expectations differ. Generic content does not simply drop in and work.

What is surprising is how much low-hanging product work still remains. A modern platform should do far more to help users answer basic but critical questions (a sketch of one such check follows this list):

  • Is the data required for this detection even present?
  • Is it configured in a way that can ever make this rule fire?
  • Are there obvious gaps or mistakes in the source configuration?
  • Which detections are silent because they are poorly mapped to the environment?
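
A platform could surface answers to these questions mechanically. The sketch below is a hypothetical detection health check: given a rule's required fields and a sample of recent events, it reports whether the rule can ever fire in this environment. The field and rule names are invented for illustration.

```python
def detection_health(required_fields: set[str],
                     sample_events: list[dict]) -> dict:
    """Report whether a detection's inputs actually exist in this environment.

    A rule that references fields no source ever populates is silent by
    construction; no amount of downstream tuning will make it fire.
    """
    if not sample_events:
        return {"status": "no_data", "missing": sorted(required_fields)}

    # A field "exists" here if any recent event populates it with a value.
    present = {f for f in required_fields
               if any(e.get(f) is not None for e in sample_events)}
    missing = required_fields - present

    return {
        "status": "can_fire" if not missing else "silent",
        "missing": sorted(missing),
    }

# Hypothetical rule: "impossible travel" needs a source IP and a geo field.
events = [{"user": "alice", "src_ip": "203.0.113.7"}]  # no geo enrichment
print(detection_health({"src_ip", "geo_country"}, events))
# {'status': 'silent', 'missing': ['geo_country']}
```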

The more interesting direction, in my view, is not just better standalone rules. It is better context. Call it a context graph, an entity graph, a risk graph, or something else. The naming matters less than the function.

You want a living model of users, devices, applications, identities, behaviors, and risk signals. If the system knows that a user is coming from their normal IP, on a familiar device, through a known browser pattern, after strong authentication, that should shape how other events are interpreted. If all of those signals change at once, that should shape the response differently.

That kind of context is where detection quality meaningfully improves.
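
To make "context shapes interpretation" concrete, here is a minimal sketch of an entity profile scoring a login event. The baseline attributes and the scoring are hypothetical; the point is that the same event type scores very differently depending on how many signals deviate at once.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """A tiny slice of a context/entity graph: what is normal for one user."""
    known_ips: set = field(default_factory=set)
    known_devices: set = field(default_factory=set)
    known_browsers: set = field(default_factory=set)

def login_risk(profile: UserProfile, event: dict) -> int:
    """Score 0..3: each unfamiliar signal adds risk. One deviation is
    routine; all of them changing at once warrants a different response."""
    risk = 0
    risk += event["ip"] not in profile.known_ips
    risk += event["device"] not in profile.known_devices
    risk += event["browser"] not in profile.known_browsers
    return risk

alice = UserProfile(known_ips={"203.0.113.7"}, known_devices={"laptop-42"},
                    known_browsers={"firefox"})

# Normal IP, familiar device, known browser pattern -> low risk.
print(login_risk(alice, {"ip": "203.0.113.7", "device": "laptop-42",
                         "browser": "firefox"}))   # 0

# Every signal changed at once -> same event type, different response.
print(login_risk(alice, {"ip": "198.51.100.9", "device": "unknown",
                         "browser": "curl"}))      # 3
```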

Network telemetry is not “back,” but it is still critical

I do not think renewed attention to the network automatically means a major standalone NDR renaissance. But I do think many teams went too far in treating network telemetry as secondary once endpoint and application visibility improved.

An endpoint is still a single point of failure. If you lose visibility there, the network can still tell you a lot. It can help validate what else is happening. It can show you unmanaged systems, OT environments, choke points, and traffic patterns you will not otherwise see clearly.

This matters even more now because some organizations are reassessing where systems and data live. In parts of Europe, I am seeing more discussion around data sovereignty, political trust, private clouds, and selective moves back toward local or regional infrastructure. As architectures spread and governance constraints tighten, network visibility becomes more important again.

So no, I would not frame this as “throw away EDR and buy NDR.” That is the wrong lesson.

What happens next

The real question is not whether SIEM survives. It is which vendors understand they are now selling data architecture, detection quality, analyst workflow, and decision support.

The SIEM market is heading into another rebuild cycle. Some AI SOC and pipeline startups will disappear, some will be absorbed, and some incumbents will finally fix what they should have fixed years ago. But the core need is not going away: security teams still need a place where signals come together, context gets built, detections improve, and response decisions get made.

That is still SIEM territory, even if the implementation looks very different from what we used to buy.

If you are building, buying, operating, or replacing SIEM, I’d love your input. I’m collecting market data at raffy.ch/SIEM. Anyone can contribute, and everyone is welcome.
