Martin Roesch, CEO
Events don’t just happen; they happen to something. Context gives us the ability to understand what an event is happening to so we can determine its impact and assess its criticality, a capability I pioneered back at Sourcefire two decades ago. But the environment security professionals operate in has changed dramatically in the intervening years, and the way we go about generating and applying context has not kept pace.
There are now so many sources of events and alerts that entire sectors of the security industry are built to deal with alert fatigue. But if you look at how security teams operate, they typically analyze events with one goal in mind: determining when there has been a compromise so they can initiate their incident response (IR) playbooks and begin risk mitigation procedures.
We need a way to get to concrete detections faster without the massive amount of noise generated by traditional methods to help teams more easily and fully understand what has happened and what it has happened to in order to accelerate response.
How did we get here and how can we do better?
The number of event sources continues to grow, so security teams are dealing with what has become a giant funnel process to filter the massive volume of events generated by their security technologies. Context – the set of data describing the composition, attributes, and vulnerabilities of users, applications, and devices – can aid in this process by acting as a simple prefilter to determine all the events that could possibly matter in your environment. By identifying the things that could possibly matter, it’s relatively simple to remove all of the events that can’t possibly matter. But the model of leveraging deep packet inspection (DPI) to inspect communications in order to generate context across ever-expanding networks now provides only an initial level of visibility for on-prem networks. Teams of analysts are left having to sift through an ever-growing subset of incredibly noisy data to try to find the handful of events that could be compromises, investigate further, decide what are actual compromises, and initiate IR.
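To make the prefilter idea concrete, here is a minimal sketch, in Python, of filtering events against asset context. All names and data structures here are hypothetical illustrations, not Netography's actual implementation: the idea is simply that an exploit alert targeting an asset that doesn't have the corresponding vulnerability can't possibly matter and is dropped before an analyst ever sees it.

```python
# Hypothetical asset context: what each host is and what it's vulnerable to.
asset_context = {
    "10.0.1.5": {"os": "linux", "services": {"ssh"}, "vulns": {"CVE-2024-1111"}},
    "10.0.2.9": {"os": "windows", "services": {"rdp"}, "vulns": set()},
}

def could_matter(event: dict) -> bool:
    """Keep an event only if its target could plausibly be affected."""
    ctx = asset_context.get(event["dst"])
    if ctx is None:
        return True  # unknown assets stay in scope for review
    # An exploit alert for a vuln the target doesn't have can't matter.
    if event["type"] == "exploit" and event["vuln"] not in ctx["vulns"]:
        return False
    return True

events = [
    {"dst": "10.0.1.5", "type": "exploit", "vuln": "CVE-2024-1111"},
    {"dst": "10.0.2.9", "type": "exploit", "vuln": "CVE-2024-1111"},
]
kept = [e for e in events if could_matter(e)]
```

In this toy example, the second alert is discarded because the Windows host has no exposure to the CVE in question; only the event that could possibly matter survives the prefilter.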
The other aspect to consider is that, because of today’s dispersed, ephemeral, encrypted, and diverse (DEED) environments and the way DPI works, we can no longer field that technology efficiently and effectively; it doesn’t work the same way across modern enterprise networks. It’s not practical to use DPI in the cloud or everywhere in your on-prem networks to see and inspect East-West traffic. Organizations are making tradeoffs and typically only instrument the prime routes of concern in and out of the enterprise. As networks expand, deploying sensors where and when you need them becomes next to impossible, and even when they are deployed, much of the traffic they’re trying to inspect is encrypted.
Cost piles on as you deploy more broadly with the need for additional taps, more decryption capability, and more sensors while still not effectively addressing the cloud. As the sensor infrastructure compounds, analysts are increasingly bombarded by the “alert cannon”, the mountainous raw feed of detections that is winnowed down into what turns out to be a very small amount of information that matters.
Managing the chaos requires a new approach that starts with our Netography Fusion cloud-native Network Defense Platform (NDP). Leveraging streaming metadata, it characterizes and analyzes the activities of all the participants in an environment – the users, applications, and devices – using context to add richness to the observed activities in an enterprise network to detect compromise and misuse. This method is outcome-focused, relieving analysts who are bombarded with tens of thousands of events per hour and replacing the alert cannon with alerts that describe compromises and truly problematic activities. It is the only way to scalably monitor modern DEED environments, and it answers the question of how to avoid alert fatigue directly: by only telling you about things that matter.
How do we do this with confidence?
As we build a picture of what you’ve got, what it’s doing, and how it’s changing, we can use that information to define the trust boundaries in an organization using both automated and manual features in the Netography platform. Trust boundaries consist of logical entities such as a physical location, an area within that location like a data center or the finance department, a country, a block of IP addresses, a group of users, the identity of the device, or the application in use, etc. Based on context, Netography Fusion can look for signs of compromise by monitoring the interactions of users, applications, or devices that may violate those trust boundaries, as well as other activities that may indicate trouble. Netography detection models (NDMs) can trigger on a violation of trust boundaries, suspicious activities, changes in behavior or even changes in composition. Using these mechanisms for driving to the outcome of identifying compromise significantly reduces the number of events operators are faced with while greatly increasing the actionability and value of alerts.
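The trust-boundary mechanism above can be sketched in a few lines. This is a hedged illustration only; the boundary names, the policy table, and the `violations` function are hypothetical stand-ins for Netography detection models (NDMs), not the product's real interfaces. The idea is that once assets are grouped into boundaries, a detection fires whenever an observed flow crosses a boundary pair the policy doesn't permit.

```python
from typing import Optional

# Hypothetical trust boundaries: named groups of assets.
boundaries = {
    "dev": {"10.0.1.5", "10.0.1.6"},
    "finance": {"10.0.9.20"},
}
# Policy: which boundary-to-boundary flows are permitted.
allowed = {("dev", "dev"), ("finance", "finance")}

def boundary_of(ip: str) -> Optional[str]:
    """Return the trust boundary an asset belongs to, if any."""
    for name, members in boundaries.items():
        if ip in members:
            return name
    return None

def violations(flows):
    """Return (src, dst, src_boundary, dst_boundary) for disallowed flows."""
    out = []
    for src, dst in flows:
        sb, db = boundary_of(src), boundary_of(dst)
        if sb and db and (sb, db) not in allowed:
            out.append((src, dst, sb, db))
    return out

flows = [("10.0.1.5", "10.0.1.6"), ("10.0.1.5", "10.0.9.20")]
alerts = violations(flows)
```

Here the dev-to-dev flow passes silently, while the dev-to-finance flow produces a single, inherently actionable alert that names both endpoints and the boundaries crossed, rather than a raw feed of every observed connection.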
In our platform, all of this happens continuously and in real time, delivering alerts on real problems in your specific environment and reducing false positives by eliminating the need to tell you about a massive set of things that most users aren’t even vulnerable to. For example: a device is starting to behave differently. Based on information about its real-time activities and the device context, it’s a developer machine talking to a device in the finance department, and it has crossed three trust boundaries it shouldn’t have to get there. This is obviously a problem that would kick off an investigation in almost any organization, but in the threat-centric detection world it almost certainly would not be noticed.
Our updated approach to context, which uses enriched streaming metadata to analyze the activities of participants, helps you manage the chaos by both eliminating existing DPI-based alert cannon infrastructure and providing concrete detections of active compromise. It’s much more useful for getting to the outcome that most SOC operators are driving toward – launching an investigation. It’s much more informative and actionable for your SIEM. And it provides a much smaller set of more useful data that you need to kick off your IR playbooks.
*** This is a Security Bloggers Network syndicated blog from Netography authored by Martin Roesch. Read the original post at: https://netography.com/managing-the-chaos-with-context/