I talk a lot about how I go about finding adversary behavior, but I have not said much about how teams may be alerted. This is a much needed conversation in my opinion. As teams gain capability and visibility, their alert volumes will likely increase too. The obvious example may be the team that implemented a threat feed and wants to incorporate a watchlist largely derived from it. Sure, you will probably receive alerts from this and management will be happy that you are finally doing "intel-driven detection" ;) , but how will your analysts work these alerts without the proper context as to why that indicator may be bad?
I believe there are three different forms of detection. Using them correctly can:
1. Decrease analyst fatigue.
2. Decrease false positive rate.
3. Decrease alert volume.
4. Gain additional visibility.
5. Gain additional alerting capability.
Detections that are fed directly to an analyst as an alert.
These detections are generally high fidelity and well documented. Available to the analyst are descriptions of the intent of each detection rule, along with true positive and false positive examples.
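As a rough illustration, here is a minimal sketch (in Python, with made-up field values and rule names) of the metadata a direct-to-analyst detection might carry:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """Documentation that ships with a direct-to-analyst detection."""
    name: str
    description: str          # what the rule is intended to catch
    fidelity: str             # e.g. "high" or "low" -- drives routing later
    true_positive_examples: list = field(default_factory=list)
    false_positive_examples: list = field(default_factory=list)

# Hypothetical example entry; names and values are illustrative only.
cmd_over_smb = Detection(
    name="cmd_spawned_over_smb",
    description="Windows command prompt spawned across an SMB session",
    fidelity="high",
    true_positive_examples=["psexec-style lateral movement from HOST-A to HOST-B"],
    false_positive_examples=["admin jump box running remote maintenance scripts"],
)
```

With this in hand, an analyst opening the alert can immediately see what the rule intended to catch and how past true and false positives looked.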
Detections that are used for correlation.
These detections are generally low fidelity. They may fire often in our environments on normal activity, but when we combine multiple detections and look at their order and timing, they may indicate malicious actions being taken by an attacker. I also believe that all detections that go directly to analysts should land in this bucket as well; you never know when looking at a cluster of detections will change how an alert is categorized.
The downside of alerting from dynamically correlated events is that they may be more difficult to analyze. You are often looking for behaviors rather than atomic indicators, and analysts with limited experience may miss the key signs of malicious activity. If tuned correctly, the alert volume should be low, so it may be possible to route all alerts derived from these correlations to more senior analysts.
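To make the idea concrete, here is a minimal sketch of time-window correlation, assuming events arrive as (timestamp, host, detection_name) tuples sorted by time; the window size and threshold are arbitrary placeholders, not recommendations:

```python
from collections import defaultdict
from datetime import timedelta

WINDOW = timedelta(minutes=30)   # assumed correlation window
THRESHOLD = 3                    # distinct low-fidelity detections before we alert

def correlate(events):
    """Group low-fidelity detections per host and flag clusters.

    `events` is an iterable of (timestamp, host, detection_name) tuples,
    assumed sorted by timestamp. A real system would also de-duplicate
    repeated alerts for the same cluster.
    """
    recent = defaultdict(list)   # host -> [(timestamp, detection_name), ...]
    alerts = []
    for ts, host, name in events:
        # Drop anything outside the sliding window for this host.
        recent[host] = [(t, n) for t, n in recent[host] if ts - t <= WINDOW]
        recent[host].append((ts, name))
        distinct = {n for _, n in recent[host]}
        if len(distinct) >= THRESHOLD:
            alerts.append((host, ts, sorted(distinct)))
    return alerts
```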
Detections written to increase visibility.
These detections are used to increase our ability to perform direct alerting as well as correlation. An example may be that we want to know when a Windows command prompt is spawned across an SMB session. We can use an IDS such as Snort to gain this visibility. We can then feed that into our correlation bucket or directly to an analyst, depending on fidelity. This is just one example; think about other technologies your org has that would allow you to write rules and gain additional capability (HIPS, HIDS, proxy, Sysmon, etc.).
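The routing decision itself can stay simple. A hedged sketch, reusing the hypothetical fidelity label from earlier:

```python
# Hypothetical routing: high-fidelity visibility detections go straight to
# analysts, everything else feeds the correlation bucket.
analyst_queue, correlation_bucket = [], []

def route(event, fidelity):
    if fidelity == "high":
        analyst_queue.append(event)
    else:
        correlation_bucket.append(event)

route(("2019-01-01T09:00:00", "HOST-A", "cmd_spawned_over_smb"), "high")
route(("2019-01-01T09:00:05", "HOST-A", "rare_user_agent"), "low")
```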
So now if we take our initial example of alerting directly off a newly purchased threat feed, we may see (based on alert volume and fidelity) that a better option could be to use these detections in the correlation bucket. An example could then be watchlist alert + rare User-Agent + URI volume. Alone, these detections may fire thousands of times a day, but together they may point to a newly discovered compromise.
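Plugging that example into the correlation sketch above (the detection names are made up for illustration), the three noisy signals only surface once they cluster on a single host:

```python
from datetime import datetime

sample_events = [
    (datetime(2019, 1, 1, 9, 0),  "HOST-A", "watchlist_hit"),
    (datetime(2019, 1, 1, 9, 10), "HOST-A", "rare_user_agent"),
    (datetime(2019, 1, 1, 9, 20), "HOST-A", "uri_volume_spike"),
]
print(correlate(sample_events))
# [('HOST-A', datetime.datetime(2019, 1, 1, 9, 20),
#   ['rare_user_agent', 'uri_volume_spike', 'watchlist_hit'])]
```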