I’ve seen some email threads on a few listserv groups talking about developing a capability to take indicators from threat feeds and automatically generate signatures that can be used in various detection technologies. I have some issues with this approach and thought a blog post may be better than replying to those threads. I believe these feeds can provide some valuable indicators, but for the most part they will produce so much noise that your analysts will eventually discount the alerts they generate, simply due to the sheer number of false positives that come along with alerting on them.
If you think about what is most helpful to an analyst when triaging an alert related to an IP address or a domain, it is typically context around why that indicator may be important. Was this IP related to exploitation, delivery, C2…? How old is the indicator? What actor is it related to? Also helpful: if it’s related to HTTP traffic, are we looking for a GET or a POST? Is there a specific URI related to the malicious activity, or is the entire domain considered bad? Typically these feeds don’t come with the context needed to properly analyze an alert, so the analyst spends time hunting for oddities in the traffic. As the analyst sees the same indicator generate additional alerts, their confidence in that indicator diminishes, and it can soon become noise.
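To make the point concrete, here is a minimal sketch of what a context-rich feed entry might look like, versus a bare IP or domain. The field names and the `triage_summary` helper are my own illustration, not any particular feed’s format:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Indicator:
    """A hypothetical feed entry carrying the context an analyst needs at triage time."""
    value: str                          # the ip or domain itself
    kind: str                           # "ip" or "domain"
    phase: str                          # e.g. "exploitation", "delivery", "c2"
    first_seen: date                    # how old is the indicator?
    actor: Optional[str] = None         # related actor, if known
    http_method: Optional[str] = None   # "GET" or "POST", if HTTP-related
    uri: Optional[str] = None           # specific malicious uri; None means the whole domain is bad

def triage_summary(ind: Indicator) -> str:
    """Render the context as a one-line note attached to the alert."""
    parts = [f"{ind.kind}={ind.value}",
             f"phase={ind.phase}",
             f"first_seen={ind.first_seen.isoformat()}"]
    if ind.actor:
        parts.append(f"actor={ind.actor}")
    if ind.http_method:
        parts.append(f"method={ind.http_method}")
    parts.append(f"uri={ind.uri}" if ind.uri else "scope=entire-domain")
    return ", ".join(parts)
```

An alert carrying a summary like this answers most of the triage questions above before the analyst even opens the PCAP; a bare IP answers none of them.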
For those who have implemented this type of feeding process, walk up to one of your analysts and pick out an alert that was generated by one of these IPs or domains. Ask them why they are looking at it and what would constitute an escalation. If they can’t answer those two questions, ask yourself whether there is anything you can do to enhance the value of that alert. If the answer is no, I would question the value of how that indicator is implemented. Along the same lines, if you have never analyzed an alert, or at least sat down with an analyst as they work through them, can you really understand how these indicators can best be utilized? I would argue that until that happens your view may be limited.
Another extremely important aspect is the ability to validate the alerts that are generated. If you can’t look at the network traffic (PCAP), then alerting on these IPs and domains is pretty much useless given the lack of context. One thing I find most important is the ability to determine how the machine got where it did and what happened after it got there. Was the machine redirected to a domain that was included in some feed and downloaded a legitimate GIF, or did the machine go directly to the site and download an obfuscated executable with a .gif extension? If you can’t answer these types of questions, your analysts will likely wind up performing a lot of unneeded analysis on the host, or just discounting these alerts altogether.
By blindly alerting on these types of indicators you also run the risk of cluttering your alert console with items that will, 99.99% of the time, be deemed false positives. Your analysts can spend a great deal of unneeded time analyzing these while higher-fidelity alerts sit waiting to be analyzed. Another issue related to console clutter is indicator lifetime. For example, if a site was compromised and hosted some type of exploit, are you still alerting on that domain after the site has been remediated? Having the ability to manage this process is extremely important if you want to go down this road.
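Lifetime management can be as simple as an expiry check before an indicator is allowed to alert. A minimal sketch, assuming a time-to-live policy of my own invention (the 90-day TTL and the `first_seen`/`last_confirmed` fields are illustrative, not from any specific product):

```python
from datetime import date, timedelta

# Hypothetical aging policy: an indicator stops alerting once its most recent
# confirmation is older than the TTL, unless it is re-confirmed (for example,
# the site is observed compromised again).
DEFAULT_TTL = timedelta(days=90)

def is_active(first_seen: date, last_confirmed: date, today: date,
              ttl: timedelta = DEFAULT_TTL) -> bool:
    """True while the indicator's most recent confirmation is within the TTL."""
    return today - max(first_seen, last_confirmed) <= ttl

def prune(indicators: list, today: date) -> list:
    """Drop expired entries so remediated sites stop cluttering the console."""
    return [i for i in indicators
            if is_active(i["first_seen"], i["last_confirmed"], today)]
```

The exact policy matters less than having one at all: without some expiry or review process, the console only ever accumulates noise.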
An additional issue I have surrounds some of the information sharing groups. Often these groups produce lists of bad IPs and domains shared by parties that may not have the experience needed to vet indicators to a certain standard. Blindly alerting on these can be a mistake as well, unless you have confidence in the group or party sharing the indicators.
I’m not saying that these feeds and groups don’t provide value. I’ve seen some very good, reliable sharing groups, as well as some threat feeds with spot-on indicators. A lot comes down to numbers, indicator fidelity, and trust, as well as doing some work up front to vet the indicators before they are fed into detection and alerted on.
One of the benefits I see in collecting this data is the ability to add confidence to an alert. If an analyst is unsure of the validity of an alert they are analyzing, having a way to see whether anything is already known about the IP or domain can be very helpful. It may actually sway their decision toward escalating rather than discounting.
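This enrichment-on-demand use is where I see collected feed data paying off. A minimal sketch, with an entirely hypothetical local intel store (the store layout and field names are my own illustration):

```python
# Hypothetical local intel store: everything we already know about an ip or
# domain, collected from feeds and sharing groups but NOT wired into alerting.
INTEL = {
    "203.0.113.7": {"phase": "c2", "actor": "example-actor", "sightings": 14},
    "files.example.net": {"phase": "delivery", "sightings": 2},
}

def enrich(alert: dict) -> dict:
    """Attach any prior knowledge about the alert's indicator.

    Context such as phase and prior sightings can sway an analyst toward
    escalating rather than discounting a borderline alert.
    """
    known = INTEL.get(alert["indicator"])
    return {**alert, "intel": known, "prior_knowledge": known is not None}
```

The design choice here is the whole point: the feed data sits behind an analyst-driven lookup instead of generating alerts itself, so it adds confidence without adding console noise.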
For IPs and domains that are deemed high-fidelity indicators, I don’t see any reason not to alert on them. Some thought needs to be given to what constitutes high fidelity, though. If high fidelity relates to a particular backdoor, exploit, or some other malicious activity that you know about, ask yourself if you currently have detection for that activity. If the answer is no, can you build detection that covers more than a single atomic indicator? Once you have detection and alerting in place for the activity itself, the IP or domain may not be as important to alert on.
Detecting a determined adversary can be difficult, and I feel that some see these feeds and groups as the answer. By implementing and relying on this type of process, you can actually weaken your ability to detect these types of intrusions, as focus may shift toward more atomic-based detection. I’m all for collecting this type of information, but think about the most effective way to implement it, and spend the time to build and verify solid detection. You will be much farther ahead.