Wednesday, November 23, 2016

The Hunting Cycle and Measuring Success

I typically write about the technical aspects of hunting, but I wanted to do something different here as a result of a conversation I had a while back, which was spurred by this tweet: https://twitter.com/MDemaske/status/792068652371550208.  While I strongly agree that hunters should be hunting and not building PowerPoints or driving projects, I also don’t believe many functions in a business have an open-ended budget, and at some point you will need to justify your work, or even your value, to the company.

So how do we measure success?  Before answering that question, I believe it’s important to define the various stages of hunting (from my perspective).  By defining the process, I believe it will be much easier to see where we can point to areas of success, regardless of whether you come away from a hunt empty-handed.

From my experience, here are the different stages of a typical hunting cycle.

Define:  In my opinion, hunts are more successful when focused on a specific task.  One way to do this is to define a general topic and then list the ways that an attacker may take advantage of that topic.  As an example, let’s say that we want to focus a hunt on persistence mechanisms. What are the different ways that persistence can be maintained? (A rough sketch of hunting one of these follows the list.)
  1. Services
  2. Scheduled tasks
  3. Load order hijacking
  4. Run keys
  5. Startup folder
  6. Valid credentials for externally facing devices
  7. VPN access 
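To make this concrete, here is a minimal sketch (Python, using the standard winreg module, so Windows only) of pulling Run key entries (#4 above) from a single host. In practice you would almost certainly pull this from EDR or SIEM telemetry across the fleet rather than running a script per host, but the artifact you end up hunting is the same.

```python
# Minimal sketch: enumerate Run key entries on a single Windows host.
# Real hunts would pull this from EDR/SIEM telemetry, but the artifact
# a persistence hunt starts from looks the same.
import winreg

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def enumerate_run_keys():
    """Yield (hive, key_path, value_name, command) for each Run key entry."""
    for hive, path in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, path)
        except OSError:
            continue  # key not present on this host
        with key:
            i = 0
            while True:
                try:
                    name, value, _type = winreg.EnumValue(key, i)
                except OSError:
                    break  # no more values
                yield (hive, path, name, value)
                i += 1

if __name__ == "__main__":
    for entry in enumerate_run_keys():
        print(entry)
```

Stacking this output across many hosts is where the hunt actually starts; a one-off entry pointing into a user-writable directory is usually worth a second look.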

Research:  Based on the above topics, do we know enough about each one to successfully identify what malicious instances may look like?  Are there methods for identifying the above topics beyond what we already know about?  In a lot of cases we are aware that attackers are using $method, but we simply don’t know enough about the topic to be able to look for $method as an indicator of malicious activity.  It’s ok if you don’t know (I’m in that boat a lot), and the researching / learning part is actually kind of fun.

Availability:  Once we have our list assembled, do we have the data available to be able to find what we are looking for?  If not, is it something that we have the ability to collect?
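As a rough illustration, here is a sketch of an availability check against an Elasticsearch-backed SIEM using the requests library. The endpoint, index patterns, and event IDs in the comments are hypothetical placeholders; swap in whatever your environment actually collects.

```python
# Rough availability check: do we have recent events for the data sources
# a persistence hunt would need? Host, index names, and event IDs below
# are hypothetical; adjust to whatever your SIEM actually uses.
import requests

ES_HOST = "http://localhost:9200"          # hypothetical search endpoint
SOURCES = {
    "service installs":  "winlogbeat-*",   # e.g. EventID 7045 / 4697
    "scheduled tasks":   "winlogbeat-*",   # e.g. EventID 4698
    "registry run keys": "sysmon-*",       # e.g. Sysmon EventID 13
}

def count_last_7_days(index):
    """Return the number of events in the index over the last 7 days."""
    query = {"query": {"range": {"@timestamp": {"gte": "now-7d"}}}}
    resp = requests.post(f"{ES_HOST}/{index}/_count", json=query, timeout=30)
    resp.raise_for_status()
    return resp.json()["count"]

for name, index in SOURCES.items():
    count = count_last_7_days(index)
    status = "OK" if count > 0 else "MISSING - fix collection first"
    print(f"{name:20s} {index:15s} {count:>10d}  {status}")
```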

Develop:  Here is the big one.  Can we assemble queries that will reliably identify the activity while keeping the volume and false positive rates very low?  If these queries will be transitioned to another team to operationalize, then maintaining analyst confidence in the data output is very important, and that confidence can quickly be undermined by poor quality or overwhelming quantity in the data being presented to them.
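One common way to keep volume and false positives down is stack counting: run the broad query, count how often each value occurs across the environment, and only surface the rare ones. Here is a minimal sketch over a hypothetical CSV export of service install events; the file and column names are placeholders.

```python
# Stack counting sketch: surface only rare service image paths so the
# output stays small enough for an analyst to review. The input file and
# column names are hypothetical stand-ins for whatever your query exports.
import csv
from collections import defaultdict

THRESHOLD = 5  # show image paths seen on fewer than this many hosts

def rare_service_paths(export_csv):
    hosts_per_path = defaultdict(set)
    with open(export_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            hosts_per_path[row["service_image_path"].lower()].add(row["hostname"])
    return {p: h for p, h in hosts_per_path.items() if len(h) < THRESHOLD}

if __name__ == "__main__":
    for path, hosts in sorted(rare_service_paths("service_installs.csv").items()):
        print(f"{len(hosts):3d} host(s)  {path}  ({', '.join(sorted(hosts))})")
```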

Automate:  Queries that have a high likelihood of being transitioned or those that may be candidates for further development should be scheduled for routine runs.  Seeing the output on a regular basis may give you insight into further improvement.
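Something as simple as the following sketch can cover the routine-run piece until a query is handed off: execute each saved hunt daily and write the output to dated files so trends are easy to eyeball. The run_query function is a hypothetical stand-in for whatever search API you use, and in most shops cron or the SIEM’s own scheduler is the better home for this.

```python
# Minimal sketch of a routine runner: execute each saved hunt query daily
# and write the output to a dated file for later trend review.
import datetime
import json
import pathlib
import time

HUNTS = ["rare_service_paths", "new_run_key_values", "odd_scheduled_tasks"]
OUTPUT_DIR = pathlib.Path("hunt_output")

def run_query(name):
    """Hypothetical stand-in: submit the saved query named `name` to your
    SIEM's API and return the matching rows. Replace with a real call."""
    return []  # placeholder so the sketch runs end-to-end

def run_all_once():
    today = datetime.date.today().isoformat()
    for name in HUNTS:
        results = run_query(name)
        out = OUTPUT_DIR / name / f"{today}.json"
        out.parent.mkdir(parents=True, exist_ok=True)
        out.write_text(json.dumps(results, indent=2))

if __name__ == "__main__":
    while True:
        run_all_once()
        time.sleep(24 * 60 * 60)  # crude daily schedule; cron is the real answer
```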

Knowledge Transfer:  When going through this process you have likely learned more about the topic than when you first began.  Little nuances in the data may be a strong indicator that something is amiss, and may be the entire reason for the query.  Don’t assume that the person sitting next to you will know the reasoning behind the data.  Document each hunt that will be transitioned and include your thought process behind each one.
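One lightweight way to capture that reasoning is a small structured record written alongside each transitioned query, something like the sketch below. The fields are only suggestions, not a standard.

```python
# Sketch of a hunt documentation record kept next to the query itself.
# Field names and contents are illustrative only.
import json
import pathlib

hunt_doc = {
    "name": "rare_service_image_paths",
    "hypothesis": "Attackers persisting via services will install binaries "
                  "from paths rarely seen elsewhere in the environment.",
    "data_sources": ["Windows EventID 7045", "Sysmon EventID 13"],
    "query": "saved search of the same name",
    "why_it_works": "Legitimate services cluster on a small set of paths; "
                    "one-off paths under user-writable directories stand out.",
    "known_false_positives": ["one-off vendor installers", "portable admin tools"],
    "triage_steps": [
        "Pull the binary and check signer / hash reputation",
        "Review the process tree around the install time",
    ],
}

out = pathlib.Path("hunt_docs") / f"{hunt_doc['name']}.json"
out.parent.mkdir(exist_ok=True)
out.write_text(json.dumps(hunt_doc, indent=2))
```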

Operationalize:  If this data will be consumed by another team, provide recommendations on scheduling, alerting, and actions related to positive results.  Remember that this team will need to fold it into their existing processes, but you likely have the most experience with the query, so it’s good practice to give them insight from that expertise.  It’s also a good idea to create a feedback loop where modifications or enhancements can be requested or proposed.

Track:  These hunts may not initially prove to be valuable, but over time they may turn out to be very effective at identifying malicious behavior.  Being able to track the good and the bad is important from a team perspective.  The results may give you insight into areas that need improvement from a methodology or data standpoint.  You may also find areas where you are doing well and can toot your own horn for impacting an adversary’s ability to operate in your environment.

So the next question is: how do we measure success?  With all of the work that goes into a single hunt, I don’t think the only measurement is whether you find an intrusion during the above process.  Sure, that’s the ultimate goal, but if your existence hinges on that alone, then I feel bad for you.  Some things to look at that can show the value of all the work you are doing are listed below (a quick sketch of computing one of them, dwell time, follows the list):


  1. Number of incidents by severity
  2. Number of compromised hosts by severity
  3. Dwell time of any incidents discovered
  4. Number of detection gaps filled
  5. Logging gaps identified and corrected
  6. Vulnerabilities identified
  7. Insecure practices identified and corrected
  8. Number of hunts transitioned
  9. False positive rate of transitioned hunts
  10. Any new visibility gained
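Most of these are simple counts, but dwell time (#3) is worth showing explicitly since it comes straight from incident timestamps. The records below are made-up examples.

```python
# Quick sketch of computing dwell time from incident records.
# Incident IDs, dates, and field names are hypothetical examples.
from datetime import datetime

incidents = [
    {"id": "IR-101", "first_compromise": "2016-09-14", "discovered": "2016-11-02"},
    {"id": "IR-102", "first_compromise": "2016-10-28", "discovered": "2016-11-01"},
]

def dwell_days(incident):
    fmt = "%Y-%m-%d"
    start = datetime.strptime(incident["first_compromise"], fmt)
    end = datetime.strptime(incident["discovered"], fmt)
    return (end - start).days

for inc in incidents:
    print(f"{inc['id']}: {dwell_days(inc)} days of dwell time")

avg = sum(dwell_days(i) for i in incidents) / len(incidents)
print(f"Average dwell time: {avg:.1f} days")
```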

Threat hunting can have a very positive impact on the security posture of a company and being able to show this value is important.   Of course we all want to find that massive intrusion, but there is a lot that goes into this so don’t sell yourself short when asked to justify what you do for your company! 

2 comments:

  1. What are your thoughts on using MITRE's ATT&CK framework to jumpstart research? https://attack.mitre.org/wiki/Technique_Matrix

  2. ATT&CK is a great resource and I would definitely recommend it, both for jump-starting research and for baselining where your org is at.

