Thursday, February 23, 2017

Hunting for Chains

In my last blog post I touched on something that I want to talk more about.  I think far too often, when hunting, we focus on single indicators as signs of something being bad.  Intrusions are far more than a single event though, so why spend all of your time hunting that way?  Often the actions an attacker takes will come in short bursts, lasting anywhere from a few seconds to a few minutes.  Looking for multiple events within these windows of time, I feel, can be a very effective way to spend your time.

One of the things I try to do with my blog is to share how I think about the work that I do.  I have gained so much knowledge from others over the years and feel strongly about giving back.  Lately I have been thinking a lot about what I like to call behavior chains.  Basically, these are the actions that an attacker would likely take to accomplish their goal.  Each action is a link, and the sum of the actions makes up the chain.  A link can be something as simple as a process name like net.exe or cmd.exe (these are probably 2 of the most commonly spawned processes and attackers definitely take advantage of them).  A link can also be something as complex as a correlation search that surfaces a specific behavior (see my last blog post for some of these correlations).  So why do I care about hunting this way? There are multiple reasons, but a few of them are to model how attackers may operate, to reduce data and to produce higher fidelity output.

Let's say that we want to look for odd network logon behavior because we know that when an attacker is active in our environment they will generate anomalous 4624 type 3 events.  The problem with this is that there are likely thousands upon thousands, if not millions, of these events per day.  We could try to reduce this data by looking for rare occurrences of source machine, dest machine, user or a combination of the three. This will likely produce anomalies, but it will also take time to investigate each one.  Now take that time and multiply it by the number of days in a week.  How much time are you spending looking at these events, and are the results beneficial?

I think the question to ask is: what are the things an attacker would do that would generate these logon events?  We know, just from the meaning of the event, that it was the result of an action taken by a remote machine, and we probably care more about the action than the logon itself.  We can also assume that we would be trying to identify KC7 (kill chain stage 7) type activity, so let's break that down into different categories that may likely generate these types of events.  I'll also include a couple of sample commands for each one.
  1. Recon / Enumeration
    1. net group "domain admins" /domain
    2. net view /domain
  2. Command Execution
    1. psexec -accepteula \\192.168.56.10 ipconfig
    2. wmic /node:192.168.56.10 /user:administrator /password:supersecret process call create "cmd.exe /c ipconfig" 
  3. Data Movement
    1. copy backdoor.exe \\192.168.56.10\c$\windows\temp\backdoor.exe
    2. net use x: \\192.168.56.10\c$ /user:administrator "supersecret"
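To make the chain idea concrete, here is a minimal Python sketch of linking events into chains: bucket events per host into fixed time windows and flag any window containing more than one distinct link category. The event fields, process-to-link mapping and thresholds are all hypothetical assumptions for illustration, not a reference implementation.

```python
from collections import defaultdict

# Hypothetical mapping of observed process names to behavior-chain links.
LINK_CATEGORIES = {
    "net.exe": "recon",
    "psexec.exe": "execution",
    "wmic.exe": "execution",
    "cmd.exe": "execution",
}

def find_chains(events, window=300, min_links=2):
    """Group events per host into fixed time windows and flag any window
    containing two or more distinct chain links (recon, execution...)."""
    buckets = defaultdict(set)
    for ev in events:
        cat = LINK_CATEGORIES.get(ev["process"])
        if cat is None:
            continue
        key = (ev["host"], ev["time"] // window)  # host + time bucket
        buckets[key].add(cat)
    return {key: cats for key, cats in buckets.items() if len(cats) >= min_links}

events = [
    {"host": "WS01", "time": 100, "process": "net.exe"},
    {"host": "WS01", "time": 150, "process": "psexec.exe"},
    {"host": "WS02", "time": 400, "process": "cmd.exe"},
]
# WS01 shows both recon and execution inside one window; WS02 does not chain.
hits = find_chains(events)
```

The same shape works with any pivot (user, source address) as the bucket key.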

Seeing the above commands should obviously raise some alarms, but often it can be difficult to distinguish bad from legitimate activity.  I've personally chased many admins over the years.  The difficulty is affected by how you craft your queries, the amount of data that's returned and your ability to identify anomalies.  I think we can also limit ourselves by how well we truly understand what we are looking for, the log sources we have available and what we perceive as our ability to meld the two together.  I should say that I don't have command line process audit logging enabled, but am still able to accurately identify the actions of the above commands based on the logs generated on the destination machine or the domain controller.

When I began looking into the best way to identify these chains I knew that there would be numerous actions that I would want to be able to identify and that I would want some freedom in how I was able to search the data.  Being able to correlate based on source user, dest user, source machine, dest machine, process or any other pivot point that the data provided was very important.  I also wanted to be able to look across different windows of time and to easily incorporate multiple data sources.

I’ll explain what was accomplished using Splunk.  If you want to explore this and you are not a Splunk user, my hope is that you will be able to use the thought behind the technology and incorporate it into a solution that you have available.  

  1. Scheduled queries for specific behaviors are sent to a summary index to house only interesting data.
  2. Custom event types are created for broader queries (logon types, process names…).
  3. Normalized field names for pivoting through data.  This can be important when searching or displaying fields from multiple log sources.
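Point 3 is the glue that makes cross-source pivoting work. As a rough sketch of what normalization means in practice (the alias tables and sourcetype names below are made up for illustration):

```python
# Hypothetical per-sourcetype field aliases; the real raw names depend
# entirely on your own log sources.
FIELD_ALIASES = {
    "wineventlog:security": {
        "Source_Address": "src_ip",
        "ComputerName": "dest_host",
        "Account_Name": "user",
    },
    "firewall": {"src": "src_ip", "dsthost": "dest_host", "username": "user"},
}

def normalize(sourcetype, event):
    """Rename raw fields to a common schema so one pivot (src_ip, user...)
    works across every data source."""
    aliases = FIELD_ALIASES.get(sourcetype, {})
    return {aliases.get(k, k): v for k, v in event.items()}
```

With this in place, a search on src_ip surfaces the same machine whether the event came from a Windows log or a firewall.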

The screen shots below show 2 different dates.  The first clusters of events on 2/21 are the results of running the following commands on my domain controller (95N).
    net view \\192.168.56.10
    net group "domain admins" /domain
    net use x: \\192.168.56.10\c$ /user:administrator "password"

The second cluster of events on 2/23 are the results of copying a batch script to a member server and executing it.  The contents of the batch script were:
    net group "domain admins" /domain >> data.txt
    netstat -an >> data.out
    ipconfig >> data.out

By ComputerName

By Source IP


I know that this may be an overly simplistic view, but I believe it shows the value of trying to create chains of events.  If someone was looking at each action individually, it could easily be glossed over, but when you combine them you may begin to see clusters of anomalous behavior.  The more of these individual actions you can include in your data collection, the greater chance you have of surfacing these anomalies.

The power of this is not the technology, but the effort you put into your methods and analysis.  Also, I’m not at all saying that you should stop looking for single events that point to malicious activity, but think of this as simply another arrow you can put in your quiver.

As always, I would love to hear your thoughts.  Feel free to use the comment section below or reach out to me on twitter @jackcr. 

Saturday, February 11, 2017

Patterns of Behavior

If you have been in security for any length of time then you have probably seen David Bianco's Pyramid of Pain.  One of the thoughts behind the pyramid is that the higher up you go, the more cost you impose on the adversary.  The top of the pyramid is where we really want to be, as it's much harder for an adversary to change how they operate, and we, as defenders, can make it very difficult for them to carry out their operations if we can begin to find them at these upper levels.

I said yesterday in a tweet that a goal of mine is to be able to track an entire intrusion simply by the alerts that I generate.  While that tweet may be a little far-fetched, the sentiment behind it is not.  I want to know as much as I can about attacker activity so that I can effectively build a web of detection.  The bigger and tighter the web, the more chances I will have to identify patterns that would be indicative of attacker activity.  By studying past attacks, public reporting and any peer sharing, I can shape the things that I should be going after.

Let's say that we have a fictitious actor that we are tracking that we're calling FridayFun.  Let's also say that we have been tracking FridayFun since 2012, when we initially responded to them HERE (Note: This is a link to the download of a memory forensics challenge I created in 2012).  Over the course of time we have identified some of the following techniques used for Kill Chain 7 activity.
  1. The use of net commands for enumeration and lateral movement.
  2. The scheduling of remote tasks.
  3. The use of psexec for tool execution.
  4. Copying of files via command line.

Let's take a look at each of the above and how we may be able to find it in the Windows security events.  Note that these methods require additional Windows logging.  I'm also including Splunk searches that can be used to find this behavior.  If you aren't a Splunk user, my hope is that you can take these queries and incorporate them into a solution or method you have available.

We have identified FridayFun often copying files to $ shares via the command line as in the example below.
copy thisisbad.bat \\192.168.56.10\c$\windows\thisisbad.bat

When a file is copied via the command line, the logs produced are different than if you were to copy it via Windows Explorer.  Both methods produce multiple file share access events, but the Access Masks are different depending on the method. From the command line, these are the unique values:
0x100180
0x80
0x130197

The following query will identify the 3 unique access masks being created within the same second on a single machine.  

sourcetype=wineventlog:security EventCode=5145 Object_Type=File Share_Name=*$ (Access_Mask=0x100180 OR Access_Mask=0x80 OR Access_Mask=0x130197) |bucket span=1s _time |rex "(?<thingtype>(0x100180|0x80|0x130197))" |stats values(Relative_Target_Name) AS Relative_Target_Name, values(Account_Name) AS Account_Name, values(Source_Address) AS Source_Address, dc(thingtype) AS distinct_things by ComputerName, _time |search distinct_things=3
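If you aren't a Splunk user, the same correlation can be sketched in any language: group 5145 events by computer and second, then require all three masks to appear. The dict field names below mirror the Splunk fields in the query and are otherwise an assumption about how you've parsed your logs.

```python
from collections import defaultdict

# The three masks produced by a command-line copy, per the list above.
REQUIRED_MASKS = {"0x100180", "0x80", "0x130197"}

def copy_via_cmdline(events):
    """Flag (computer, second) pairs where all three access masks appear,
    mirroring the bucket span=1s / dc(thingtype)=3 logic of the query."""
    seen = defaultdict(set)
    for ev in events:
        if ev.get("EventCode") == 5145 and ev.get("Access_Mask") in REQUIRED_MASKS:
            key = (ev["ComputerName"], int(ev["_time"]))  # truncate to second
            seen[key].add(ev["Access_Mask"])
    return [key for key, masks in seen.items() if masks == REQUIRED_MASKS]
```

Swapping REQUIRED_MASKS lets the same function cover the destination-to-source copy case later in this post.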

For the scheduling of remote tasks, let's say that we have identified a pattern of behavior each time FridayFun has utilized this tactic.
  1. Query the remote machine for the local time (think about machines in different time zones).
  2. The remote task is created.
  3. The source machine queries the scheduled task on the remote machine.
This command sequence may look like:
  1. net time \\192.168.56.10
  2. at \\192.168.56.10 03:25 c:\windows\thisisbad.bat
  3. at \\192.168.56.10

The net time command and remote At query will both produce a 5145 file share event where IPC$ is the share being accessed.  The Access Mask for these 2 events will be 0x12019f.  The Relative Target Name will be different: a srvsvc named pipe will be created for the net time command, while an atsvc named pipe will be created for the At query.  For the creation of the scheduled task we are looking for 4698 events, specifically for At jobs in the Task Name.

The following query will identify the above 3 events occurring within an hour time frame on a single machine.

sourcetype=wineventlog:security (EventCode=4698 Task_Name=\\At*) OR (EventCode=5145 Object_Type=File Share_Name=*IPC$ Access_Mask=0x12019f (Relative_Target_Name=srvsvc OR Relative_Target_Name=atsvc)) |bucket span=1h _time |rex "(?<thingtype>(\\\\At.|srvsvc|atsvc))" |stats values(Relative_Target_Name) AS Relative_Target_Name, values(Account_Name) AS Account_Name, values(Task_Name) AS Task_Name, dc(thingtype) AS distinct_things by ComputerName, _time |search distinct_things=3

FridayFun often utilizes psexec to aid in execution on remote machines.  We have identified them using commands such as the following:

psexec -accepteula \\192.168.56.10 -c gsecdump.exe

When looking for psexec being executed, I have found that it may be more beneficial to look for indications on the destination side.  Psexec can often be renamed and may be harder to identify on the source if you aren't able to log the FileDescription metadata field of the binary when it executes.  On the destination side a 5145 event will be created where the share name is ADMIN$ and the Relative Target Name is PSEXESVC.EXE.  We are also looking for any new process creation within that same second to identify what psexec executed on the destination machine (Note: when psexec copies a file and executes it, the file will be spawned from the IPC$ share).

This behavior can be identified by the following query.

sourcetype=wineventlog:security ((EventCode=4688 New_Process_Name!=*\\PSEXESVC.EXE New_Process_Name!=*\\conhost.exe New_Process_Name!=*\\dllhost.exe) OR (EventCode=5145 Share_Name=*ADMIN$ Relative_Target_Name=PSEXESVC.EXE)) |bucket span=1s _time |rex "(?<thingtype>(4688|PSEXESVC))" |stats values(Relative_Target_Name) AS Relative_Target_Name, values(Account_Name) AS Account_Name, values(Source_Address) AS Source_Address, values(New_Process_Name) AS New_Process_Name, dc(thingtype) AS distinct_things by ComputerName, _time |search distinct_things=2

Copying files from the destination to the source machine will produce different logs than if I were to copy files from the source to the destination.  FridayFun has used commands such as the following.

copy \\192.168.56.20\c$\windows\system32\1.txt .\1.txt

This method is much like the first query with the exception of the Access Masks.  The following query can be used to identify this behavior.

sourcetype=wineventlog:security EventCode=5145 Object_Type=File Share_Name=*$ (Access_Mask=0x100081 OR Access_Mask=0x80 OR Access_Mask=0x120089)  |bucket span=1s _time |rex "(?<thingtype>(0x100081|0x80|0x120089))" |stats values(Relative_Target_Name) AS Relative_Target_Name, values(Account_Name) AS Account_Name, values(Source_Address) AS Source_Address, dc(thingtype) AS distinct_things by ComputerName, _time |search distinct_things=3

For those that like dashboards, I created the following to show what the output would look like.



As a defender, I think far too often we are focused on singular events that will give us that "ah ha" moment.  Sure, there are some that may be a dead giveaway of a malicious actor operating in our environment, but I think singular alerts are often valued by their true to false positive ratio.  With the method I showed above I can look for specific behavior, regardless of noise, and bucket events into previously identified attacker activity.  When the number of unique events meets a certain value based on source, destination or user, I can then raise an alarm.  The more behavior that I can identify and query for, the larger my web becomes.  I don't know that I will ever be able to piece together an entire intrusion by the alerts I generate, but I at least want to make it extremely difficult for any adversary to operate freely in my environment. 


I would love to hear any comments.  You can find me on twitter under @jackcr. 

Monday, January 30, 2017

Hunting: What does it look like?

It's been a while since I've posted anything and I wanted to get this out.  Partly because of an article that I read that talks about how attackers may move laterally within your network.  The other reason is that I think it really shows what you can do with Windows event logs (provided you have certain logging enabled) to identify anomalous behavior.

We know that attackers will often use built-in operating system utilities to facilitate host/user enumeration and lateral movement (if you want to dig more into what I will be talking about I would suggest starting here https://attack.mitre.org/wiki/Discovery).  I have previously made the recommendation that if you have command line process auditing enabled you should focus much attention on the system32/syswow64 directories.  Looking for rare or first seen command line arguments from processes that are spawned from these directories can, at times, point to malicious behavior when lateral movement is involved.  If attackers have brought tools in with them to facilitate enumeration or lateral movement, this method can become much more difficult as the command line arguments may be unknown and simply glossed over.  If we flip this around, though, and look at the effects of enumeration on the destination side, we may be able to spot what would otherwise have been missed.

I am a big advocate of looking for the same things multiple ways and in different log sources.  There are times where logging events fail due to network issues, application issues…  Having layers of coverage is a good idea, imho, and this includes trying to identify the same activity on both source and destination machines.

Now for the article which you can read here: https://blogs.technet.microsoft.com/enterprisemobility/2017/01/24/cyber-security-attackers-toolkit-what-you-need-to-know/.  The author does a good job of describing some of the tools and methods that may be employed by an attacker, but falls short when it comes to discussing how to find indications of this behavior.  Instead they make the statement "With an assumed-breach mindset, we assume the attacker has already breached the perimeter and is on the network. Unfortunately, this is when the adversary goes dark to the defender, as the attacker has already breached network defenses as well as antivirus."  If you have the necessary logging enabled, I don’t believe they need to go dark. 

If we dive into what they describe they first talk about the “net user” and “net group” commands.  How might these look to a defender?

Source: 
    net user administrator /domain
Destination:
    Event Code: 4661
    Object Type: SAM_USER
    Object Name: S-1-5-21-*-500 (* represents domain)
    Access Mask: 0x2d 
Note: In my testing, users in the Domain Admins group will display a SID.  Other users will not. The exception is the Guest and krbtgt accounts.  I would also pay attention to the krbtgt SID S-1-5-21-*-502.  I would think that it would be very odd to see this and may indicate an attacker is intending to use Golden Tickets.

Source: 
    net group "Domain Admins" /domain
Destination:
    Event Code: 4661
    Object Type: SAM_GROUP
    Object Name: S-1-5-21-*-512
    Access Mask: 0x2d
Note: Also pay attention to the Enterprise Admins group with the SID of S-1-5-21-*-519
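Pulling the two notes above together, a small watch-list check over parsed 4661 events might look like this. The field names match the layout shown above; the dict-based event shape is an assumption about your log pipeline.

```python
import re

# Well-known RIDs worth alerting on when touched remotely, per the notes above.
WATCHED_RIDS = {
    "500": "Administrator",
    "502": "krbtgt",
    "512": "Domain Admins",
    "519": "Enterprise Admins",
}
# Domain SIDs look like S-1-5-21-<domain parts>-<RID>.
SID_RE = re.compile(r"^S-1-5-21-[\d-]+-(\d+)$")

def watched_sid(event):
    """Return the account label if a 4661 event touches a watched SID, else None."""
    if event.get("EventCode") != 4661:
        return None
    m = SID_RE.match(event.get("Object_Name", ""))
    if m:
        return WATCHED_RIDS.get(m.group(1))
    return None
```

Seeing watched_sid fire for krbtgt would be the "very odd" Golden Ticket signal called out above.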

The following can be used to identify PowerSploit’s Get-NetSession, Get-NetShare, netsess.exe and net view.  The net view command may look something like "net view \\192.168.56.10".

Destination:
    Event Code: 5145
    Relative Target Name: srvsvc
    Share Name: IPC$
    Access_Mask: 0x12019f
Note: These events may be very loud.  I would suggest looking for a single source creating srvsvc pipes on multiple machines within a specified time frame.  This may be indicative of enumeration activity.
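The "single source creating srvsvc pipes on multiple machines" idea in the note can be sketched as a fan-out count. The window size and destination threshold below are arbitrary starting points, and the field names assume events parsed into dicts as above.

```python
from collections import defaultdict

def pipe_fanout(events, window=3600, min_dests=5):
    """Flag sources that open srvsvc pipes on many distinct machines inside
    one time window, a possible sign of share/session enumeration."""
    dests = defaultdict(set)
    for ev in events:
        if ev.get("EventCode") == 5145 and ev.get("Relative_Target_Name") == "srvsvc":
            key = (ev["Source_Address"], ev["_time"] // window)
            dests[key].add(ev["ComputerName"])
    return [src for (src, _), hosts in dests.items() if len(hosts) >= min_dests]
```

Tuning min_dests against your baseline is what separates admin scripts from enumeration here.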

The article goes on to talk about the use of mimikatz and the use of hashes and kerberos tickets.  

One method I have found to identify an attacker obtaining hashes through lsadump is to look for the following.  I have also found this to be a pretty accurate indicator and would suggest, if you have these logs, to monitor for this event.
    Event Code: 4656
    Object Type: SAM_DOMAIN
    Process Name: lsass.exe
    Access Mask: 0x705

For kerberos ticket theft, I have not found an accurate way to identify this in the event logs, but when executed with mimikatz there will be files written with the .kirbi extension.  If an attacker was using WCE there would be two files created named wce_ccache and wce_krbtkts.  These files can obviously be renamed, but if you have the ability to monitor file creations I would recommend including all of these names and hopefully catching them when they are dropped on the file system, before renaming.

With respect to the article, I would like to thank Microsoft for putting it out.  I believe that more should release what attackers are doing so that people can focus attention on what is being used.  I see a lot of threat hunting providers say you need it, but they don’t share any of what they look for or why they look for it.  I hope those that do this choose to share more in the future.


I would love to hear what others are doing with regards to hunting for and detecting the above.  Finding innovative ways to hunt for bad guys is what I’m passionate about so please share in the comments section. 

Wednesday, November 23, 2016

The Hunting Cycle and Measuring Success

I typically write about the technical aspects of hunting, but wanted to do something different here as a result of a conversation I had a while back which was spurred from this tweet: https://twitter.com/MDemaske/status/792068652371550208.  While I strongly agree that hunters should be hunting and not building power points or driving projects, I also don’t believe many functions in business have an open ended budget and at some point you will need to justify your work or even value to the company.

So how do we measure success?  Before answering that question, I believe it’s important to define the various stages of hunting (from my perspective).  By defining the process I believe it will be much easier to see where we can point to areas of success, regardless if you come away empty handed or not on a hunt.

From my experience, here are the different stages of a typical hunting cycle.

Define:  In my opinion, hunts are more successful when focused on a task.  One way to do this is to define a general topic and then list the ways that an attacker may take advantage of that topic.  As an example, let's say that we want to focus a hunt on persistence mechanisms. What are the different ways that persistence can be maintained? 
  1. Services
  2. Scheduled tasks
  3. Load order hijacking
  4. Run keys
  5. Startup folder
  6. Valid credentials for externally facing devices
  7. VPN access 

Research:  Based on the above topics, do we know enough about each one to be able to successfully identify what malicious instances may look like?  Are there other methods that can be used to identify the above topics other than what we already know about?  In a lot of cases we are aware that attackers are using $method, but we simply don’t know enough about the topic to be able to look for $method as an indicator of malicious activity.  It’s ok if you don’t know (I’m in that boat a lot) and the researching / learning part is actually kind of fun.    

Availability:  Once we have our list assembled, do we have the data available to be able to find what we are looking for?  If not, is it something that we have the ability to collect?

Develop:  Here is the big one.  Can we assemble queries that will reliably identify the activity while keeping the volume and false positive rates very low?  If these queries will be transitioned to another team to operationalize, then maintaining analyst confidence in the data output is very important.  This confidence can be undermined by the quality and quantity of data being presented to them.

Automate:  Queries that have a high likelihood of being transitioned or those that may be candidates for further development should be scheduled for routine runs.  Seeing the output on a regular basis may give you insight into further improvement.

Knowledge Transfer:  When going through this process you have likely learned more about the topic than when you first began.  Little nuances in the data may be a strong indicator that something is amiss and may be the entire reason for the query.  Don’t assume that the person sitting next to you will know the reasoning behind the data.  Document each hunt that will be transitioned and include your thought process behind each one. 

Operationalize:  If this data will be consumed by another team, provide recommendations on scheduling, alerting, and actions related to positive results.  Remember that this team will need to consume it into their existing processes, but you likely have the most experience behind the query and it may be good practice to give them insight into your expertise with it.  It’s also a good idea to create a feedback loop where modifications or enhancements can be requested or proposed.

Track:  These hunts may not initially prove to be valuable, but over time they may show themselves to be very effective at identifying malicious behavior.  Being able to track the good and the bad is important from a team perspective.  It may give you insight into areas that need improvement from a methodology or data standpoint.  You may also find areas where you are doing well and can toot your horn for impacting an adversary's ability to operate in your environment.

So the next question is: how do we measure success?  With all of the work that goes into a single hunt, I don’t think that the only measurement is whether you find an intrusion during the above process.  Sure, that’s the ultimate goal, but if that’s what your existence hinges on then I feel bad for you.  Some things to look at that can show the value of all the work that you are doing are:


  1. Number of incidents by severity
  2. Number of compromised hosts by severity
  3. Dwell time of any incidents discovered
  4. Number of detection gaps filled
  5. Logging gaps identified and corrected
  6. Vulnerabilities identified
  7. Insecure practices identified and corrected
  8. Number of hunts transitioned
  9. False positive rate of transitioned hunts
  10. Any new visibility gained

Threat hunting can have a very positive impact on the security posture of a company and being able to show this value is important.   Of course we all want to find that massive intrusion, but there is a lot that goes into this so don’t sell yourself short when asked to justify what you do for your company! 

Sunday, October 16, 2016

Threat Hunting - Getting Closer to Anomalous Behavior

I just read this post by @hexacorn which is related to threat hunting and how impractical it may be to implement due to the vast amount of data generated on a corporate network.  He states that this data is good for post mortem analysis or during an investigation, but that using it for hunting is largely ineffective.  First I would like to say that I agree with @hexacorn in the sense that people generally look at threat hunting as singular events, which can produce huge amounts of data to sift through and may be largely impractical.  I think it’s natural for people that are new to this area of work to view it this way and eventually want to give up because it hasn't proven to be effective.  

I like to look at hunting in a different way: how can I use the knowledge I have about actors and behaviors so that I can get closer to the events that need to be investigated?  By not focusing on singular indicators, but more accurately focusing on behaviors of attackers, I can vastly reduce the amount of data and get closer to the anomalous events that may need attention.  

In the blogpost @hexacorn described hackers using ipconfig.exe, netstat.exe, net.exe, cscript.exe, etc, but said that it’s used so often that it’s impossible to sift through.  I consider the above, as well as many others, to be lateral movement tools and are often used in conjunction with each other.  A good way to look at this differently would be to look for combinations of these tools being executed within 10 minutes from a single process name with a single user on a single host.  This will vastly reduce the amount of data being returned as well as surfacing a behavior that attackers commonly use.  This is far different than surfacing a single tool, used legitimately thousands of times a day and also that attackers commonly use.
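That combination logic might be sketched as follows. The tool list, event fields and thresholds are illustrative only; the point is that requiring several distinct tools from one (host, user, parent) tuple inside ten minutes collapses the noise.

```python
from collections import defaultdict

# Utilities commonly chained together during lateral movement (illustrative).
LATERAL_TOOLS = {"ipconfig.exe", "netstat.exe", "net.exe", "cscript.exe"}

def tool_combos(events, window=600, min_tools=3):
    """Flag (host, user, parent) tuples that run several distinct
    lateral-movement utilities inside a ten-minute window."""
    seen = defaultdict(set)
    for ev in events:
        proc = ev["process"].lower()
        if proc in LATERAL_TOOLS:
            key = (ev["host"], ev["user"], ev["parent"], ev["time"] // window)
            seen[key].add(proc)
    return [key[:3] for key, tools in seen.items() if len(tools) >= min_tools]
```

A single admin running ipconfig all day never trips this; three distinct tools from one parent in ten minutes does.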

In the same sentence, psexec was mentioned.  It’s true that this tool is often used by administrators and can be difficult to look at a single process with the ability or confidence to say this is bad.  If I step back and think about this differently I can say that when an attacker moves through my network they often use the same credentials and tools to perform lateral movement.  Applying this thought to the problem I can say that it’s odd for a single user to be identified on multiple machines, executing psexec against different hosts.  Again, this will reduce the amount of data needed to be looked at while surfacing, what may be, anomalous behavior.
 
For identifying odd processes, I agree that it’s difficult to look at these en masse and pick out the bad ones.  I like to look at this a couple of different ways, and both are from a parent process.  The first is, knowing that cmd.exe is probably the most commonly used executable during any intrusion, what processes are parents of cmd.exe and how many times was it executed by how many users (a single user executing cmd.exe on multiple machines from a rare process may be anomalous).  The second way is to think about how commands can be executed: tools such as powershell, wmic, at.exe, schtasks.exe… and what users and parent processes are executing those (would it be odd for a local admin or service account to be scheduling a task, or for the owner of an IIS process to be executing powershell?).  I would also suggest that if you want to find odd occurrences of svchost.exe, which is mentioned in the post, you look at the parent processes of svchost and not necessarily the location on disk.
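A rough sketch of the first idea, counting the parents of cmd.exe and surfacing the rare ones. The event field names are assumptions about your process-creation logs, and the rarity cutoff is a starting point to tune.

```python
from collections import Counter

def rare_cmd_parents(events, max_count=1):
    """Count which parent processes spawn cmd.exe and return the ones seen
    at or below the rarity cutoff; a web-server worker spawning a shell
    would surface here while explorer.exe disappears into the baseline."""
    parents = Counter(
        ev["parent"].lower()
        for ev in events
        if ev["process"].lower() == "cmd.exe"
    )
    return [parent for parent, n in parents.items() if n <= max_count]
```

The same Counter approach works for the second idea by keying on (user, process) for powershell, wmic, at.exe and schtasks.exe.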

I truly feel that if you want to be successful at hunting you need to look at the problem differently.  In an IDS/IPS world we deal in singular events that match a specific signature.  We need to throw this model out for hunting because it simply doesn’t work.  As @hexacorn described in his post, it produces huge amounts of data that are simply impractical to review effectively. 

Another issue I see, that wasn’t talked about in the post, is knowledge transfer.  I may think about hunting in the way I described above, but when I transition something to a different team, will they view the data in the way that was intended, or will they even understand the data that is being given to them?  If they don’t understand the data, their confidence may be low as to the severity of the event.  An example would be looking for anomalies in msrpc pipes.  Host or user enumeration of windows machines likely happens over smb, and the communication between machines during these events will often occur over named pipes.  If I generate an alert for multiple samr pipes being created with multiple remote machines in the UNC path over a given period of time, what will the analyst do with this?  Don’t assume that an analyst, regardless of experience, will know what your thought process was when you created this hunt.  Give the analyst context, because these can be just as difficult to understand as an IP address that alerted with no explanation as to why.  If the analyst doesn’t react with the same sense of urgency to the alert as you feel it deserves, it may not be a failure on their part, but more a failure on your part for not giving them the proper knowledge transfer and ensuring they understand what they are looking at.  


My hope is that @hexacorn doesn’t take offense to my rebuttal to his post.  I think about hunting as being creative in the ways that I look for threats.  This is not an extension to an IDS/IPS, but an entirely different method and mindset of finding bad. 

Wednesday, September 28, 2016

Forensic Analysis of Anti-Forensic Activities

[Screenshot: tweet]
If you were to believe the above tweet you may think "What's the point, we're doomed by this anti-forensic stuff". No offense to Brent Muir, but I think differently.
A few weeks ago I attended Shmoocon and sat in the presentation by Jake Williams and Alissa Torres regarding subverting memory forensics.  As I sat through the talk I kept thinking to myself that it would be impossible to completely hide every artifact related to your activity, which Jake also stated in his presentation.  Seeing a tweet the other day that had a link to download the memory image, I quickly grabbed it to see what I could find  (If you are interested in having a look at the image it can be found here).  I will also state that I have not looked at anything else that was posted as I didn’t want to have any hints on what to look for.
My first course of action once I had the memory dump was to figure out what OS I was dealing with. To find this I used volatility with the imageinfo plugin.
[Screenshot: volatility imageinfo output]
We can see from the above output we have an image from a Windows 7 SP1 machine. My next step is to create a strings file, grabbing both ascii and unicode strings, of the memory image.
[Screenshot: strings commands generating the ASCII and Unicode strings file]
Now that I had all the data I needed I could begin my investigation.
Whenever you are given a piece of data to analyze, it is usually given to you for a reason. Whatever that reason is, our first course of action should be to find any evidence that will either confirm or deny it. I knew, based on the presentation, that the purpose of the tool was to hide artifacts from a forensic investigator. Knowing this, I set out to find any evidence that would validate these suspicions.
I first began by looking at the loaded and unloaded drivers, since the tool would most likely need to interact with the kernel to be able to modify these various memory locations. I wanted to see if there were any odd filenames, memory locations, or file paths, and used the volatility plugins modscan and unloadedmodules for this. This unfortunately led me to a dead end.
My next course of action was to take a look at the processes that were running on the machine. I compared the output from pslist and psscan as a way to look for processes in different ways and to note any differences. The pslist plugin walks the doubly-linked list of EPROCESS structures, whereas the psscan plugin scans memory for EPROCESS structures. My thought was that if a process had been hidden by unlinking itself, I would be able to identify it by comparing the two.
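The comparison itself can be done with standard tools; a sketch, with made-up process listings standing in for the real plugin output (on the real image you would redirect pslist and psscan to files first):

```shell
# Real commands (profile from imageinfo, filename a placeholder):
#   vol.py -f memdump.img --profile=Win7SP1x86 pslist > pslist.txt
#   vol.py -f memdump.img --profile=Win7SP1x86 psscan > psscan.txt
# Made-up listings to illustrate the diff:
cat > pslist.txt <<'EOF'
services.exe
svchost.exe
vmtoolsd.exe
EOF
cat > psscan.txt <<'EOF'
qggya123.exe
services.exe
svchost.exe
vmtoolsd.exe
EOF
# Names found by scanning but missing from the linked list are candidates
# for processes hidden by unlinking:
comm -13 <(sort pslist.txt) <(sort psscan.txt)   # -> qggya123.exe
```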
[Screenshot: pslist and psscan output]
By diffing the two files I was able to see something odd in the psscan output.
[Screenshot: diff of pslist and psscan output showing qggya123.exe]
We can see that qggya123.exe definitely looks odd. If we look for our odd process name using this plugin, it appears that the process may no longer be running, as there is no end time listed in the output, or this may simply be related to Jake's tool.
[Screenshot: plugin output for qggya123.exe]
The psxview plugin will attempt to enumerate processes using several different techniques, the thought being that if you attempt to hide a process, you will have to evade detection in a number of different ways. Looking at the output below, the only way we are able to see that this suspicious process was running (at least at one point in time) is by scanning for EPROCESS structures.
[Screenshot: psxview output]
Now that we had something to go on, my next step was to see if I could gain a little more context around the process name. The PID of our suspicious process is 5012, with a PPID of 512. If we look to see what PID 512 is, we can see that it was started under services.exe.
[Screenshot: process listing showing PID 512]
Naturally I wanted to see what services were running on the machine. For this I used the svcscan plugin, but was unable to locate our suspicious filename.
The next step I took was to search for the filename in the memory strings. The output below shows the file being executed from the command line by a file named add.exe. We can also see that our PID and PPID are included as parameters. Now that we know how it was executed, we have an additional indicator to search for.
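Pivoting from one indicator to the next is just searching the strings file again; a sketch with a few fabricated lines shaped like the command lines described here (the exact argument layout is my assumption, the names and values are from the analysis):

```shell
# Fabricated strings-file excerpt; argument layout is assumed:
cat > memdump.strings <<'EOF'
\private\add.exe /proc qggya123.exe 5012 512
\private\add.exe /file hidden.txt
\private\add.exe /tcpCon 1885683891 1025 1885683931 443 4
EOF
# Each hit on the new indicator is another lead to run down:
grep -n -i 'add\.exe' memdump.strings
```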
[Screenshot: memory strings showing add.exe command lines]
Searching for add.exe I was able to see several different items of interest.
1. add.exe with the /proc parameter was used to execute 3 different files: qggya123.exe, cmd.exe and rundll32.exe
2. qggya123.exe is the parent of cmd.exe
3. cmd.exe is the parent of rundll32.exe
4. The /file parameter, I assume, is used to hide the presence of various files in the \private directory
5. The /tcpCon parameter was used to specify TCP connections given source IP (decimal), source port, destination IP (decimal), and destination port; I'm guessing the final 4 indicates IPv4
1885683891 = 112.101.64.179
1885683931 = 112.101.64.219
1885683921 = 112.101.64.209
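The decimal values convert to dotted quads by splitting them into four bytes, most significant first (big-endian, which matches the values above); a quick shell sketch:

```shell
# Split a 32-bit decimal IP into four bytes, most significant first
dec2ip() {
  local n=$1
  echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
}
dec2ip 1885683891   # -> 112.101.64.179
dec2ip 1885683931   # -> 112.101.64.219
dec2ip 1885683921   # -> 112.101.64.209
```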
[Screenshot: strings output showing add.exe parameters]
If we look around those strings we are able to see even more of the malicious activity.
[Screenshot: strings output showing additional malicious activity]
I did try to dump these files from memory, but was unable to. I also attempted to create a timeline in an attempt to identify the sequence of activities, but was unable to locate the activity I was looking for.
One item that would be a top priority at this point would be to identify all of the port 443 connections generated from this host. Based on the output from netscan, we can see that the known malicious connections were established, but we still need to verify that these are the only ones. We would also want to look at any pcap associated with these connections in an attempt to identify what data was transferred, as well as possibly create some network detection.
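Pulling the port 443 peers out of netscan output is a one-liner; a sketch over fabricated rows (the local addresses are made up, the remote IPs are the ones decoded above, and the column layout only approximates netscan's):

```shell
# Real command:  vol.py -f memdump.img --profile=Win7SP1x86 netscan > netscan.txt
# Fabricated rows; field 4 is the foreign address in this layout:
cat > netscan.txt <<'EOF'
0x3df4bd0 TCPv4 10.0.0.5:1025 112.101.64.179:443 ESTABLISHED 5012 qggya123.exe
0x3e1a008 TCPv4 10.0.0.5:1026 112.101.64.219:443 ESTABLISHED 5012 qggya123.exe
0x3e55d70 TCPv4 10.0.0.5:139 0.0.0.0:0 LISTENING 4 System
EOF
# Unique remote endpoints on port 443 to sweep for across other hosts:
awk '$4 ~ /:443$/ {print $4}' netscan.txt | sort -u
```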
[Screenshot: netscan output showing the port 443 connections]
Knowing that there is some type of process / file / network connection hiding, and that I'm not able to analyze the memory image as I would typically expect to, I would most likely request a disk image at this point. Even with the anti-forensic tactics that were employed, I was able to determine enough to know that this machine is definitely compromised, as well as some of the methods being used by the attacker. I was also able to generate a list of good indicators that could be used to scan additional hosts for the same type of activity.
In case anyone is interested I created a yara rule for the activity identified.
rule add
{
    strings:
        $a = "p_remoteIP = 0x"
        $b = "p_localIP = 0x"
        $c = "p_addrInfo = 0x"
        $d = "InetAddr = 0x"
        $e = "size of endpoint = 0x"
        $f = "FILE pointer = 0x"
        $g = " /tcpCon "
        $h = "Bytes allocated for fake Proc = "
        $i = "EPROC pool pointer = 0x"
        $j = "qggya123.exe"
        $k = "add.exe" wide
        $l = "c:\\add\\add\\sys\\objchk_win7_x86\\i386\\sioctl.pdb"
        $m = "sioctl.sys"
        $n = "\\private"
    condition:
        any of them
}
This was a quick analysis of the memory image, and I am still poking around at it. As I find any additional items I'll update this post.
Update 1:
Running the driverscan plugin, which scans for DRIVER_OBJECT structures, we can see something that looks a little suspicious.
[Screenshot: driverscan output]
After seeing this I ran the devicetree plugin to see if I was able to get an associated driver name. Based on the output below, it looks like our malicious driver is named sioctl.sys.
[Screenshot: devicetree output showing sioctl.sys]
So now that we have a driver name, let's see if we can dump the driver.
[Screenshot: dumping the driver]
Looking at the .pdb path in the strings output we can see that we have our malicious driver.
[Screenshot: strings output showing the .pdb path]
Update 2:
An additional piece of information I found while running filescan, looking for the files called by the execution of add.exe, was the file path returned by volatility. All the paths appear to have \private as the root directory. I suspect this could be a great indicator for identifying compromised machines.
[Screenshot: filescan output showing \private paths]
I would expect to see file paths with a root of \Device\HarddiskVolume1\
[Screenshot: filescan output with \Device\HarddiskVolume1\ paths]
I verified this by locating every file that has \private in the file path; all of the files that were returned were those that appear to be faked.
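That check is easy to script against filescan output; a sketch with fabricated rows (real filescan rows have more columns, but the path is what matters here):

```shell
# Real command:  vol.py -f memdump.img --profile=Win7SP1x86 filescan > filescan.txt
# Fabricated rows: normal paths root at \Device\HarddiskVolumeN\, faked ones at \private
cat > filescan.txt <<'EOF'
0x3e2077e8 \Device\HarddiskVolume1\Windows\System32\svchost.exe
0x3e30a120 \private\add.exe
0x3e30b9d0 \private\hidden.txt
EOF
# Anything rooted at \private is worth treating as an indicator:
grep -F '\private' filescan.txt
```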
[Screenshot: filescan output filtered for \private]
Update 3:
One of the cool things we can do with volatility is run our yara rules against the memory image. When a hit is found it will display the process in which the string was found. The syntax we would use to scan the image with our yara rule is:
[Screenshot: yarascan command syntax]
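For reference, that invocation would look something like the following (image name and profile are placeholders; in Volatility 2, yarascan's -y flag points at a rules file). The rule file here is trimmed to two strings from the full rule above:

```shell
# Write a trimmed copy of the rule to a file...
cat > add.yar <<'EOF'
rule add
{
    strings:
        $j = "qggya123.exe"
        $k = "add.exe" wide
    condition:
        any of them
}
EOF
# ...then scan the memory image with it (placeholder filename/profile):
#   vol.py -f memdump.img --profile=Win7SP1x86 yarascan -y add.yar
grep -c '\$' add.yar   # sanity check: the trimmed rule defines 2 strings
```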
We can see that all of the command line input and output is located in the vmtoolsd process. I limited the output for brevity.
[Screenshots: yarascan hits in vmtoolsd]
We also see two different processes where add.exe was found.
[Screenshots: yarascan hits for add.exe in two processes]
If we compare what we found using the strings plugin, we see one difference that we were unable to find with the yara plugin. By mapping the string of our original suspicious file back to a process, we see that it's located in kernel space.
[Screenshot: string mapped back to kernel space]
We can also take the strings from the binary and see where those map back to as well.
[Screenshot: binary strings mapped back to processes]
Had we been unable to locate the driver with driverscan and devicetree, we still may have been able to identify the file based on the strings found.
Update #4:
The awesome people over at Volatility were kind enough to send me an unreleased plugin to demonstrate the faked connections that add.exe created. I'm told the plugin will be coming out with their new book. The netscan plugin uses pool scanning, so it may find objects that were faked, like we have already seen. The new plugin works against the partition tables and hash buckets that are found in the kernel. I'm sure MHL can explain it much better than I just did :)
If we look at the normal netscan output we are able to see the faked established connections.
[Screenshot: netscan output with the faked established connections]
Now if we use the tcpscan plugin we can see the difference (or not see the difference) :)
[Screenshot: tcpscan output without the faked connections]
Valid established connections would look like this with the tcpscan output.
[Screenshot: tcpscan output of valid established connections]
As you can see, we were able to find additional artifacts related to the add.exe tool by comparing the output of the two plugins.
Again, thanks to the Volatility team for floating me the plugin to use for this blog post, as well as for all of the research and work going into the tool and moving the field of memory forensics forward!