Sunday, February 20, 2022

Hunting for Fakes

I've seen people on Twitter refer to hunting as more of a pre-detection action than a completely separate process.  Granted, there are often times when hunting will lead to new detections, but I don't necessarily think that should always be the goal or even the focus.  I believe instead that hunting should produce insights into your data, with the hunting process defined around the goal of surfacing unknown, malicious activity.  I also don't believe that the results of a hunt should always lead to a true positive or false positive (this is where the pre-detection thought comes in), but instead provide enough information to act, watch or discard.

Over the past few years I've spent a lot of time creating capabilities around categories or stages of threats and have really found great success in this.  In building Jupyter notebooks that focus on these categories or stages you have the ability to consume, enrich and output data in ways that will allow you to gain insights that you may not otherwise be able to see.  Something that I learned along the way was not to let my knowledge or current tool/data set at that point in time be the limiting factor, but that my ability to hunt often hinged on my willingness to add additional methods, sources of data or even people with a particular expertise.

These past few months I've been working toward a methodology around hunting for social media targeting.  I initially wanted to be able to distinguish normal interactions from anomalous ones.  As I started working with this more and more though, there were several key decision points that needed to be answered over and over once suspicious profiles had been identified.  Things like...   Does the profile send connection requests to multiple employees with diverse job roles or do they send connection requests to a few people that are similar?  Does the profile follow up connection accepts with messages?  Are there multiple messages between a fake profile and an employee?  What is the job title of the fake profile?  Would the job title seem to entice a user into further interactions?

The biggest hurdle though is being able to identify fake profiles.  While I'm not going to go into the types of data we're collecting, I will say that the more visibility you have into web-based traffic, the better your collection will be.  You can also do a lot with simple email parsing though.  An email notification for a connection request or a message will include a link with the profile name as well as a link to the profile image.  You may also be surprised at the percentage of people that have their work email linked to their account.

The first stage of identification is looking for anomalous interactions.  I calculate statistics around how many users the profile interacted with, how many interactions occurred between each user and the profile, how many distinct interactions there were, and the span of time between user and profile.  I then run these numbers through an isolation forest to highlight outliers.  These results are basically eyeballed to see if anything stands out.
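As a rough sketch of that first step, assuming the collected interactions have been loaded into a pandas DataFrame df with hypothetical columns profile, user and timestamp (one row per interaction), the per-profile numbers might be built something like this:

import pandas as pd

# per-profile statistics that feed the isolation forest
profile_stats = df.groupby('profile').agg(
    user_count=('user', 'nunique'),        # how many employees the profile touched
    interaction_count=('user', 'count'),   # total interactions from the profile
    span_days=('timestamp', lambda s: (s.max() - s.min()).days)  # activity span
).reset_index()

# interactions per individual user/profile pair
pair_counts = df.groupby(['profile', 'user']).size().rename('pair_count')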




The next step is to run profile pictures through two stages of detection.  The first stage is a reverse image search to see if the picture was taken from somewhere on the internet.  The second stage is a GAN detection model (remember when I mentioned enlisting people with a particular expertise).
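For the GAN detection stage, a minimal sketch of the scoring step might look like the following, assuming a pre-trained binary classifier has been saved with torch.save as gan_detector.pt and outputs a single logit (the model file, input size and threshold here are all assumptions):

import torch
from PIL import Image
from torchvision import transforms

# hypothetical pre-trained model that scores how likely an image is GAN-generated
model = torch.load('gan_detector.pt')
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def is_probably_gan(image_path, threshold=0.8):
    img = preprocess(Image.open(image_path).convert('RGB')).unsqueeze(0)
    with torch.no_grad():
        score = torch.sigmoid(model(img)).item()
    return score >= threshold, score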





All interactions are ingested into a Neo4j database before any detections take place.  Once a detection takes place, labels are assigned to the profile in the database and all previous and future interactions are marked as suspicious.


Code for database inserts of all interactions.
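A minimal sketch of what an insert like that could look like, using the Python neo4j driver; the node labels, relationship type and property names here are my own assumptions, not necessarily the actual schema:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def insert_interaction(tx, profile_name, user_email, jobtitle, ts):
    # MERGE keeps the profile and user nodes unique; each interaction adds a relationship.
    # Labels can't be parameterized in Cypher, so the job title label is formatted in
    # (only do this with a sanitized value).
    query = """
        MERGE (p:Profile {name: $profile_name})
        MERGE (u:User:`%s` {email: $user_email})
        MERGE (p)-[:INTERACTED {time: $ts}]->(u)
    """ % jobtitle
    tx.run(query, profile_name=profile_name, user_email=user_email, ts=ts)

with driver.session() as session:
    session.execute_write(insert_interaction,
                          "fake-profile-123456", "user@example.com", "Engineer", "2022-02-20")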


Code for adding labels.
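And a sketch of the labeling step, reusing the driver from above and assuming a hypothetical Fake label plus a suspicious property on the relationships:

def mark_profile_fake(tx, profile_name):
    # add a Fake label to the detected profile and flag its existing interactions;
    # the insert path would flag future interactions the same way
    tx.run("""
        MATCH (p:Profile {name: $profile_name})
        SET p:Fake
        WITH p
        MATCH (p)-[r]-()
        SET r.suspicious = true
        """, profile_name=profile_name)

with driver.session() as session:
    session.execute_write(mark_profile_fake, "fake-profile-123456")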



I also believe that there are stages or escalations in a LinkedIn connection.  One of these escalations is sending a message to a user, so regular interactions and messaging are different relationship types in the database.  In the image below, KNOWS signifies a message was sent.




By labeling nodes we can start grouping by labels.  You may have noticed in the screenshot for the initial insert that a label "jobtitle" was added.  Visually you would be able to see if similar job roles were being targeted, or you could look at, say, all engineers that had interactions with fake accounts.  Below is a graph of interactions with fake accounts within a 7-day period.


 

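As an example of that kind of grouping, a query along these lines (using the hypothetical labels from the sketches above) would pull every interaction between fake profiles and users carrying an Engineer label:

data_list = tx.run("""
    MATCH (p:Profile:Fake)-[r]-(u:User:Engineer)
    RETURN p.name AS profile, u.email AS user, type(r) AS interaction
    """).data()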
Being able to also pull data back into the notebook based on labels and relationships brings additional opportunities for further enrichment and analysis.  The following query would retrieve interactions for the stated profile.

data_list = tx.run("""MATCH (r)-[:KNOWS]-(p {name: 'fake-profile-123456'}) RETURN r,p""").data()

One of the enrichments I've done is retrieving users based on interactions with watched accounts and bringing all of the detections each identified user has had in the past 'x' number of days into the notebook.

The notebook does so much more, and the capability here is obviously not something you can write a detection for, but is rather an environment to hunt for a specific type of threat in.  This allows us to hunt at scale without needing several people spending hours a day.  It also allows us to pivot in ways that you simply wouldn't be able to do otherwise.

This is the result of shared work and I would personally like to thank the people that know way more than I do about social media targeting as well as those that helped with the GAN detection model.  You know who you are.



Sunday, October 31, 2021

Measuring User Behavior

I'm always looking for different ways to look at data.  Things that will give me insights into how users act as they go about their daily activities.  Having done this for a while now I can say that people typically like routines as they go about their work, but how they go about their work is largely dictated by their job role.  Examples may be the developer that often creates archive files or the salesperson that authenticates from multiple locations per day.

Operating under the assumption that people will generally look the same from day to day, it would make sense to understand why someone may deviate from their normal routine, especially in a security context.  One way we can do this is to measure their own, unique behavior and apply some anomaly detection to highlight days of interest.

For the example I'm going to show, I wanted to focus on insiders from a data theft standpoint, but this method can be applied to many different scenarios as long as you can define the points of interest to observe.  The hypothesis that I'm using for insider data theft is:

1. Users who knowingly steal data will often use deceptive actions. 
2. They will perform actions that are new or rare for them. 
3. They will use uncommon exfiltration paths.
4. Rare actions across multiple phases can be identified.

The phases (categories) I'm using here are deception, data staging and exfiltration.

Below I'm pasting screenshots of the relevant portions of the Jupyter notebook I'm using.  The cool thing about using Jupyter is that it's independent of the log source.  I can use this as long as the data can be exported or accessed via API.  Code blocks below are commented to describe their function.



Below is an example rule from an external rule file.
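The rule format itself isn't important; as a purely hypothetical illustration, a rule entry could be as simple as a dictionary the notebook loads and evaluates against each day's events (all field names and values here are made up):

rule_1 = {
    "name": "Rule_1",
    "description": "Archive file created by the user",   # feeds the Archive_Count idea below
    "match": "file_extension in ('zip', '7z', 'rar')",   # illustrative matching logic
    "phase": "data staging",                              # category the detection falls under
    "weight": 50                                          # value added to the day's Weighted sum
}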




Below would be the output from the anomaly detection.  



The fields that are being measured are:

Rule_1 - Rule_6: Detection rules from rules file
Archive_Count: The number of archives created by day
Mail_Count: The number of emails per day sent to a personal email provider with an attachment
Rename_Count: The number of files renamed in a day
NTU_Count: The number of files uploaded to an external source in a day
Weighted: The sum by day of additional values given to rule detections
Day_Count: The total count of all events by day
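A minimal sketch of the anomaly detection step, assuming those per-day fields have been assembled into a pandas DataFrame named df with one row per user per day (column names as listed above):

from sklearn.ensemble import IsolationForest

features = ["Rule_1", "Rule_2", "Rule_3", "Rule_4", "Rule_5", "Rule_6",
            "Archive_Count", "Mail_Count", "Rename_Count", "NTU_Count",
            "Weighted", "Day_Count"]

# contamination is a guess at the fraction of days expected to look anomalous
iso = IsolationForest(n_estimators=100, contamination=0.01, random_state=42)
df["anomaly"] = iso.fit_predict(df[features])   # -1 = outlier, 1 = normal
outlier_days = df[df["anomaly"] == -1]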

This method won't tell you 100% that an anomaly is malicious.  But you may be able to infer what the user was doing and the likelihood of needing further investigation.  An example may be that 2 archive events + 2 email events result in 2 zip files being emailed to a personal email address.  That may or may not need investigating depending on the user's role.







 

Saturday, February 27, 2021

More Behavioral Hunting and Insider Data Theft

I consider hunting for insider data theft to be the apex of user behavioral analysis.  I recently gave a presentation on this at an internal conference that my team holds once a year.  The talk was titled "How I spent my pandemic" and focused on the things that I've built, discovered and learned over these past months as they relate to this topic.  I'd like to share some of those things in this post.

If you've paid attention to the many indictments handed down by the DoJ over the past few years, you can see that espionage via insider theft is real and happens quite often.  I don't see that companies are well positioned to identify evidence of this type of theft, though.  Some reasons for this, I believe, are that there are no public repositories of knowledge, no security companies blogging about cases they've investigated and nobody publicly talking about what works and doesn't work in the detection realm.  I liken it to how the APT was viewed and discussed 10 years ago.

The end goal of insiders and many state-sponsored external actors is the same, but how they get there can be very different.  Generally speaking, there is no exploitation to gain access to a target network.  No malware to maintain or facilitate further access.  No internal recon, or the many other actions that are typically taken by external actors during active intrusions.  On the contrary.  Insiders will often use approved applications to target data they typically access on devices they are assigned.  What we are left with are changes in their behavior.  That's great.  We have changes in behavior, but changes can happen all of the time for many different reasons.  The question is how we find the changes that matter.  Here is where we begin.

I've written previous blogs around behavior anomalies at an individual level, but have not discussed measuring behavior across a population of people.  The thought here is that data theft in general should be an anomaly, so when it occurs the user should end up in a cluster that is far outside the norm.  Here's the Splunk query I use to generate the numbers for scoring and clustering:

index=a_summary_index 
|stats values(phase_id) as phase_id count(phase_id) as detection_count dc(phase_id) as dc_phase_count values(source_id) as source_id dc(source_id) as dc_detection_count by user 
|eventstats count(user) as user_count by phase_id 
|nomv phase_id 
|nomv source_id 
|eval userhash=md5(user) 
|eval phase_hash=md5(phase_id) 
|eval source_hash=md5(source_id) 
|eventstats count(user) as user_source_count by source_id 
|table user,userhash,phase_hash,source_hash,phase_id,source_id,user_count,user_source_count,detection_count,dc_phase_count,dc_detection_count 
|eval user_count_mult=case(user_count=1, 200, user_count<5, 150, user_count<10, 50, user_count>=10, 0) 
|eval dc_phase_count_mult=case(dc_phase_count<2, 0, dc_phase_count>=2, 100) 
|eval dc_detection_count_mult=case(dc_detection_count>2, 100, dc_detection_count=2, 50, dc_detection_count=1, 0) 
|eval addedweight = (user_count_mult+dc_phase_count_mult+dc_detection_count_mult)

To explain the search:

index=a_summary_index |stats values(phase_id) as phase_id count(phase_id) as detection_count dc(phase_id) as dc_phase_count values(source_id) as source_id dc(source_id) as dc_detection_count by user |eventstats count(user) as user_count by phase_id |nomv phase_id |nomv source_id |eval userhash=md5(user) |eval phase_hash=md5(phase_id) |eval source_hash=md5(source_id) |eventstats count(user) as user_source_count by source_id |table user,userhash,phase_hash,source_hash,phase_id,source_id,user_count,user_source_count,detection_count,dc_phase_count,dc_detection_count 

  • Each detection is logged to a summary index so that I can search over past events.
  • phase_id is a name given to a stage that the user is at in relation to data exfil
  • source_id is the name of the search that generated the detection
  • dc_phase_count is the distinct count of phases
  • dc_detection_count is the distinct count of different detection names
  • user_count is the count of users that were seen in a distinct "phase_id".
  • nomv phase_id: converts multivalue field to a single value (used for hashing)
  • nomv source_id: converts multivalue field to a single value (used for hashing)
  • userhash is the md5 hash of the username
  • phase_hash is the md5 hash of the distinct phases that the user was seen in
  • source_hash is the md5 hash of the distinct detection names that were generated by the user
  • user_source_count is the count of users with matching detections
The remainder of the search is used for scoring:

|eval user_count_mult=case(user_count=1, 200, user_count<5, 150, user_count<10, 50, user_count>=10, 0) |eval dc_phase_count_mult=case(dc_phase_count<2, 0, dc_phase_count>=2, 100) |eval dc_detection_count_mult=case(dc_detection_count>2, 100, dc_detection_count=2, 50, dc_detection_count=1, 0) |eval addedweight = (user_count_mult+dc_phase_count_mult+dc_detection_count_mult)

  • user_count_mult is a value given to the number of users seen with a phase_id.  Fewer users = higher score
  • dc_phase_count_mult is a value given to the number of distinct phases the user was seen in
  • dc_detection_count_mult is a value given to the number of distinct detection names the user generated
  • addedweight is the sum of the above values
Sample output would look like the following:



I can then use kmeans to cluster the output:











The outlier in this case was cluster number 5 which generated a risk score of 400:




I feel that when a user begins to execute on their intent to steal data they will generate anomalous clusters of detections.  These detections can be measured, scored and highlighted against the overall population of users.  Looking for these higher scores is a great way to hunt for anomalous behavior patterns.  This method could also be applied to external threats, as an external attacker will likely generate anomalous behavior patterns when moving laterally within your environment.

Tuesday, July 7, 2020

Insider Threat Hunting


If you subscribe to the notion that a user who is intent on stealing data from your org will require a change in their behavior, then identifying that change is critically important.  As this change happens, they will take actions that they have not previously taken.  These actions can be seen as anomalies, and this is where we want to identify and analyze their behavior.

I've been studying insider IP theft, particularly cases with a connection to China, for a number of years now.  I feel that, in a way, this problem mimics the APT of 10 years ago.  Nobody with exposure to it is willing to talk about the things that work or don't.  This leaves open the opportunity for this behavior to successfully continue.  While I'm not going to share specific signatures, I would like to talk about the logic I use for hunting.  This is my attempt to generate conversation around an area that, in my opinion, can't be ignored.

Just as in network intrusions, there are phases that an insider will likely go through before data exfil occurs.  But unlike network intrusions, these phases are not all required, as the insider probably already has all the access they need.  They may perform additional actions to hide what they are doing.  They may collect data from different areas of the network.  They may even simply exfil data with no other steps.  There are no set TTP's for classes of insiders, but below are some phases that you could see:

Data Discovery
Data Collection
Data Packaging
Data Obfuscation
Data Exfil

I've also added a few additional points that may be of interest.  These aren't necessarily phases, but may add to the story behind the behavior.  I'm including them in the phase category for scoring purposes though.  The scoring will be explained more below.  The additional points of interest are:

Motive - Is there a reason behind their actions?
Job Position - Does their position give them access to sensitive data?
Red Flags - e.g. Employee has submitted 2 week notice.

By assigning these tags, behaviors that enter multiple phases suddenly become more interesting.  In many cases, multiple phases such as data packaging -> data exfil should rise above a single phase such as data collection.  This is because a rule is only designed to accurately identify an action, regardless of intent.  But by looking at the sum of these actions we can begin to surface behaviors.  This is not to say that the total count of a single rule or a single instance of a highly critical rule will not draw the same attention.  It should, and that's where rule weighting comes in. 

Weighting is simply assigning a number score to the criticality of an action.  If a user performs an action that is being watched, a score is assigned to their total weight (weighted) for the day.  Depending on a user's behavior, their weighted score may rise over the course of that day.  If a user begins exhibiting anomalous behavior and a threshold is met, based on certain conditions, an alert may fire.

An explanation of alert generation:  My first attempt at this was simply to correlate multiple events per user.  As I developed new queries the number of alerts I received grew drastically.  There was really no logic other than looking for multiple events, which simply led to noise.  I then sat down and thought about the reasons why I would want to be notified and came up with:

User has been identified in multiple rules + single phase + weight threshold met (500)
User has been identified in multiple phases + weight threshold met (300)
User exceeds weight threshold (1000)

To describe this logic in numbers it would look like:
|where ((scount > 1 and TotalWeight > 500) OR (pcount > 1 and TotalWeight > 300) OR (TotalWeight > 1000))

By implementing those 3 requirements I was able to eliminate the vast majority of noise and began surfacing interesting behavior.  I did wonder what I may be missing though.  Were there users exhibiting behavior that I would potentially want to investigate?  Obviously my alert logic wouldn't identify everything of interest.  I needed a good way to hunt for threads to pull, so I set about describing behaviors in numbers.  I wrote about this a little in a previous post http://findingbad.blogspot.com/2020/05/its-all-in-numbers.html, but I'll go through it again.

Using the existing detection rules I used the rule name, weight and phase fields to create metadata that would describe a user's behavior.  Here are the fields I created and how I use them:

Total Weight - The sum weight of a user's behavior.
Distinct rule count - The number of unique rule names a user has been seen in.
Total rule count - The total number of rules a user has been seen in.
Phase count - The number of phases a user has been seen in.

Knowing that riskier behavior often involves multiple actions taken by a user, I created the following fields to convey this.

Phase multiplier - Additional value given to a user that is seen in multiple phases.  Increases for every phase above 1.
Source multiplier - Additional value given to a user that is seen in multiple rules.  Increases for every rule above 1.

We then add Total Weight + Total rule count + Phase multiplier + Source multiplier to get the user's weighted score.
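In pandas terms the scoring might look something like this sketch, assuming a DataFrame df with one row per user per day; the column names and multiplier increments are placeholders, not the production values:

# hypothetical columns: total_weight, total_rule_count, distinct_rule_count, phase_count
df["phase_multiplier"] = (df["phase_count"] - 1).clip(lower=0) * 100          # grows with each phase above 1
df["source_multiplier"] = (df["distinct_rule_count"] - 1).clip(lower=0) * 50  # grows with each rule above 1
df["weighted"] = (df["total_weight"] + df["total_rule_count"]
                  + df["phase_multiplier"] + df["source_multiplier"])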







By generating these numbers we can not only observe how a user acted over the course of that day, but also surface anomalous behavior when compared to how other users acted.  For this I'm using an isolation forest and feeding it the total weight, phase count, total rule count and weighted numbers.  I feel these values best describe how a user acted and therefore are best used to identify anomalous activity.














I'm also storing this metadata so that I can:

Look at their behavior patterns over time.  This particular user was identified on 4 different days:
















Compare their sum of activity to other anomalous users.  This will help me identify the scale of their behavior.  This user's actions are leaning outside of normal patterns:













I can also look at the daily activity and compare that against top anomalous users or where they rank as a percentage.  You can see on the plot below that the user's actions were anomalous on a couple of different days. 













There are also a number of other use cases for retaining this metadata.

This has taken a lot of time and effort to get to this point and is still a work in progress.  I can say though that I have found this to be a great way to quickly identify the threads that need to be pulled.

Again, I'm sharing this so that maybe a conversation will begin to happen.  Are orgs hunting for insiders, and if so, how?  It's a conversation that's long overdue in my opinion.

Monday, June 22, 2020

Dynamic Correlation, ML and Hunting


Hunting has been my primary responsibility for the last several years.  Over this time I've done a lot of experimentation around different processes and methods of finding malicious activity in log data. What has always stayed true though is the need for a solid understanding of the hypothesis you're using, familiarity with all the data you can take advantage of and a method to produce/analyze the results.  For this post I'd like to share one of the ideas I've been working on lately. 

I've previously written a number of blog posts on beaconing.  Over time I've refined much of how I go about looking for these anomalous connections.  My rule of thumb for hunting beacons (or other types of malicious activity) is to ignore static IOC's as those are best suited for detection.  Instead, focus on behaviors or clusters of behaviors that will return higher confidence output.  Here's how I'm accomplishing this in a single Splunk search.

  1. Describe what you are looking for in numbers.  This will allow you to have much more control over your conditional statements which impacts the quality of your output.
  2. Define those attributes that you are interested in and assign number values to them.  These attributes will be your points of correlation.
  3. Reduce your output to those connections that exhibit any of what you are looking for.  This is the correlation piece where we can use the total score of all attributes identified within a src/dest pair.  Higher sums translate to greater numbers of attributes identified.

Below is a screenshot of the search I came up with.  This is again using the botsv3 data set from Splunk's Boss of the SOC competition.  Thanks Splunk!














The following is a description of the fields in the output.

-dest: Based on the data source, this field may include the ip address or domain name.
-src_ip: Source of request
-dest_ip: Destination of request
-bytes_out: Data sent from src_ip.
-distinct_event_count: The total number of connections per destination.
-i_bytecount: The total count of bytes_out by src/dest/bytes_out. Large numbers may indicate beaconing.
-t_bytecount: The total count of connections between src/dest.
-avgcount: i_bytecount / t_bytecount.  Beacon percentage.  Values closer to 1 are more indicative of a beacon.
-distinct_byte_count: Total count of bytes_out (used in determining percentages for beaconing).
-incount: The count of unique bytes_in values.  When compared with t_bytecount you may see the responses to beacons.
-time_count: The number of hours the src/dest have been communicating.  Large numbers may indicate persistent beaconing.
-o_average: The average beacon count between all src/dests.
-above: The percentage above o_average.
-beaconmult:  weight multiplier given to higher above averages.
-evtmult: weight multiplier given to destinations with higher volume connections.
-timemult: weight multiplier given to connections that last multiple hours.
-addedweight: The sum of all multipliers.

You can see from the search results that we reduced 30k+ events down to 1700 that exhibit some type of behavior that we're interested in.  This is good, but it's still not feasible to analyze every event individually.  I have a couple of choices to reduce my output at this point.  I can adjust my weighted condition to something like "|where weighted > 100", which would have the effect of requiring multiple characteristics being correlated.  My other choice is to use some type of anomaly detection to surface those odd connections.  You can probably tell from the "ML" portion of the title which direction I'm going to go.  So from here we need a method to pick out the anomalies, as the vast majority of this data is likely legitimate traffic.  For this I'll be inserting our results into a MySQL database.  I don't necessarily need to for this analysis, but it's a way for me to keep the metadata of the connections for greater periods of time.  This will allow me to do longer term analysis based on the data that is being stored.
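Getting the exported results into MySQL can be as simple as the sketch below, assuming the search output has been pulled into a pandas DataFrame (the connection string and table name are placeholders):

import pandas as pd
from sqlalchemy import create_engine

# placeholder connection string and table name
engine = create_engine("mysql+pymysql://hunter:password@localhost/beacons")

# results_df holds the exported Splunk output (src_ip, dest_ip, bytes_out, addedweight, ...)
results_df.to_sql("connection_metadata", engine, if_exists="append", index=False)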







Once it's in the database we can use Python and various ML algorithms to surface anomalous traffic.  For this I'll be using an Isolation Forest.  I'll also be choosing fields that I think best represent what a beacon looks like, as I don't want to feed every field through this process.

distinct_event_count: Overall activity.
time_count: How persistent is the traffic?
above: How does the beacon frequency compare to all other traffic?
addedweight: How many beacon characteristics does this traffic exhibit?

The following screenshot contains the code as well as the output.
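A minimal sketch of that step, reusing the placeholder table name from above, might look like this:

import pandas as pd
from sklearn.ensemble import IsolationForest
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://hunter:password@localhost/beacons")
df = pd.read_sql("SELECT * FROM connection_metadata", engine)

features = ["distinct_event_count", "time_count", "above", "addedweight"]
iso = IsolationForest(n_estimators=100, random_state=42)
iso.fit(df[features])

# lower decision_function scores = more anomalous; keep the most anomalous ~0.3%
df["score"] = iso.decision_function(df[features])
top = df.nsmallest(max(int(len(df) * 0.003), 1), "score")
print(top[["src_ip", "dest_ip"] + features])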

Looking at the top 3 tenths of 1 percent of the most anomalous src/dest pairs you can see that there are 4 destination ip addresses that may need investigating.  If you've read my last 2 posts on beaconing the 45.77.53.176 ip should look familiar.  This ip was definitely used for C2.  The 172.16.0.178 ip is also interesting.  Taking a quick look at the destination in the botsv3 data, you can see memcached injection that appears to be successful.  Additional investigation of the src ip's in this output would definitely be justified.

I will say that this method is very good at identifying beacons, but beacons are not always malicious.  More work may be needed to surface the malicious ones.  Some additional ideas may be first-seen IPs or incorporating proxy data, where even more characteristics can be defined, scored and correlated.

A large portion of hunting is experimentation so experiment with the data and see what you can come up with!

Sunday, May 17, 2020

It's all in the numbers


In my last few posts I talked about hunting for anomalies in network data.  I wanted to expand on that a bit and specifically talk about a way we can create metadata around detectable events and use those additional data points for hunting or anomaly detection.  The hope is that the metadata will help point us toward avenues of investigation that we may not normally take.

For this post I'm again using the BOTS data from Splunk and I've created several saved searches based on behaviors we may see during an intrusion.  Once the saved searches run, the output results are logged to a summary index.  More on that topic can be found here: http://findingbad.blogspot.com/2017/02/hunting-for-chains.html.  The goal is to get all of our detection data into a queryable location and into a form that we can count.

For our saved searches we want to ensure the following.

Create detections based on behaviors:
  • Focus on accuracy regardless of fidelity.
  • A field that will signify an intrusion phase where this detection would normally be seen.
  • A field where a weight can be assigned based on criticality.
  • A common field that can be found in each detection output that will identify the asset or user (src_ip, hostname, username...).
Once the output of our saved searches begins to populate the summary index we would like to have results similar to the screenshot below:













The following is the definition of the fields:
(Note: the events in the screenshot have been deduped.  All calculations have taken place, but am limiting the number of rows.  Much of what is identified in the output is data from the last detection before the dedup occurred.)
  • hostname: Self explanatory, but am also using the src_ip where the hostname can't be determined.
  • source: The name of the saved search.
  • weight: Number assigned that represents criticality of event.
  • phase: Identifier assigned for phase of intrusion.
  • tweight: The sum weight of all detected events.
  • dscount: The distinct count of unique detection names (source field).
  • pcount: The number of unique phases identified.
  • scount: Total number of detections identified.
  • phasemult: An additional value given for number of unique phases identified where that number is > 1.
  • sourcemult: An additional value given for number of unique sources identified where that number is > 1.
  • weighted: The sum score of all values from above.
There are a few points that I want to discuss around the additional fields that I've assigned and the reasons behind them.
  • Phases (phase,pcount,phasemult): Actors or insiders will need to step through multiple phases of activity before data theft occurs.  Identifying multiple phases in a given period of time may be an indicator of malicious activity. 
  • Sources (source,scount,dscount,sourcemult): A large number of detections may be less concerning if all detections are finding the same activity over and over.  Actors or insiders need to perform multiple steps before data theft occurs, and therefore fewer detections, where those detections surround different actions, would be more concerning.
  • Weight: Weight is based on criticality.  If I see a large weight with few detections, I can assume the behavior may have a higher likelihood of being malicious.
  • Weighted: High scores tend to mean more behaviors were identified and that those behaviors span multiple phases or detections.
Now that we've performed all of these calculations and have a good understanding of what they are, we can run k-means and cluster the results.  I downloaded a csv from the Splunk output and named it cluster.csv.  Using the code below you can see I chose 3 clusters using the tweight, phasemult and scount fields.  I believe that the combination of these fields can be a good representation of anomalous behavior (I could also plug in other combinations and potentially surface other behaviors).
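A sketch of that clustering step, using the cluster.csv export and the three fields mentioned (the hostname column comes from the field list above):

import pandas as pd
from sklearn.cluster import KMeans

df = pd.read_csv("cluster.csv")
features = df[["tweight", "phasemult", "scount"]]

km = KMeans(n_clusters=3, random_state=42, n_init=10)
df["cluster"] = km.fit_predict(features)

# inspect what landed in each cluster
for label, group in df.groupby("cluster"):
    print(label)
    print(group[["hostname", "tweight", "phasemult", "scount"]].head())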












The following is the contents of those clusters.
















Based on the output, the machine in cluster 1 definitely should be investigated.  I would also investigate the machines in cluster 2.

Granted, this is a fairly small data set, but it is a great representation of what can be done in much larger environments.  The scheduling of this method could also be automated, with the results actioned, correlated, alerted on, etc.

Again I would like to thank the Splunk team for producing and releasing BOTS.  It's a great set of data to test with and learn from.




Thursday, May 7, 2020

Hunting for Beacons Part 2


In my last post I talked about a method of hunting for beacons using a combination of Splunk and K-Means to identify outliers in network flow data.  I wanted to write a quick update to that post so that I can expand on a few things.

In that blog post I gave these different points that help define general parameters that I can begin to craft a search around.  This helps to define what it is I'm trying to look for and, in a way, builds a sort of framework that I can follow as I begin looking for this behavior.

  1. Beacons generally create uniform byte patterns
  2. Active C2 generates non uniform byte patterns
  3. There are far more flows that are uniform than non uniform
  4. Active C2 happens in spurts
  5. These patterns will be anomalous when compared to normal traffic

Using the definition above, I went out and built a method that will identify anomalous traffic patterns that may indicate malicious beaconing.  It worked well for the sample data I was using, but when implementing this method against a much larger dataset I had problems.  The number of anomalous datapoints was much greater and therefore the fidelity of the data I was highlighting was much lower (if you haven't read my last post I would recommend it).  The other issue was that it took much longer to pivot into the data of interest and then try to understand why that pattern was identified as an outlier.  I then decided to see if I could take what I was trying to do with k-means and build it into my Splunk query.  Here is what I came up with:

The entire search looks like this:


index=botsv3 earliest=0 (sourcetype="stream:tcp" OR sourcetype="stream:ip") (dest_port=443 OR dest_port=80) 
|stats count(bytes_out) as "beacon_count" values(bytes_in) as bytes_in by src_ip,dest_ip,bytes_out 
|eventstats sum(beacon_count) as total_count dc(bytes_out) as unique_count by src_ip,dest_ip 
|eval beacon_avg=('beacon_count' / 'total_count') 
|stats values(beacon_count) as beacon_count values(unique_count) as unique_count values(beacon_avg) as beacon_avg values(total_count) as total_count values(bytes_in) as bytes_in by src_ip,dest_ip,bytes_out 
|eval incount=mvcount(bytes_in) 
|join dest_ip [|search index=botsv3 earliest=0 (sourcetype="stream:tcp" OR sourcetype="stream:ip") |stats values(login) as login by dest_ip |eval login_count=mvcount(login)] 
|eventstats avg(beacon_count) as overall_average 
|eval beacon_percentage=('beacon_count' / 'overall_average') 
|table src_ip,dest_ip,bytes_out,beacon_count,beacon_avg,beacon_percentage,overall_average,unique_count,total_count,incount,login_count 
|sort beacon_percentage desc

Breaking it down:


Collect the data that will be parsed:
  • index=botsv3 earliest=0 (sourcetype="stream:tcp" OR sourcetype="stream:ip") (dest_port=443 OR dest_port=80)


Count the number of times a unique byte size occurs between a src, dst and dst port:
  • |stats count(bytes_out) as "beacon_count" values(bytes_in) as bytes_in by src_ip,dest_ip,bytes_out


Count the total number of times all bytes sizes occur regardless of size and the distinct number of unique byte sizes:
  • |eventstats sum(beacon_count) as total_count dc(bytes_out) as unique_count by src_ip,dest_ip


Calculate average volume of src,dst,byte size when compared to all traffic between src,dst:
  • |eval beacon_avg=('beacon_count' / 'total_count')


Define fields that may be manipulated, tabled, counted:
  • |stats values(beacon_count) as beacon_count values(unique_count) as unique_count values(beacon_avg) as beacon_avg values(total_count) as total_count values(bytes_in) as bytes_in by src_ip,dest_ip,bytes_out


Count the number of unique bytes_in sizes between src,dst,bytes_out.  Can be used to further define parameters with respect to beacon behavior:
  • |eval incount=mvcount(bytes_in)


*** Below is what was added to the original query ***

Generally there will be a limited number of users beaconing to a single destination.  If this query is looking at an authenticated proxy this will count the total number of users communicating with the destination (this can also add a lot of overhead to your query):
  • |join dest_ip [|search index=botsv3 earliest=0 (sourcetype="stream:tcp" OR sourcetype="stream:ip") |stats values(login) as login by dest_ip |eval login_count=mvcount(login)]


Calculate the average number of counts between all src,dst,bytes_out:
  • |eventstats avg(beacon_count) as overall_average


Calculate the volume percentage by src,dst,bytes_out based off the overall_average:
  • |eval beacon_percentage=('beacon_count' / 'overall_average')


And the output from the Splunk botsv3 data:
















You can see from the output above that the first 2 machines were ones identified as compromised.  The volume of their beacons was 1600 and 400 times more than the average volume of traffic between src,dst,bytes_out.  By adding the bottom portion of the search I've basically built the outlier detection into the query.  You could even add a parameter to the end of the search like "|where beacon_percentage > 500" and only surface anomalous traffic.  Also, by adjusting the numbers in these fields you can really turn the levers and tune the query to different environments.

(beacon_count,beacon_avg,beacon_percentage,overall_average,unique_count,total_count,incount,login_count)

If you were to apply this to proxy data you could also run multiple queries based on category.  This may increase the speed and take some of the load off Splunk.

I've also not given up on K-Means.  I just pivoted to using a different method for this.

*** Adding an update to include a Splunk search with a risk scoring function ***

index=someindex sourcetype=somesourcetype earliest=-1d 
|stats count(bytes_out) as "i_bytecount" values(bytes_in) as bytes_in by src_ip,dest_ip,bytes_out 
|eventstats sum(i_bytecount) as t_bytecount dc(bytes_out) as distinct_byte_count by src_ip,dest_ip 
|eval avgcount=('i_bytecount' / 't_bytecount') 
|stats values(i_bytecount) as i_bytecount values(distinct_byte_count) as distinct_byte_count values(avgcount) as avgcount values(t_bytecount) as t_bytecount values(bytes_in) as bytes_in by src_ip,dest_ip,bytes_out |eval incount=mvcount(bytes_in) 
|join dest_ip 
[|search index=someindex sourcetype=somesourcetype earliest=-1d 
|bucket _time span=1h 
|stats values(user) as user values(_time) as _time dc(url) as distinct_url_count count as distinct_event_count by dest_ip,dest 
|eval time_count=mvcount(_time) 
|eval login_count=mvcount(user)] 
|table dest,src_ip,dest_ip,bytes_out,distinct_url_count,distinct_event_count,i_bytecount,distinct_byte_count,avgcount,t_bytecount,incount,login_count,user,time_count 
|search t_bytecount > 1 login_count < 3 
|eventstats avg(i_bytecount) as o_average 
|eval above=('i_bytecount' / 'o_average') 
|eval avgurl=(distinct_url_count / distinct_event_count) 
|eval usermult=case(login_count=1, 100, login_count=2, 50, login_count>2, 0) 
|eval evtmult=case(distinct_event_count>300, 100, distinct_event_count>60, 50, distinct_event_count<=60, 0) 
|eval beaconmult=case(above>100, 200, above>5, 100, above<=5, 0) 
|eval urlmult=case(avgurl>.06 AND avgurl<.94, 0, avgurl>.95, 100, avgurl<.05, 100) 
|eval timemult=case(time_count > 7, 100, time_count<=7, 0) 
|eval addedweight = (evtmult+usermult+beaconmult+urlmult+timemult) 
|dedup dest 
|search addedweight > 250