Wednesday, September 28, 2016

Forensic Analysis of Anti-Forensic Activities

[Screenshot: tweet about anti-forensics and memory forensics]
If you were to believe the above tweet, you may think “What’s the point? We’re doomed by this anti-forensic stuff.” No offense to Brent Muir, but I think differently.
A few weeks ago I attended Shmoocon and sat in on the presentation by Jake Williams and Alissa Torres regarding subverting memory forensics.  As I sat through the talk I kept thinking to myself that it would be impossible to completely hide every artifact related to your activity, which Jake also stated in his presentation.  Seeing a tweet the other day with a link to download the memory image, I quickly grabbed it to see what I could find (if you are interested in having a look at the image, it can be found here).  I will also state that I have not looked at anything else that was posted, as I didn’t want any hints on what to look for.
My first course of action once I had the memory dump was to figure out what OS I was dealing with. To do this I used Volatility with the imageinfo plugin.
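If you want to follow along at home, the command is something along these lines (the memory image file name is a placeholder; use whatever the downloaded image is named):

$ vol.py -f memdump.img imageinfo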
[Screenshot: Volatility imageinfo output]
We can see from the above output that we have an image from a Windows 7 SP1 machine. My next step was to create a strings file of the memory image, grabbing both ASCII and Unicode strings.
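For reference, this is roughly how the strings file gets built with GNU strings (-t d keeps decimal offsets, which Volatility’s strings plugin will want later, and -e l grabs the 16-bit little-endian, i.e. Unicode, strings; the file names are placeholders):

$ strings -a -t d memdump.img > memdump_strings.txt
$ strings -a -t d -e l memdump.img >> memdump_strings.txt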
[Screenshot: creating the strings file]
Now that I had all the data I needed I could begin my investigation.
Whenever you are given a piece of data to analyze, it is usually given to you for a reason. Whatever that reason is, our first course of action should be to find any evidence that will either confirm or refute it. I knew, based on the presentation, that the purpose of the tool was to hide artifacts from a forensic investigator. Knowing this, I set out to find any evidence that would validate these suspicions.
I first began by looking at the loaded and unloaded drivers, since the tool would most likely need to interact with the kernel to be able to modify these various memory locations. I wanted to see if there were any odd filenames, memory locations, or file paths, and used the Volatility plugins modscan and unloadedmodules for this. This unfortunately led me to a dead end.
My next course of action was to take a look at the processes that were running on the machine. I compared the output from pslist and psscan as a way to look for processes in different ways and to note any differences. The pslist plugin will walk the doubly-linked list of EPROCESS structures, whereas the psscan plugin will scan for EPROCESS structures. My thought was that if a process had been hidden by unlinking itself, I would be able to identify it by comparing the 2.
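For anyone following along, the comparison looks something like this (a rough sketch, assuming bash; the profile comes from the imageinfo output above, the awk simply skips the header lines and pulls out the process-name column, and the file names are placeholders):

$ vol.py -f memdump.img --profile=Win7SP1x86 pslist > pslist.txt
$ vol.py -f memdump.img --profile=Win7SP1x86 psscan > psscan.txt
$ diff <(awk 'NR>2 {print $2}' pslist.txt | sort) <(awk 'NR>2 {print $2}' psscan.txt | sort)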
[Screenshot: running the pslist and psscan plugins]
By diff’ing the 2 files I was able to see something odd in the psscan output.
[Screenshot: diff output showing qggya123.exe in the psscan results]
We can see that qggya123.exe definitely looks odd. If we look for our odd process name using this plugin, it appears that the process may no longer be running, as there is no end time listed in the output; or this may simply be related to Jake’s tool.
[Screenshot: output for the qggya123.exe process]
The psxview plugin will attempt to enumerate processes using different techniques, the thought being that if you attempt to hide a process you will have to evade detection in a number of different ways. Looking at the output below, the only way we are able to see that this suspicious-looking process was running (at least at one point in time) is by scanning for these EPROCESS structures.
[Screenshot: psxview output]
Now that we have something to go off of, my next step was to see if I could gain a little more context around the process name. The PID of our suspicious process is 5012 with a PPID of 512. If we look to see what PID 512 is, we can see that it was started under services.exe.
[Screenshot: process listing for PID 512]
Naturally I wanted to see what services were running on the machine. For this I used the svcscan plugin, but was unable to locate our suspicious filename.
The next step I took was to search for the filename in the memory strings. The output below shows the file being executed from the command line by a file named add.exe. We also see that our PID and PPID are included as parameters. Now that we know how it was executed, we have an additional indicator to search for.
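Since we already have the strings file, a simple grep is enough to pull these lines out (the file name is a placeholder):

$ grep -i qggya123 memdump_strings.txt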
[Screenshot: memory strings showing qggya123.exe being executed by add.exe]
Searching for add.exe, I was able to see several different items of interest:
1. add.exe with the /proc parameter was used to execute 3 different files: qggya123.exe, cmd.exe and rundll32.exe
2. qggya123.exe is the parent of cmd.exe
3. cmd.exe is the parent of rundll32.exe
4. The /file parameter, I assume, is used to hide the presence of various files in the \private directory
5. The /tcpCon parameter was used to specify TCP connections, given a source IP (decimal), source port, destination IP (decimal) and destination port, and I’m guessing the final 4 indicates IPv4. The decimal IPs convert as follows (a quick way to check the conversions is shown below):
1885683891 = 112.101.64.179
1885683931 = 112.101.64.219
1885683921 = 112.101.64.209
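For anyone wanting to check these conversions, the decimal values are just the four octets of the address packed big-endian, so a little shell arithmetic converts them back (bash syntax):

$ ip=1885683891; printf '%d.%d.%d.%d\n' $((ip>>24&255)) $((ip>>16&255)) $((ip>>8&255)) $((ip&255))
112.101.64.179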
[Screenshot: memory strings showing the add.exe command-line parameters]
If we look around those strings we are able to see even more of the malicious activity.
[Screenshot: additional memory strings showing malicious activity]
I did try to dump these files from memory, but was unable to. I also attempted to create a timeline to identify the sequence of activities, but was unable to locate the activity I was looking for.
One item that would be a top priority at this point is to identify all of the port 443 connections generated from this host. Based on the output from netscan we can see that the known malicious connections were established, but we still need to verify that these are the only ones. We would also want to look at any pcap associated with these connections in an attempt to identify what data was transferred, as well as to possibly create some network detection.
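A quick first pass at that is just filtering the netscan output for port 443 (profile and file names assumed from earlier):

$ vol.py -f memdump.img --profile=Win7SP1x86 netscan | grep ':443'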
[Screenshot: netscan output]
Knowing that there is some type of process/file/network connection hiding going on, and that I’m not able to analyze the memory image as I would typically expect to, I would most likely request a disk image at this point. Even with the anti-forensic tactics that were employed, I was able to determine enough to know that this machine is definitely compromised, as well as some of the methods being used by the attacker. I was also able to generate a list of good indicators that could be used to scan additional hosts for the same type of activity.
In case anyone is interested, I created a YARA rule for the activity identified.
rule add
{
    strings:
        $a = "p_remoteIP = 0x"
        $b = "p_localIP = 0x"
        $c = "p_addrInfo = 0x"
        $d = "InetAddr = 0x"
        $e = "size of endpoint = 0x"
        $f = "FILE pointer = 0x"
        $g = " /tcpCon "
        $h = "Bytes allocated for fake Proc = "
        $i = "EPROC pool pointer = 0x"
        $j = "qggya123.exe"
        $k = "add.exe" wide
        $l = "c:\\add\\add\\sys\\objchk_win7_x86\\i386\\sioctl.pdb"
        $m = "sioctl.sys"
        $n = "\\private"
    condition:
        any of them
}
This was a quick analysis of the memory image and I am still poking around at it. As I find any additional items I’ll update this post.
Update 1:
Running the driverscan plugin, which scans for DRIVER_OBJECT structures, we can see something that looks a little suspicious.
[Screenshot: driverscan output]
After seeing this I ran the devicetree plugin to see if I was able to get an associated driver name. Based on the output below, it looks like our malicious driver is named sioctl.sys.
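Both plugins are straightforward to run (profile and file names assumed from earlier):

$ vol.py -f memdump.img --profile=Win7SP1x86 driverscan
$ vol.py -f memdump.img --profile=Win7SP1x86 devicetree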
[Screenshot: devicetree output showing sioctl.sys]
So now that we have a driver name, let’s see if we can dump the driver.
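A sketch of one way to go about it, using moddump with a regex on the module name (the dump directory is a placeholder, and the strings/grep at the end is what surfaces the .pdb path mentioned below):

$ mkdir dumped
$ vol.py -f memdump.img --profile=Win7SP1x86 moddump --regex=sioctl --dump-dir=dumped/
$ strings dumped/* | grep -i '\.pdb'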
[Screenshot: dumping the driver]
Looking at the .pdb path in the strings output we can see that we have our malicious driver.
[Screenshot: strings output from the dumped driver showing the .pdb path]
Update 2:
An additional piece of information I found while running filescan, looking for the files called by the execution of add.exe, was the file paths returned by Volatility. All of the paths appear to have \private as the root directory. I suspect this could be a great indicator for identifying compromised machines.
[Screenshot: filescan output showing \private file paths]
I would expect to see filepaths that have the root of \Device\HarddiskVolume1\
[Screenshot: filescan output showing normal \Device\HarddiskVolume1\ file paths]
I verified this by locating every file that has \private in the file path; all of the files that were returned were the ones that appear to be faked.
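The check itself is just a grep over the filescan output (profile and file names assumed from earlier):

$ vol.py -f memdump.img --profile=Win7SP1x86 filescan | grep -i '\\private'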
[Screenshot: all files with \private in the file path]
Update 3:
One of the cool things we can do with Volatility is run our YARA rules against the memory image. When a hit is found, it will display the process in which the string was found. The syntax we would use to scan the image with our YARA rule is:
[Screenshot: yarascan command]
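In text form, it is something along these lines, assuming the rule above has been saved to a file named add.yar (file names are placeholders):

$ vol.py -f memdump.img --profile=Win7SP1x86 yarascan --yara-file=add.yar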
We can see that all of the command line input and output is located in the vmtoolsd process. I limited the output for brevity.
[Screenshot: yarascan hits within the vmtoolsd process]
[Screenshot: additional yarascan hits within the vmtoolsd process]
We also see 2 different processes where add.exe was found.
[Screenshot: yarascan hit for add.exe in the first process]
[Screenshot: yarascan hit for add.exe in the second process]
If we compare this with what we found using the strings plugin, we see one difference that we were unable to find with the yara plugin. By mapping the string of our original suspicious file back to a process, we see that it’s located in kernel space.
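For reference, mapping the strings file back to processes and kernel modules looks something like this (this is why the strings file was generated with decimal offsets earlier; file names are placeholders):

$ vol.py -f memdump.img --profile=Win7SP1x86 strings -s memdump_strings.txt --output-file=mapped_strings.txt
$ grep -i qggya123 mapped_strings.txt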
[Screenshot: strings plugin output mapping qggya123.exe to kernel space]
We can also take the strings from the binary and see where those map back to as well.
[Screenshot: strings plugin output for the strings from the binary]
Had we been unable to locate the driver with driverscan and devicetree, we still may have been able to identify the file based on the string found.
Update 4:
The awesome people over at Volatility were kind enough to send me an unreleased plugin to demonstrate the faked connections that add.exe created. I’m told the plugin will be coming out with their new book. The netscan plugin uses pool scanning, so it may find objects that were faked, like we have already seen. The new plugin works against the partition tables and hash buckets that are found in the kernel. I’m sure MHL can explain it much better than I just did :)
If we look at the normal netscan output we are able to see the faked established connections.
[Screenshot: netscan output showing the faked established connections]
Now if we use the tcpscan plugin we can see the difference (or not see the difference) :)
[Screenshot: tcpscan output]
Valid established connections would look like this with the tcpscan output.
[Screenshot: tcpscan output showing valid established connections]
As you can see we were able to find additional artifacts related to the add.exe tool by comparing the output of the 2 plugins.
Again, thanks to the Volatility team for floating me the plugin to use for this blog post, as well as for all of their research and the work going into the tool and moving the field of memory forensics forward!

Thoughts on Incident Response Teams

With all of the breach notifications that seem to be flying around daily over the past few months, I can’t help but wonder how their IR teams operate.  I won’t speculate or cast any blame, as I simply don’t know.  I do have definite opinions on how I think teams can operate and grow, though.  These are only my opinions and I’m sure others have different ones.  If you have differing opinions I would love to hear about them.
First, you can’t have a team without having some type of structure that defines roles and responsibilities.  This structure is by no means new and my opinion is “if it’s not broke then why try and fix it”.
Incident Handlers
The incident handler is your subject matter expert.  He/she should be a highly technical person that is also cognizant of risk and business impact.  The IH needs to understand the threats that your company faces and be able to direct efforts based on these threats and data being relayed by others performing analysis.  The IH should have the ability and freedom to say “contain this device based on these facts” (of course, good or bad, business needs can always trump these decisions).  All aspects of response should flow through this person so that they can delegate duties appropriately. The IH should also be the go to person for any information/explanation related to the response efforts.
Response Analysts
The response analysts can have various levels of skills and abilities.  Their function is to analyze incoming data for signs of compromise, lateral movement, data exfiltration and so on. Their findings should be documented and communicated to the incident handler so that appropriate actions can be taken.  The people filling this role should also be familiar with the different detection technologies that your org has, as well as have the ability to create or recommend detection signatures based on their analysis.
Incident Coordinator
This person, unlike all the others, does not need to be technical.  The role of this position is to handle all of the administrative tasks that come along with response.  Things such as communicating with management regarding incident status, contacting POC’s to help with containment and collection, and maintaining a timeline of activity for reporting would all fall under this role.
Reverse Engineer
This position is probably the most difficult to fill.  Harlan Carvey wrote a blog post a few months ago regarding the disconnect between RE and IR, which can be found here and describes the reasons why I feel this person needs to be directly involved.  Having an RE person that is directly involved with IR can bridge that disconnect.  There are usually specific questions that need to be answered when you are in the heat of battle, such as how to decode the C2 communication or identifying indicators of malware that can be used for immediate detection.  Having this person directly involved with the response teams can greatly speed up answering those needed questions.
POC’s
This position is not directly related to the response teams, but I feel needs to be mentioned.  You can think of these people as your boots on the ground.  They are directly responsible for collection and containment of assets (depending on the size of your org, they may even be someone on your IR team).  They should be very familiar with your processes and have the ability to act on your team’s behalf.

Please understand that I’m not saying that your org needs to go out and hire multiple people that can fill all of these roles.  I work in an environment that has close to 1 million endpoints so we require a fairly large team.  If your environment is smaller, then I would see people performing many of the above tasks simultaneously.  They are just roles that I think need to be filled for any response team.

Training
Yearly DFIR classes are great.  I have taken a few over the years and can say that they have filled a much-needed gap when it comes to learning a specific need.  They can also really get you focused in the right direction; unfortunately, the yearly class is not enough in my opinion.  I am a big fan of peer-led training and mentoring.  I see this as a must-have if you are focused on building teams from within.
As a senior member of our CIRT, I think it’s important that I find out what aspects of response people are interested in.  It may be network analysis, host analysis, or reverse engineering, to name a few.  Whatever they are interested in, I think it’s important not to discourage, but to encourage further development.  That being said, if their passion is red team activities I may try and steer them back to our team needs.  If I or someone else on our team is knowledgeable in that area, working with them can go a long way and can be so much more beneficial than a yearly class, as you can focus on methods and tools that your company employs, and you get the face time and personal interaction with that junior team member.
I was speaking with a colleague the other day about this.  When I was in the military each unit had a METL (Mission Essential Task List).  This was basically a list of all the essential actions the unit was expected to be able to perform during times of war.  When we trained, our training was always focused according to our METL.  I think response teams can benefit from this methodology.  If you list out the critical tasks that your team needs to be able to perform during response, you can then test your people / team against this list.  This will greatly help you figure out the different aspects in which you should focus your training.
If you are a SME in a certain aspect of response, take the time to develop training in that area.  If the training is good your team will love you for it, and it can be a great way to get everyone on the same page.  Along those same lines, there is nothing wrong with having junior members develop training to share among peers on your team.  Knowing that you will be teaching someone a certain aspect of response or analysis is usually enough to get that person to do a little deeper research than they may have done before.  I am a firm believer that knowledge not shared is knowledge wasted.
Demand Involvement
As I stated above with regard to the response analysts, team members with varying degrees of skill can often be nervous about analyzing data that may be outside their comfort zone.  As a result, they may shy away and not be as involved as they should be for fear of missing something.  IR is a team sport and needs team participation.  If you notice a team member begin to shy away from analysis, call them on it and get them back in the game.  Mistakes happen, and fear of making them should not impact response.  As long as the mistake is recognized and the person has learned from it, then I think that’s OK.
Technical Competency
I’ve seen teams that have segmented the different roles within them.  This is not always bad, as you can get clear focus on different aspects of detection and analysis, but when the s**t hits the fan and you have multiple response activities going on at once, you may need to shift people.  If they have no idea how to write a detection rule or look at a memory dump, then this may not be so good.  I think that cross training should be a focus and analysts should know the different aspects and technologies.  They also get the added benefit of not being pigeonholed into one area, but can grow and eventually become that senior incident handler if that is their desire.  You also never know, you may find that diamond in the rough that you may never have known about otherwise.
Again, these are just some of my thoughts and I would love to hear what other people think.  You can find me on twitter at @jackcr.

Building analysts from the ground up

If you read my last blog post then you may know that I’m a big fan of internal training to help improve your team’s technical capabilities. I feel that if you can focus training that is specific to your company’s needs, it is not only a win for the company, but a win for the analyst, as they will be able to perform better in their job and feel more comfortable doing so. This is especially true for new hires that may be entirely new to the DFIR field.
At the beginning of last year my company started moving forward with staffing an internal 24/7 security operations center. Knowing that we were not going to be able to find entry-level people that would be able to sit down at an NSM (Network Security Monitoring) console on day 1 and effectively analyze alerts, we knew that we were going to need to develop some type of training. We also knew that we were going to need to have a level of confidence in their abilities prior to turning anything over to them. I feel that it takes a person at least 3 – 6 months before they are comfortable analyzing alerts and probably an additional 3 – 6 months before they are truly effective at it. Our initial goal was to get these new people familiar with the tools they would be working with and spend time coaching, mentoring and monitoring them until they became that effective analyst.
Based on time constraints for our first class, we initially began training them on the tools they would be using and worked through the specifics of the alerts they were analyzing as we went through the weeks upon weeks of coaching and mentoring. These people eventually began to understand what they were looking at and the reasons why they were looking at these alerts, but there were definitely some shortcomings in this approach. One of the drawbacks was the initial lack of understanding around what exactly they were looking at. For example, take the HTTP protocol: the difference between a GET and a POST seems obvious to me, but it’s far different for someone who has only had experience provisioning users or installing operating systems. Another major drawback to this approach was the limited amount of time we had to spend on peripheral topics that would help them do their job more efficiently. Things like identifying a hostname based off an IP address may seem like a simple task, but in a very large environment it can be quite challenging at times. We eventually got them to a place where they were able to handle alerts, but there were many late-night phone calls, weeks spent training off site (as these people were in a different geographical location), and just the expected amount of time it takes for someone to grow into a position versus the expectation that they would have gotten there quicker.
Having our first class of students up and running we had some time before our next class was scheduled to start. We spent this time going over lessons learned and developing training based off those identified areas. When we were finished we had developed 5 weeks of training that each class would go through. Our goal for this training was to get them familiar with the most important aspects of their job. We developed content centered around topics such as:
1. Org structure
2. Linux overview
3. Networking fundamentals
4. Pcap analysis
5. Network flow analysis
6. HTTP / SMB protocol analysis
7. Log analysis
8. Alert analysis
9. Regular expressions to aid in reading detection rules
10. Host identification
11. Hands on labs and testing centered around positive and false positive alerts
12. Live supervised alert analysis
We also ended the training with a written test, not only to see how much the student had retained, but also to identify areas where we potentially needed to tweak our training.
Having put our new analysts through these 5 weeks of training, we still understood that we had only provided them with the tools and, hopefully, the mindset to be successful. The real learning and development comes from actively analyzing alerts, making decisions based off your analysis and collaborating with your peers about what you and they are seeing.
This was truly an amazing opportunity for me and one in which I learned a lot. Some of the takeaways I would like to share if you are developing this kind of training and teaching it to others are:
1. Take note of the questions your students ask. If they have a general theme you may need to tweak your training.
2. Don’t assume your students are at a certain technical level.
3. Regardless of job pressures, try to devote your full attention to your class.
4. Continually ask questions throughout the class to gauge level of understanding.
5. Focus on analysis methodologies vs individual task or alert.
6. Understand that not everyone will “get it”. Some people have a hard time with analysis and unfortunately this job is not for everyone.
It has been some time since that first class and we have definitely seen the fruits of our labor as we have some very capable analysts now. These analysts have not only gained a new skill, but the majority have gained a whole new career path. I say that’s a win for everyone involved!
If you have any questions or comments you can hit me up on twitter at @jackcr.

To silo or not to silo

Picture yourself knee deep in an incident, racing to contain an adversary that is actively moving laterally within your network. You have people tasked, based on skill set, that will enable you to achieve this goal as quickly as possible. Some may be looking at network data, some at host/log data, some at malware found, and others building detection for what has already been learned. Response actions seem to be going well until you get the word that a second, unrelated, intrusion was detected. My question is: would you be able to shift people based on your team members’ individual skill sets and be confident that it was being responded to appropriately, or even confident that they would be able to handle it at all?
I’ve talked to a few people and have even seen myself how the way an overall team is structured can severely limit your capabilities. I attended an IANS symposium last week on incident response and brought this topic up to the group. Some people agreed with what I was saying while others seemed to be dead set against the idea. I understand that there are reasons for these types of roles within your overall team, such as providing defined positions for hiring purposes and creating clear structure (often for management). Here are my reasons why I think that it can hinder not only your overall capabilities, but your team’s morale as well.
1. The more teams within teams you have, the more rigid you become.
If you look at my example above, there are several areas within a single response where skills are needed. If multiple response activities are running concurrently and the people available to respond only have skills in writing detection, this is a problem. Your only option at that point may be to expand the scope of some of your analysts to include the second intrusion and run the risk of missing critical pieces of information or burning them out completely.
2. Overall team communication will likely decrease.
People who have a common focus or interest usually communicate well with each other. If you break those people apart into new groups with an even more defined focus, then wrap unique processes and goals around each of those new groups, people will naturally focus on those and will not be attuned to what others are doing to support the overall mission. If people perceive they no longer need to collaborate to accomplish their tasks or goals, then the overall communication will suffer.
3. Career progression
I wrote a post not long ago where I described how I thought a CIRT should be structured. One of the positions I described was an incident handler. Here’s how I described that position:
The incident handler is your subject matter expert. He/she should be a highly technical person that is also cognizant of risk and business impact. The IH needs to understand the threats that your company faces and be able to direct efforts based on these threats and data being relayed by others performing analysis. The IH should have the ability and freedom to say “contain this device based on these facts” (of course, good or bad, business needs can always trump these decisions). All aspects of response should flow through this person so that they can delegate duties appropriately. The IH should also be the go to person for any information/explanation related to the response efforts.
If I’m a new analyst and I’m only allowed to focus on a single area within IR, will I ever gain the experience to do what’s described above? I emphatically say no. As a junior analyst I may get a little discouraged at this fact, and unfortunately I have not had the opportunity to gain the additional skills and knowledge that would let me easily move to a different silo within the defined team structure.
Additionally, if I have an incident handler that decides to quit and go somewhere else (yes, it does happen), have I put myself in the best position to easily promote someone from within my team? I would argue that you would most likely have to hire someone external in order to fill all of the requirements needed for that position.
4. Handling response activities
The other issue I see with this approach is for the incident handler. If all of the work and new capabilities being developed are within these separately defined structures, it may leave the person who needs to know the most blind in certain areas, especially if the overall team communication has dropped. It also may be difficult for the IH to assign individual tasks, as he/she likely is not fully aware of individual talents across the entire team.
Final Thoughts
I feel that you can often break down these silos and not lose focus on critical areas by tasking senior and mid-level people with projects that they can lead. Allowing them to define, develop and work with others on the team to accomplish these tasks or goals will help increase communication and likely motivation. Speaking from experience, it’s great when you can dive into something completely new and interesting while having an experienced person there to guide the overall project. It’s also extremely beneficial when you know what others are working on and can bounce questions off of them because it has piqued your interest, which doesn’t typically happen in silos.
I know that companies have reasons for creating these focus areas and my blog post will likely not change any of this. I think at a minimum we need to be able to cross train across these silos though. I’m not just talking about introducing people to a new area that they may not be familiar with, but a method in which they can continue to grow and progress. I will argue that you will lose minimal momentum on current projects if you allow for a set number of hours a week to grow your people. You will likely keep them happier if they are learning something completely new to them, and your overall capabilities will increase over time. I hear time and time again that companies have a difficult time trying to hire qualified people; this is one way to grow well-rounded responders from within an organization, and you may be able to promote talent from within vs. always needing to bring in an experienced person from the outside when the need arises.

Feeds, feeds and more feeds

I’ve seen some email threads on a few listserv groups talking about developing a capability to take indicators from threat feeds and automatically generate signatures that can be used in various detection technologies. I have some issues with this approach and thought a blog post may be better than replying to those threads. I believe these various feeds can provide some valuable indicators, but for the most part they will produce so much noise that your analysts will eventually discount the alerts they generate, just by the sheer number of false positives that can come along with alerting on them.
If you think about what is most helpful to an analyst when triaging an alert related to an IP address or a domain, it is typically context around why that indicator may be important. Was this IP related to exploitation, delivery, C2…? How old is the indicator? What actor is it related to? Additionally helpful could be: are we looking for a GET or a POST if it’s related to HTTP traffic? Is there a specific URI that’s related to the malicious activity, or is the entire domain considered bad? Typically these feeds don’t come with the context needed to properly analyze an alert, so the analyst spends time looking for oddities in the traffic. As the analyst begins to see the same indicator generate additional alerts, his confidence in that indicator may diminish and it can soon become noise.
For the people that have implemented this type of feeding process: walk up to one of your analysts and pick out an alert that was generated by one of these IPs or domains. Ask them why they are looking at it and what would constitute an escalation. If they can’t answer those 2 questions, ask yourself if there may be anything you can do to enhance the value of that alert. If the answer is no, I would question the value of how that indicator is implemented. To go along with that, if you have never analyzed an alert, or at least sat down with an analyst as they are going through them, can you really understand how these indicators can best be utilized? I would argue that until that happens your view may be limited.
Another extremely important aspect is the ability to validate the alerts that are generated. If you don’t have the ability to look at the network traffic (PCAP), then alerting on these IPs and domains is pretty much useless given the lack of context. One thing I find most important is the ability to determine how the machine got to where it was and what happened after it got there. Was the machine redirected to a domain that was included in some feed and did it download a legitimate GIF, or did the machine go directly to the site and download an obfuscated executable with a .gif extension? If you don’t have the ability to answer these types of questions, your analysts will likely wind up performing much unneeded analysis on the host or just discounting these alerts altogether.
By blindly alerting on these types of indicators you also run the risk of cluttering your alert console with items that will be deemed false positive 99.99% of the time. This can cause your analysts to spend much unneeded time analyzing these while higher-fidelity alerts are sitting there waiting to be analyzed. Another issue related to console clutter is indicator lifetime. For example, if a site was compromised and hosted some type of exploit, are you still alerting on that domain after the site has been remediated? Having the ability to manage this process is extremely important if you want to go down this road.
An additional issue I have surrounds some of the information sharing groups. Often these groups will produce lists of bad IPs and domains that are shared by parties that may not have the experience needed to share indicators of a certain standard. Blindly alerting on these can be a mistake as well, unless you have confidence in the group or party that is sharing the indicators.
I’m not saying that these feeds and groups don’t provide value. I’ve seen some very good, reliable sharing groups as well as some threat feeds that have had some spot-on indicators. A lot comes down to numbers, indicator fidelity and trust, as well as doing some work up front to vet the indicators before they are fed into detection and alerted on.
One of the benefits I see in collecting this data is the ability to add additional confidence in an alert. If an analyst is unsure of the validity of an alert they may be analyzing, having a way to see if anything is already known about the ip or domain can be very helpful. It may actually sway their decision in escalating vs discounting.
For IPs and domains that are deemed high-fidelity indicators, I don’t see any reason not to alert on them. I think some thought needs to be given to what constitutes high fidelity, though. If high fidelity relates to a particular backdoor, exploit or some other malicious activity that you know about, ask yourself if you currently have detection for that. If the answer is no, can you build detection that covers more than a single atomic indicator? Once you have detection and alerting in place for the activity, the IP or domain may not be as important to alert on.
Detecting a determined adversary can be difficult, and I feel that some see these feeds and groups as the answer. By implementing and relying on this type of process you can actually weaken your ability to detect these types of intrusions, as focus may shift to more atomic-based detection. I’m all for collecting this type of information, but think about the most effective way to implement it and spend the time to build and verify solid detection. You will be much farther ahead.