Wednesday, November 23, 2016

The Hunting Cycle and Measuring Success

I typically write about the technical aspects of hunting, but I wanted to do something different here as a result of a conversation I had a while back, spurred by this tweet: https://twitter.com/MDemaske/status/792068652371550208.  While I strongly agree that hunters should be hunting rather than building PowerPoints or driving projects, I also don't believe many business functions have an open-ended budget, and at some point you will need to justify your work, or even your value, to the company.

So how do we measure success?  Before answering that question, I believe it's important to define the various stages of hunting (from my perspective).  By defining the process, I believe it becomes much easier to point to areas of success, regardless of whether you come away from a hunt empty-handed.

From my experience, here are the different stages of a typical hunting cycle.

Define:  In my opinion, hunts are more successful when focused on a task.  One way to do this is to define a general topic and then list the ways that an attacker may take advantage of it.  As an example, let's say that we want to focus a hunt on persistence mechanisms. What are the different ways that persistence can be maintained?
  1. Services
  2. Scheduled tasks
  3. Load order hijacking
  4. Run keys
  5. Startup folder
  6. Valid credentials for externally facing devices
  7. VPN access 

Research:  Based on the above topics, do we know enough about each one to successfully identify what malicious instances may look like?  Are there other methods for identifying the above topics beyond what we already know about?  In a lot of cases we are aware that attackers are using $method, but we simply don't know enough about the topic to be able to look for $method as an indicator of malicious activity.  It's OK if you don't know (I'm in that boat a lot), and the researching / learning part is actually kind of fun.

Availability:  Once we have our list assembled, do we have the data available to be able to find what we are looking for?  If not, is it something that we have the ability to collect?

Develop:  Here is the big one.  Can we assemble queries that will reliably identify the activity while keeping the volume and false positive rates very low?  If these queries will be transitioned to another team to operationalize, then maintaining analyst confidence in the data output is very important.  This confidence can be undermined by the quality and quantity of data being presented to them.
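To make this concrete, here is a minimal sketch (my own illustration, not a production query) of what a developed hunt could look like in Python. It assumes a hypothetical CSV export of process-creation events named process_events.csv with columns host, user, parent, and child; it stacks the parents of schtasks.exe and reports only the rare ones so the output volume stays low:

import csv
from collections import Counter

# Stack the parent processes that spawn schtasks.exe and surface only the
# rare combinations, keeping volume and false positives manageable.
parents = Counter()
rows = []
with open("process_events.csv", newline="") as f:   # hypothetical export
    for row in csv.DictReader(f):
        if row["child"].lower().endswith("schtasks.exe"):
            parents[row["parent"].lower()] += 1
            rows.append(row)

rare = {p for p, n in parents.items() if n <= 5}     # tune the threshold to your data
for row in rows:
    if row["parent"].lower() in rare:
        print(row["host"], row["user"], row["parent"], "->", row["child"])

The same stack-then-filter pattern applies to most of the persistence topics listed above; the work in this stage is largely about tuning the filters until an analyst can trust what comes out.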

Automate:  Queries that have a high likelihood of being transitioned or those that may be candidates for further development should be scheduled for routine runs.  Seeing the output on a regular basis may give you insight into further improvement.
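If nothing better is available, even a bare-bones scheduler gets a query into routine rotation. The sketch below is purely illustrative and assumes the hunt logic lives in a run_hunt() function; in practice cron, your SIEM's scheduler, or an orchestration tool is a better fit:

import time
from datetime import datetime

def run_hunt():
    # placeholder for the query developed in the previous stage
    return ["example finding"]            # hypothetical results

while True:
    stamp = datetime.now().isoformat(timespec="seconds")
    with open("hunt_results.log", "a") as log:
        for finding in run_hunt():
            log.write(f"{stamp}\t{finding}\n")
    time.sleep(24 * 60 * 60)              # run again tomorrow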

Knowledge Transfer:  When going through this process you have likely learned more about the topic than when you first began.  Little nuances in the data may be a strong indicator that something is amiss and may be the entire reason for the query.  Don't assume that the person sitting next to you will know the reasoning behind the data.  Document each hunt that will be transitioned and include your thought process behind each one.

Operationalize:  If this data will be consumed by another team, provide recommendations on scheduling, alerting, and actions related to positive results.  Remember that this team will need to fold it into their existing processes, but you likely have the most experience with the query, so it's good practice to give them insight into that expertise.  It's also a good idea to create a feedback loop where modifications or enhancements can be requested or proposed.

Track:  These hunts may not initially prove to be valuable, but over time they may turn out to be very effective at identifying malicious behavior.  Being able to track the good and the bad is important from a team perspective.  Tracking may give you insight into areas that need improvement from a methodology or data standpoint.  You may also find areas where you are doing well and can toot your horn for impacting an adversary's ability to operate in your environment.

So the next question is: how do we measure success?  With all of the work that goes into a single hunt, I don't think the only measurement is whether you find an intrusion during the above process.  Sure, that's the ultimate goal, but if that's what your existence hinges on then I feel bad for you.  Some things to look at that can show the value of all the work you are doing are:


  1. Number of incidents by severity
  2. Number of compromised hosts by severity
  3. Dwell time of any incidents discovered
  4. Number of detection gaps filled
  5. Logging gaps identified and corrected
  6. Vulnerabilities identified
  7. Insecure practices identified and corrected
  8. Number of hunts transitioned
  9. False positive rate of transitioned hunts
  10. Any new visibility gained

Threat hunting can have a very positive impact on the security posture of a company and being able to show this value is important.   Of course we all want to find that massive intrusion, but there is a lot that goes into this so don’t sell yourself short when asked to justify what you do for your company! 

Sunday, October 16, 2016

Threat Hunting - Getting Closer to Anomalous Behavior

I just read this post by @hexacorn, which relates to threat hunting and how impractical it may be to implement due to the vast amount of data generated on a corporate network.  He states that this data is good for post-mortem analysis or during an investigation, but that using it for hunting is largely ineffective.  First, I would like to say that I agree with @hexacorn in the sense that people generally look at threat hunting as a series of singular events, which can produce huge amounts of data to sift through and may be largely impractical.  I think it's natural for people who are new to this area of work to view it this way and eventually want to give up because it hasn't proven to be effective.

I like to look at hunting in a different way: how can I apply the knowledge I have about actors and behaviors so that I get closer to the events that need to be investigated?  By not focusing on singular indicators, but more accurately on the behaviors of attackers, I can vastly reduce the amount of data and get closer to the anomalous events that may need attention.

In the blog post @hexacorn described attackers using ipconfig.exe, netstat.exe, net.exe, cscript.exe, etc., but said that these are used so often that it's impossible to sift through them.  I consider the above, as well as many others, to be lateral movement tools, and they are often used in conjunction with each other.  A good way to look at this differently would be to look for combinations of these tools being executed within 10 minutes from a single process name, with a single user, on a single host.  This will vastly reduce the amount of data being returned while surfacing a behavior that attackers commonly use.  That is far different from surfacing a single tool that is used legitimately thousands of times a day and that attackers also commonly use.
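A rough sketch of that idea, assuming a hypothetical list of process events (dictionaries with ts in epoch seconds plus host, user, parent, and child fields); it flags any host/user/parent combination that runs several distinct recon tools inside a 10-minute window:

from collections import defaultdict

TOOLS = {"ipconfig.exe", "netstat.exe", "net.exe", "cscript.exe"}
WINDOW = 10 * 60      # 10 minutes
THRESHOLD = 3         # distinct tools before we care

def find_tool_bursts(events):
    # Group tool executions by host/user/parent, then look for several
    # distinct tools appearing within the time window.
    buckets = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["child"].lower().rsplit("\\", 1)[-1] in TOOLS:
            buckets[(e["host"], e["user"], e["parent"])].append(e)
    hits = []
    for key, evts in buckets.items():
        for i, first in enumerate(evts):
            window = [e for e in evts[i:] if e["ts"] - first["ts"] <= WINDOW]
            if len({e["child"].lower() for e in window}) >= THRESHOLD:
                hits.append((key, window))
                break
    return hits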

In the same sentence, psexec was mentioned.  It's true that this tool is often used by administrators, and it can be difficult to look at a single process and say with any confidence that it is bad.  If I step back and think about this differently, I can say that when attackers move through my network they often use the same credentials and tools to perform lateral movement.  Applying this thought to the problem, I can say that it's odd for a single user to be identified on multiple machines, executing psexec against different hosts.  Again, this reduces the amount of data that needs to be looked at while surfacing what may be anomalous behavior.
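The same kind of event data supports the psexec angle. This sketch (again an illustration with hypothetical field names) surfaces accounts seen running psexec from several different machines:

from collections import defaultdict

def psexec_spread(events, min_hosts=3):
    # Map each account to the set of hosts where psexec was executed and
    # report only the accounts seen on several machines.
    sources = defaultdict(set)
    for e in events:
        if "psexec" in e["child"].lower():
            sources[e["user"]].add(e["host"])
    return {user: hosts for user, hosts in sources.items() if len(hosts) >= min_hosts}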
 
For identifying odd processes, I agree that it's difficult to look at these en masse and pick out the bad ones.  I like to look at this a couple of different ways, and both are from the perspective of the parent process.  The first is, knowing that cmd.exe is probably the most commonly used executable during any intrusion, to look at which processes are parents of cmd.exe and how many times it was executed by how many users (a single user executing cmd.exe on multiple machines from a rare parent process may be anomalous).  The second way is to think about how commands can be executed, with tools such as powershell, wmic, at.exe, and schtasks.exe, and to ask which users and parent processes are executing those (would it be odd for a local admin or service account to be scheduling a task, or for the owner of an IIS process to be executing powershell?).  I would also suggest that if you want to find odd occurrences of svchost.exe, which is mentioned in the post, you look at the parent processes of svchost and not necessarily the location on disk.
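A sketch of the first approach, stacking the parents of cmd.exe and keeping only those seen on a handful of host/user pairs (hypothetical field names again); pointing the same logic at powershell, wmic, at.exe, or schtasks.exe children, keyed on user and parent, covers the second approach:

from collections import defaultdict

def rare_cmd_parents(events, max_pairs=3):
    # For every parent of cmd.exe, record the distinct host/user pairs it
    # was seen with and return only the parents that are rare.
    seen = defaultdict(set)
    for e in events:
        if e["child"].lower().endswith("cmd.exe"):
            seen[e["parent"].lower()].add((e["host"], e["user"]))
    return {parent: pairs for parent, pairs in seen.items() if len(pairs) <= max_pairs}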

I truly feel that if you want to be successful at hunting you need to look at the problem differently.  In an IDS/IPS world we deal in singular events that match a specific signature.  We need to throw this model out for hunting because it simply doesn't work.  As @hexacorn described in his post, it produces huge amounts of data that are simply impractical to review effectively.

Another issue I see, which wasn't talked about in the post, is knowledge transfer.  I may think about hunting in the way I described above, but when I transition something to a different team, will they view the data in the way that was intended, or will they even understand the data being given to them?  If they don't understand the data, their confidence in the severity of the event may be low.  An example would be looking for anomalies in MSRPC pipes.  Host or user enumeration of Windows machines likely happens over SMB, and the communication between machines during these events will often occur over named pipes.  If I generate an alert for multiple samr pipes being created with multiple remote machines in the UNC path over a given period of time, what will the analyst do with this?  Don't assume that an analyst, regardless of experience, will know what your thought process was when you created the hunt.  Give the analyst context, because these alerts can be just as difficult to understand as an IP address that alerted with no explanation as to why.  If the analyst doesn't react to the alert with the sense of urgency you feel it deserves, it may not be a failure on their part, but rather a failure on your part for not giving them the proper knowledge transfer and ensuring they understand what they are looking at.
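For what it's worth, a sketch of the samr-pipe hunt described above might look like the following, assuming hypothetical pipe events with src_host, user, and unc_path fields that have already been filtered to the time period of interest; it reports a source host and account touching the samr pipe on several distinct remote machines:

from collections import defaultdict

MIN_TARGETS = 5    # distinct remote machines before alerting

def samr_enumeration(pipe_events):
    # Collect the distinct samr pipe UNC paths (one per remote machine)
    # touched by each source host and account.
    targets = defaultdict(set)
    for e in pipe_events:
        if e["unc_path"].lower().endswith("\\samr"):
            targets[(e["src_host"], e["user"])].add(e["unc_path"].lower())
    return {key: paths for key, paths in targets.items() if len(paths) >= MIN_TARGETS}

Whatever the exact logic ends up being, the point above stands: hand the analyst the reasoning along with the query.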


My hope is that @hexacorn doesn't take offense at my rebuttal to his post.  I think of hunting as being creative in the ways that I look for threats.  This is not an extension of an IDS/IPS, but an entirely different method and mindset for finding bad.

Wednesday, September 28, 2016

Forensic Analysis of Anti-Forensic Activities

[Screenshot: tweet from Brent Muir about anti-forensics]
If you were to believe the above tweet, you may think, "What's the point? We're doomed by this anti-forensic stuff." No offense to Brent Muir, but I think differently.
A few weeks ago I attended Shmoocon and sat in on the presentation by Jake Williams and Alissa Torres on subverting memory forensics.  As I sat through the talk I kept thinking to myself that it would be impossible to completely hide every artifact related to your activity, which Jake also stated in his presentation.  Seeing a tweet the other day that had a link to download the memory image, I quickly grabbed it to see what I could find (if you are interested in having a look at the image, it can be found here).  I will also state that I have not looked at anything else that was posted, as I didn't want to have any hints on what to look for.
My first course of action once I had the memory dump was to figure out what OS I was dealing with. To find this I used volatility with the imageinfo plugin.
[Screenshot: volatility imageinfo output]
We can see from the above output that we have an image from a Windows 7 SP1 machine. My next step was to create a strings file of the memory image, grabbing both ASCII and Unicode strings.
[Screenshot: commands used to extract ASCII and Unicode strings from the image]
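If GNU strings isn't handy, a rough Python approximation of the same step looks like this (my addition; it assumes the image is named memory.img and fits in memory, and it writes decimal offsets next to each string so the file can later be mapped back with volatility's strings plugin, whose exact input format you should double-check for your version):

import re

ASCII = re.compile(rb"[\x20-\x7e]{4,}")
UTF16 = re.compile(rb"(?:[\x20-\x7e]\x00){4,}")

with open("memory.img", "rb") as f:        # hypothetical image name
    data = f.read()

with open("memory.strings", "w") as out:
    for m in ASCII.finditer(data):
        out.write(f"{m.start()} {m.group().decode('ascii')}\n")
    for m in UTF16.finditer(data):
        out.write(f"{m.start()} {m.group().decode('utf-16-le')}\n")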
Now that I had all the data I needed I could begin my investigation.
Whenever you are given a piece of data to analyze, it is usually given to you for a reason. Whatever that reason is, our first course of action should be to find any evidence that will either confirm or deny it. I knew, based on the presentation, that the purpose of the tool was to hide artifacts from a forensic investigator. Knowing this, I set out to find any evidence that would validate that suspicion.
I first began by looking at the loaded and unloaded drivers, since the tool would most likely need to be interacting with the kernel to be able to modify these various memory locations. I wanted to see if there were any odd filenames, memory locations, or file paths, and used the volatility plugins modscan and unloadedmodules for this. This unfortunately led me to a dead end.
My next course of action was to take a look at the processes that were running on the machine. I compared the output from pslist and psscan as a way to look for processes in different ways and to note any differences. The pslist plugin walks the doubly-linked list of EPROCESS structures, whereas the psscan plugin scans for EPROCESS structures. My thought was that if a process had been hidden by unlinking itself, I would be able to identify it by comparing the two.
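As an aside (my addition, not part of the original walkthrough), the comparison can also be scripted rather than eyeballed. A minimal sketch, assuming both plugin outputs were saved to text files and that the process name and PID are the second and third columns, which may vary between volatility versions:

def pids(path):
    # Pull (name, PID) pairs out of saved plugin output, skipping headers.
    procs = set()
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) > 2 and parts[2].isdigit():
                procs.add((parts[1], int(parts[2])))
    return procs

# Processes that only the EPROCESS scan could see.
for name, pid in sorted(pids("psscan.txt") - pids("pslist.txt"), key=lambda x: x[1]):
    print(f"{name} (PID {pid}) found only by psscan")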
[Screenshot: volatility pslist and psscan output]
By diff’ing the 2 files I was able to see something odd in the psscan output.
[Screenshot: diff of the pslist and psscan output]
We can see that qggya123.exe definitely looks odd. If we look for our odd process name using this plugin it appears that the process may no longer be running as there is no end time listed in the output, or this may simply be related to Jake’s tool.
[Screenshot: psscan output for qggya123.exe]
The psxview plugin will attempt to enumerate processes using several different techniques, the thought being that if you attempt to hide a process you will have to evade detection in a number of different ways. Looking at the output below, the only way we are able to see this suspicious-looking process (at least that it existed at one point in time) was by scanning for EPROCESS structures.
[Screenshot: psxview output]
Now that we have something to go off of, my next step was to see if I could gain a little more context around the process name. The PID of our suspicious process is 5012 with a PPID of 512. If we look to see what PID 512 is, we can see that it was started under services.exe.
[Screenshot: process listing for PID 512]
Naturally I wanted to see what services were running on the machine. For this I used the svcscan plugin, but I was unable to locate our suspicious filename.
The next step I took was to search for the filename in the memory strings. The output below shows the file being executed from the command line by a file named add.exe. We also see that our PID and PPID are included as parameters. Now that we know how it was executed, we have an additional indicator to search for.
[Screenshot: memory strings showing qggya123.exe being executed by add.exe]
Searching for add.exe, I was able to see several different items of interest:
1. add.exe with the /proc parameter was used to execute 3 different files: qggya123.exe, cmd.exe, and rundll32.exe
2. qggya123.exe is the parent of cmd.exe
3. cmd.exe is the parent of rundll32.exe
4. The /file parameter, I assume, is used to hide the presence of various files in the \private directory
5. The /tcpCon parameter was used to specify TCP connections given src ip (decimal), src port, dst ip (decimal), and dst port, and I'm guessing the final 4 indicates IPv4 (a quick conversion sketch follows the list)
1885683891 = 112.101.64.179
1885683931 = 112.101.64.219
1885683921 = 112.101.64.209
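The decimal-to-dotted-quad conversion above is straightforward if you treat each value as a big-endian 32-bit integer; a quick sketch:

def to_ip(n):
    # Interpret the value as a big-endian 32-bit integer.
    return ".".join(str((n >> shift) & 0xFF) for shift in (24, 16, 8, 0))

for value in (1885683891, 1885683931, 1885683921):
    print(value, "->", to_ip(value))   # 112.101.64.179, .219, .209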
[Screenshot: memory strings showing the add.exe command lines and /tcpCon parameters]
If we look around those strings we are able to see even more of the malicious activity.
[Screenshot: additional memory strings surrounding the add.exe activity]
I did try to dump these files from memory, but was unable to. I also attempted to create a timeline in an attempt to identify the sequence of activities, but was unable to locate the activity I was looking for.
One item that would be a top priority at this point would be to identify all of the port 443 connections generated from this host. Based on the output from netscan, we can see that the known malicious connections were established, but we still need to verify that these are the only ones. We would also want to look at any pcap associated with these connections in an attempt to identify what data was transferred, as well as possibly create some network detection.
[Screenshot: netscan output showing the established port 443 connections]
Knowing that there is some type of process / file / network connection hiding going on, and that I'm not able to analyze the memory image as I would typically expect to, I would most likely request a disk image at this point. Even with the anti-forensic tactics that were employed, I was able to determine enough to know that this machine is definitely compromised, as well as some of the methods being used by the attacker. I was also able to generate a list of good indicators that could be used to scan additional hosts for the same type of activity.
In case anyone is interested I created a yara rule for the activity identified.
rule add
{
    strings:
        $a = "p_remoteIP = 0x"
        $b = "p_localIP = 0x"
        $c = "p_addrInfo = 0x"
        $d = "InetAddr = 0x"
        $e = "size of endpoint = 0x"
        $f = "FILE pointer = 0x"
        $g = " /tcpCon "
        $h = "Bytes allocated for fake Proc = "
        $i = "EPROC pool pointer = 0x"
        $j = "qggya123.exe"
        $k = "add.exe" wide
        $l = "c:\\add\\add\\sys\\objchk_win7_x86\\i386\\sioctl.pdb"
        $m = "sioctl.sys"
        $n = "\\private"
    condition:
        any of them
}
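As a side note (my addition, not part of the original analysis), the same rule can also be run outside of volatility with the yara-python module, assuming the rule text above is saved as add.yar and the image is named memory.img:

import yara

# Compile the rule file and scan the raw memory image with it.
rules = yara.compile(filepath="add.yar")
for match in rules.match(filepath="memory.img"):
    print(match.rule, "matched", len(match.strings), "string(s)")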
This was a quick analysis of the memory image and I am still poking around at it. As I find any additional items I'll update this post.
Update 1:
Running the driverscan plugin, scanning for DRIVER_OBJECTS, we can see something that looks a little suspicious.
[Screenshot: driverscan output]
After seeing this I ran the devicetree plugin to see if I was able to get an associated driver name. Based on the output below, it looks like our malicious driver is named sioctl.sys.
[Screenshot: devicetree output]
So now that we have a driver name, let's see if we can dump the driver.
[Screenshot: dumping the driver from memory]
Looking at the .pdb path in the strings output we can see that we have our malicious driver.
[Screenshot: strings output from the dumped driver showing the .pdb path]
Update 2:
An additional piece of information I found while running filescan, looking for the files called by the execution of add.exe, was the file path returned by volatility. All the paths appear to have \private as the root directory. I suspect this could be a great indicator for identifying compromised machines.
[Screenshot: filescan output showing file paths rooted at \private]
I would expect to see filepaths that have the root of \Device\HarddiskVolume1\
[Screenshot: filescan output showing normal \Device\HarddiskVolume1\ paths]
I verified this by locating every file that has \private in the file path, and all of the files that were returned were the ones that appear to be faked.
[Screenshot: filescan output filtered on \private]
Update 3:
One of the cool things we can do with volatility is run our yara rules against the memory image. When a hit is found it will display the process in which the string was found. The syntax we would use to scan the image with our yara rule is:
[Screenshot: volatility yarascan command syntax]
We can see that all of the command line input and output is located in the vmtoolsd process. I limited the output for brevity.
[Screenshots: yarascan hits within the vmtoolsd process]
We also see 2 different processes where add.exe was found.
[Screenshots: yarascan hits for add.exe in two different processes]
If we compare this with what we found using the strings plugin, we see one difference that we were unable to find with the yara plugin. By mapping the string of our original suspicious file back to a process, we see that it's located in kernel space.
[Screenshot: strings plugin output mapping the suspicious string to kernel space]
We can also take the strings from the binary and see where those map back to as well.
[Screenshot: strings plugin output for strings taken from the binary]
Had we been unable to locate the driver with driverscan and devicetree, we still may have been able to identify the file based on the strings found.
Update 4:
The awesome people over at volatility were kind enough to send me an unreleased plugin to demonstrate the faked connections that add.exe created. I'm told the plugin will be coming out with their new book. The netscan plugin uses pool scanning, so it may find objects that were faked, as we have already seen. The new plugin works against the partition tables and hash buckets found in the kernel. I'm sure MHL can explain it much better than I just did :)
If we look at the normal netscan output we are able to see the faked established connections.
[Screenshot: netscan output showing the faked established connections]
Now if we use the tcpscan plugin we can see the difference (or not see the difference) :)
[Screenshot: tcpscan output for the same image]
Valid established connections would look like this with the tcpscan output.
[Screenshot: tcpscan output showing valid established connections]
As you can see, we were able to find additional artifacts related to the add.exe tool by comparing the output of the two plugins.
Again, thanks to the volatility team for floating me the plugin to use for this blog post, as well as for all of the research and work that goes into the tool and into moving the field of memory forensics forward!

Thoughts on Incident Response Teams

With all of the breach notifications that seem to be flying around daily over the past few months, I can't help but wonder how their IR teams operate.  I won't speculate or cast any blame, as I simply don't know.  I do have definite opinions on how I think teams can operate and grow, though.  These are only my opinions, and I'm sure others have different ones.  If you have differing opinions I would love to hear about them.
First, you can’t have a team without having some type of structure that defines roles and responsibilities.  This structure is by no means new and my opinion is “if it’s not broke then why try and fix it”.
Incident Handlers
The incident handler is your subject matter expert.  He/she should be a highly technical person who is also cognizant of risk and business impact.  The IH needs to understand the threats that your company faces and be able to direct efforts based on these threats and the data being relayed by others performing analysis.  The IH should have the ability and freedom to say "contain this device based on these facts" (of course, good or bad, business needs can always trump these decisions).  All aspects of response should flow through this person so that they can delegate duties appropriately. The IH should also be the go-to person for any information or explanation related to the response efforts.
Response Analysts
The response analysts can have various levels of skills and abilities.  Their function is to analyze incoming data for signs of compromise, lateral movement, data exfiltration, and so on. Their findings should be documented and communicated to the incident handler so that appropriate actions can be taken.  The people filling this role should also be familiar with the different detection technologies that your org has, as well as have the ability to create or recommend detection signatures based on their analysis.
Incident Coordinator
This person, unlike all the others, does not need to be technical.  The role of this position is to handle all of the administrative tasks that come along with response.  Things such as communicating with management regarding incident status, contacting POCs to help with containment and collection, and maintaining a timeline of activity for reporting would all fall under this role.
Reverse Engineer
This position is probably the most difficult to fill.  Harlan Carvey wrote a blog post a few months ago regarding the disconnect between RE and IR, which can be found here and describes the reasons why I feel this person needs to be directly involved.  Having an RE person who is directly involved with IR can bridge that disconnect.  There are usually specific questions that need to be answered when you are in the heat of battle, such as how to decode the C2 communication or identifying indicators of the malware that can be used for immediate detection.  Having this person directly involved with the response teams can greatly speed up answering those questions.
POC’s
This position is not directly related to the response teams, but I feel it needs to be mentioned.  You can think of these people as your boots on the ground.  They are directly responsible for the collection and containment of assets (depending on the size of your org, they may even be someone on your IR team).  They should be very familiar with your processes and have the ability to act on your team's behalf.

Please understand that I'm not saying your org needs to go out and hire multiple people to fill all of these roles.  I work in an environment that has close to 1 million endpoints, so we require a fairly large team.  If your environment is smaller, I could see people performing many of the above tasks simultaneously.  These are just roles that I think need to be filled for any response team.

Training
Yearly DFIR classes are great.  I have taken a few over the years and can say that they have filled a much-needed gap when it comes to learning a specific skill.  They can also really get you focused in the right direction; unfortunately, the yearly class is not enough in my opinion.  I am a big fan of peer-led training and mentoring, and I see it as a must-have if you are focused on building teams from within.
As a senior member of our CIRT, I think it's important that I find out what aspects of response people are interested in.  It may be network analysis, host analysis, or reverse engineering, to name a few.  Whatever they are interested in, I think it's important not to discourage, but to encourage further development.  That being said, if their passion is red team activities I may try to steer them back to our team's needs.  If I or someone else on our team is knowledgeable in their area of interest, working with them can go a long way and can be much more beneficial than a yearly class, as you can focus on the methods and tools that your company employs, and the junior team member gets face time and personal interaction as well.
I was speaking with a colleague the other day about this.  When I was in the military, each unit had a METL (Mission Essential Task List).  This was basically a list of all the essential actions the unit was expected to be able to perform during times of war.  When we trained, our training was always focused according to our METL.  I think response teams can benefit from this methodology.  If you list out the critical tasks that your team needs to be able to perform during response, you can then test your people and team against this list.  This will greatly help you figure out the areas where you should focus your training.
If you are a SME in a certain aspect of response, take the time to develop training in that area.  If the training is good, your team will love you for it, and it can be a great way to get everyone on the same page.  Along those same lines, there is nothing wrong with having junior members develop training to share among peers on your team.  Knowing that you will be teaching someone a certain aspect of response or analysis is usually enough to get that person to do a little deeper research than they may have done before.  I am a firm believer that knowledge not shared is knowledge wasted.
Demand Involvement
As I stated above with regard to the response analysts, team members with varying degrees of skill can often be nervous about analyzing data that may be outside their comfort zone.  As a result, they may shy away and not be as involved as they should be for fear of missing something.  IR is a team sport and needs team participation.  If you notice a team member begin to shy away from analysis, call them on it and get them back in the game.  Mistakes happen, and fear of making them should not impact response.  As long as the mistake is recognized and the person has learned from it, then I think that's OK.
Technical Competency
I've seen teams that have segmented the different roles on their teams.  This is not always bad, as you can get clear focus on different aspects of detection and analysis, but when the s**t hits the fan and you have multiple response activities going on at once, you may need to shift people.  If they have no idea how to write a detection rule or look at a memory dump, then this may not be so good.  I think that cross-training should be a focus, and analysts should know the different aspects and technologies.  They also get the added benefit of not being pigeonholed into one area, but can grow and eventually become that senior incident handler if that is their desire.  You also never know, you may find that diamond in the rough that you would never have known about otherwise.
Again, these are just some of my thoughts and I would love to hear what other people think.  You can find me on twitter at @jackcr.

Building analysts from the ground up

If you read my last blog post then you may know that I'm a big fan of internal training to help improve your team's technical capabilities. I feel that if you can focus training that is specific to your company's needs, it is not only a win for the company but a win for the analyst, as they will be able to perform better in their job and feel more comfortable doing so. This is especially true for new hires who may be entirely new to the DFIR field.
At the beginning of last year my company started moving forward with staffing an internal 24/7 security operations center. Knowing that we were not going to be able to find entry-level people who could sit down at an NSM (Network Security Monitoring) console on day 1 and effectively analyze alerts, we knew that we were going to need to develop some type of training. We also knew that we would need a level of confidence in their abilities prior to turning anything over to them. I feel that it takes a person at least 3 – 6 months before they are comfortable analyzing alerts and probably an additional 3 – 6 months before they are truly effective at it. Our initial goal was to get these new people familiar with the tools they would be working with and spend time coaching, mentoring, and monitoring them until they became effective analysts.
Based on time constraints for our first class, we initially began by training them on the tools they would be using, then worked through the specifics of the alerts they were analyzing as we went through the weeks upon weeks of coaching and mentoring. These people eventually began to understand what they were looking at and the reasons why they were looking at these alerts, but there were definitely some shortcomings in this approach. One of the drawbacks was the initial lack of understanding around what exactly they were looking at. Take the HTTP protocol, for example: the difference between a GET and a POST seems obvious to me, but it's far different for someone whose only experience is provisioning users or installing operating systems. Another major drawback to this approach was the limited amount of time we had to spend on peripheral topics that would help them do their job more efficiently. Something like identifying a hostname based off an IP address may seem like a simple task, but in a very large environment it can be quite challenging at times. We eventually got them to a place where they were able to handle alerts, but there were many late-night phone calls, weeks spent training off site (as these people were in a different geographic location), and simply the expected amount of time it takes for someone to grow into a position versus the expectation that they would get there quicker.
With our first class of students up and running, we had some time before our next class was scheduled to start. We spent this time going over lessons learned and developing training based on those identified areas. When we were finished, we had developed 5 weeks of training that each class would go through. Our goal for this training was to get them familiar with the most important aspects of their job. We developed content centered around topics such as:
1. Org structure
2. Linux overview
3. Networking fundamentals
4. Pcap analysis
5. Network flow analysis
6. Http / smb protocol analysis
7. Log analysis
8. Alert analysis
9. Regular expressions to aid in reading detection rules
10. Host identification
11. Hands on labs and testing centered around positive and false positive alerts
12. Live supervised alert analysis
We also ended the training with a written test, not only to see how much the students had retained, but also to identify areas where we potentially needed to tweak our training.
Having put our new analysts through these 5 weeks of training, we still understood that we had only provided them with the tools and, hopefully, the mindset to be successful. The real learning and development come from actively analyzing alerts, making decisions based on your analysis, and collaborating with your peers about what you and they are seeing.
This was truly an amazing opportunity for me and one in which I learned a lot. Some of the takeaways I would like to share if you are developing this kind of training and teaching it to others are:
1. Take note of the questions your students ask. If they have a general theme you may need to tweak your training.
2. Don’t assume your students are at a certain technical level.
3. Regardless of job pressures, try to devote your full attention to your class.
4. Continually ask questions throughout the class to gauge level of understanding.
5. Focus on analysis methodologies vs individual task or alert.
6. Understand that not everyone will “get it”. Some people have a hard time with analysis and unfortunately this job is not for everyone.
It has been some time since that first class and we have definitely seen the fruits of our labor as we have some very capable analysts now. These analysts have not only gained a new skill, but the majority have gained a whole new career path. I say that’s a win for everyone involved!
If you have any questions or comments you can hit me up on twitter at @jackcr.