If you read my last blog post then you may know that I’m a big fan of internal training as a way to improve your team’s technical capabilities. I feel that if you focus training on your company’s specific needs, it’s not only a win for the company but also a win for the analyst, who will be able to perform better in their job and feel more comfortable doing so. This is especially true for new hires who may be entirely new to the DFIR field.
At the beginning of last year my company started moving forward with staffing an internal 24/7 security operations center. We knew we were not going to find entry-level people who could sit down at an NSM (Network Security Monitoring) console on day one and effectively analyze alerts, so we were going to need to develop some type of training. We also knew we would need a level of confidence in their abilities before turning anything over to them. I feel it takes a person at least 3 to 6 months to become comfortable analyzing alerts, and probably an additional 3 to 6 months before they are truly effective at it. Our initial goal was to get these new people familiar with the tools they would be working with and to spend time coaching, mentoring and monitoring them until they became that effective analyst.
Based on time constraints for our first class, we initially trained them on the tools they would be using and worked through the specifics of the alerts they were analyzing as we went through the weeks upon weeks of coaching and mentoring. These people eventually began to understand what they were looking at and why they were looking at it, but there were definitely some shortcomings in this approach. One drawback was the initial lack of understanding around what exactly they were looking at. Take the HTTP protocol, for example: the difference between a GET and a POST seems obvious to me, but it’s far from obvious to someone whose only experience is provisioning users or installing operating systems. Another major drawback was the limited amount of time we had to spend on peripheral topics that help analysts do their job more efficiently. Identifying a hostname from an IP address may seem like a simple task, but in a very large environment it can be quite challenging at times. We eventually got them to a place where they were able to handle alerts, but it took many late night phone calls, weeks spent training off site since these people were in a different geographical location, and the time it simply takes for someone to grow into a position, which was longer than some had expected.
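To make the GET versus POST point concrete, here’s a minimal Python sketch, assuming the public httpbin.org test service purely as a stand-in target. On the wire, the GET carries its parameters in the URL while the POST carries them in the request body, and that body is where interesting things like submitted form data tend to live.

```python
import http.client

# A GET asks the server for a resource; any parameters ride in the URL itself.
conn = http.client.HTTPSConnection("httpbin.org")
conn.request("GET", "/get?user=analyst")
resp = conn.getresponse()
print("GET  ->", resp.status)
resp.read()  # drain the body so the connection can be reused

# A POST submits data in the request body -- the pattern you see when form
# data (or exfiltrated content) leaves a host.
conn.request("POST", "/post",
             body="user=analyst&action=login",
             headers={"Content-Type": "application/x-www-form-urlencoded"})
resp = conn.getresponse()
print("POST ->", resp.status)
conn.close()
```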
With our first class of students up and running, we had some time before our next class was scheduled to start. We spent it going over lessons learned and developing training based on the areas we identified. When we were finished we had five weeks of training that each new class would go through. Our goal for this training was to get them familiar with the most important aspects of their job. We developed content centered around topics such as:
1. Org structure
2. Linux overview
3. Networking fundamentals
4. PCAP analysis
5. Network flow analysis
6. HTTP/SMB protocol analysis
7. Log analysis
8. Alert analysis
9. Regular expressions to aid in reading detection rules (see the sketch after this list)
10. Host identification
11. Hands-on labs and testing centered around true positive and false positive alerts
12. Live supervised alert analysis
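As for the regular expression topic mentioned above, here’s a rough Python sketch of the flavor of exercise it covers, using a completely made-up, Snort-style rule; the goal is being able to read the pattern inside a detection rule, not to write production signatures.

```python
import re

# A made-up, Snort-style rule used only to illustrate reading detection rules.
rule = ('alert tcp $HOME_NET any -> $EXTERNAL_NET 80 '
        '(msg:"Suspicious POST to randomized PHP URI"; content:"POST"; http_method; '
        'pcre:"/\\/[a-z0-9]{16}\\.php$/Ui"; sid:1000001;)')

# Pull the human-readable pieces out of the rule body.
msg = re.search(r'msg:"([^"]+)"', rule).group(1)
pcre = re.search(r'pcre:"([^"]+)"', rule).group(1)
print("Why it fires:", msg)
print("What it matches:", pcre)

# Reading the inner pattern: a URI ending in /<16 lowercase alphanumerics>.php,
# matched case-insensitively (the trailing "i" modifier in the rule).
uri_pattern = re.compile(r'/[a-z0-9]{16}\.php$', re.IGNORECASE)
print(bool(uri_pattern.search("/ab12cd34ef56ab78.php")))  # True
```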
We also ended the training with a written test, not only to see how much each student had retained, but also to identify areas where we potentially needed to tweak our training.
Having put our new analysts through these five weeks of training, we still understood that we had only provided them with the tools, and hopefully the mindset, to be successful. The real learning and development comes from actively analyzing alerts, making decisions based on your analysis and collaborating with your peers about what you and they are seeing.
This was truly an amazing opportunity for me and one from which I learned a lot. If you are developing this kind of training and teaching it to others, some of the takeaways I would like to share are:
1. Take note of the questions your students ask. If they have a general theme you may need to tweak your training.
2. Don’t assume your students are at a certain technical level.
3. Regardless of job pressures, try to devote your full attention to your class.
4. Continually ask questions throughout the class to gauge level of understanding.
5. Focus on analysis methodologies rather than individual tasks or alerts.
6. Understand that not everyone will “get it”. Some people have a hard time with analysis and unfortunately this job is not for everyone.
It has been some time since that first class and we have definitely seen the fruits of our labor, as we now have some very capable analysts. These analysts have not only gained a new skill; the majority have gained a whole new career path. I say that’s a win for everyone involved!
If you have any questions or comments you can hit me up on twitter at @jackcr.