In the last installment of Sniper Forensics we discussed the nuances of “finding evil” in cases where it may be unclear whether or not a breach has occurred. In the specific example I used, a breach had not occurred, yet a proactive investigation was requested due to a previous incident on systems that were no longer in place.
In the days that followed my post, Harlan Carvey made some good points that I think need further attention and/or clarification. I really appreciate it when folks like Harlan take the time to not only read what I have written, but actually internalize it, try to extract the good, and point out the confusing. It can really help to make us (all bloggers) better writers.
To start off with, while this may seem like a basic concept to me, I suppose it requires some additional clarification. In my blog posts…pretty much all of them…there are details that I intentionally leave out. I work in a world of paranoid customers, non-disclosure agreements, and security clearances. I cannot divulge too much information about anyone or anything that could potentially allow the specific situation or specific customer to be identified. Most folks in the security community have faced, or are currently facing, similar challenges, so this should not come as a surprise to anyone. When I write a post to explain something that I did, or how I figured something out, you are not getting the entire picture, and you won’t, but that’s simply the nature of my work. Hopefully, my posts will provide enough background and enough technical information to allow you to improve your own methodology based on my experiences. If I shared everything I did, the post would be exponentially longer, and you would probably get bored reading it. For the sake of brevity and clarity, I only include the details I think made the most significant impact in the case. In my mind, that is the whole purpose for writing! Helping others be better by sharing experiences with the larger community.
In my previous post, I mentioned using the lessons learned from previous cases to help me “find evil”. I thought then, and still think today, that drawing from your own past experience is extremely valuable when working cases that are more nebulous than others that may be more straightforward. Since this was a financial case, and I have worked on a LOT of these types of cases over the past five years, I had a decent data sample to draw from.
Now, that is not to say that every case is similar. I have certainly seen my share of unusual cases that were unlike any I had ever seen before. But…and this is a pretty big BUT…the overwhelming majority of cases will share common elements. It only makes sense, in a case where the existence of a breach is uncertain, to draw from what you know to exist in similar cases from similar victims (type of business, type of systems, type of web server, etc.) where a breach DID occur. If you don’t find what you are looking for at first using this methodology, there are obviously other checks that you will need to make, but beginning from a place of “known bad” is a great place to start.
In terms of what to look at, from a technical standpoint, I mentioned dumping RAM, generating a timeline that includes the active file system, registry hives and NTUSER.dat files, pulling local system registry hives and NTUSER.dat files, and parsing the Master File Table ($MFT). I will also extract any log files I can get my hands on, including the local Windows Event logs, Dr. Watson logs, firewall logs (if there are any), and any specific application logs that may be present. I will also grab the Master Boot Record (MBR) and the NTFS boot file ($Boot) so that I can look for signs of boot sector infections.
I rip the hives with Harlan’s tool, “RegRipper”, and generate the timeline with The Sleuth Kit’s fls and Kristinn Gudjonsson's tool, "log2timeline". To parse the log files, I use Mandiant’s Highlighter, Microsoft’s Log Parser, and the command line utilities “strings”, “grep”, “gawk”, and “cut”. Notice that up to this point, I have not said anything about forensic images…it’s because while they are useful, they ARE NOT the only thing you need, and in my opinion they are of the least importance in solving the types of cases that I usually investigate.
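To give you a feel for the shape of that workflow, here is a rough sketch. The fls, mactime, and rip.pl invocations are the real tools, but they are commented out because the image and hive names are made up; the sample timeline entries and the quick cut-based field trimming at the end are invented stand-ins for the kind of eyeballing I do once the timeline exists:

```shell
#!/bin/sh
# Hypothetical sketch -- the image name, hive path, and file names below
# are all invented for illustration.
# Collection/parsing commands (commented out; they need real evidence files):
#   fls -r -m C: image.dd > bodyfile        # Sleuth Kit: bodyfile from an image
#   mactime -b bodyfile -d > timeline.csv   # bodyfile -> comma-delimited timeline
#   rip.pl -r SYSTEM -f system > system.txt # RegRipper: run the SYSTEM-hive plugins

# Synthetic stand-in for a mactime -d style timeline so the trimming step runs:
cat > timeline.csv <<'EOF'
Date,Size,Type,Mode,UID,GID,Meta,File Name
Wed Jan 05 2011 21:14:02,45056,m...,r/rrwxrwxrwx,0,0,1842,C:/WINDOWS/system32/msdtc32.exe
Wed Jan 05 2011 21:14:02,1204,macb,r/rrwxrwxrwx,0,0,1843,C:/WINDOWS/Prefetch/MSDTC32.EXE-0A114B2C.pf
EOF

# Trim down to the two columns I stare at most: timestamp and path.
cut -d',' -f1,8 timeline.csv | tail -n +2 > trimmed.txt
cat trimmed.txt
```

Nothing fancy…cut and tail are doing all the work, which is rather the point: once the data is in flat text, the standard command line utilities carry you a long way.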
Once you have your raw data, you can start looking for “bad stuff”. This is where your prior experience will really come in handy! In the example I used in my previous post, I stated that malware has to do three things. Here is how those three things relate to the Breach Triad:
- Get onto the system somehow (Infiltration)
- Run, and hopefully gather something – like credit card numbers (Aggregation)
- Do something with that something – generate an output file and/or get that data from the target system onto another system (Exfiltration)
To follow up with what I stated in my previous post, allow me to provide a bit more detail into each of those components.
Infiltration, simply stated, is how the bad guys got onto the system. Technically, there are only a few ways this can happen.
- Insider activity (intentional or unintentional)
- Open remote access
- Web based exploit
So, when looking for an infiltration point, in any case, regardless of the type of environment, start looking here. Where is “here”, you ask? OK…how about the Security Event Logs, if you have them. Do you have connections from unknown or non-standard IP addresses? Do you have logins from known IP addresses but outside the scope of what the customer would consider to be “normal”? For example, if Snuffy Joe only works from 0900 – 1700 Monday through Friday, and you suddenly see Snuffy Joe logging in at 2100, there might be an issue. You can also look in the log files generated by certain remote administration applications for the same type of data. Something else we see a lot of is printer share error messages. This is predominantly seen when an attacker uses Terminal Services (or RDP) to connect to the target, and he forgets to uncheck the box to bring his local resources (like his printer and his clipboard) with him. Then, when he connects, the target system tries to resolve his hostname as a local printer and fails. This will generate an error in the System Event log.
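As a rough illustration of the off-hours check, here is a sketch run against a made-up Security Event Log export. The file name, the user, the timestamps, and the IPs are all invented, and it assumes you have already dumped the log to CSV with something like Log Parser; 528 is the XP-era successful logon event ID:

```shell
#!/bin/sh
# Hypothetical sketch: assumes the Security Event Log was already exported
# to CSV (time, event ID, user, source IP). All values below are fabricated.
cat > security_export.csv <<'EOF'
2011-01-03 10:12:44,528,sjoe,10.1.1.25
2011-01-03 16:55:02,528,sjoe,10.1.1.25
2011-01-05 21:07:13,528,sjoe,203.0.113.7
EOF

# Flag successful logons (event ID 528) outside the 0900-1700 window:
awk -F',' '$2 == 528 {
    split($1, dt, " "); split(dt[2], t, ":");
    if (t[1] + 0 < 9 || t[1] + 0 >= 17) print
}' security_export.csv > offhours.txt
cat offhours.txt
```

Here the 2100 logon pops out immediately…and the fact that it also comes from an outside address is the kind of thing you then take back to the customer and ask about.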
Insider activity may be a bit more difficult to nail down since, in my opinion, you have to have something present (like malware, or a malicious PDF, or whatever) before you can try to identify where it came from. So in many of my cases, I circle back to the infiltration method after I have identified the result of the breach. Also, sadly, in the vast majority of my cases, I don’t have the logs necessary to identify the initial infiltration. Many systems are only set to the Windows default for security event logging, which normally only gives me one or two days of information to work with. Since the average time from breach to investigation is between 30 and 45 days, having a couple days of logs is of little help.
Also, in cases where the malware may no longer be present, you might be stuck with only scattered artifacts. The key is to know what those artifacts are, what they mean, and in what context (more to come on this later).
Attackers do stuff (very technical, I know), or else why make the attack in the first place? They are either looking to destroy something, or steal something. In my world, they are looking to steal stuff….credit card data, trade secrets, personally identifiable information, personal health care information…something they can turn around, sell, and make a profit from. With this being the case, attackers normally (but not always) use malware to do their dirty work for them. Remember, malware has to run, generate an output, and/or exfiltrate data.
This is why RAM dumps, timelines, and the registry hives are of such vital importance. Something can’t run on a system without showing up in RAM, now can it? And it can’t run in RAM if it isn’t first actually ON the system, right? And, many good pieces of malware register themselves as services, so the registry hives would potentially show when the malware was launched, who launched it, which command line switches were used, and when it was registered as a service. ALSO...while it can run and be deleted, you may still be able to find traces of its existence in the pagefile, or in the registry. So, I cannot stress enough the importance of knowing what you are looking for...and not just blindly wandering around.
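To make the service angle concrete, here is a hedged sketch of sweeping parsed registry output for services launching out of user-writable paths. The output format below only loosely imitates what a RegRipper services plugin produces, and the service names and paths are fabricated for the demo:

```shell
#!/bin/sh
# Hypothetical sketch: the services listing below is a fabricated stand-in
# for parsed SYSTEM-hive output; real plugin output differs by version.
cat > services.txt <<'EOF'
LanmanServer
  ImagePath = C:\WINDOWS\system32\svchost.exe -k netsvcs
msupdte
  ImagePath = C:\WINDOWS\Temp\msupdte.exe -i
Spooler
  ImagePath = C:\WINDOWS\system32\spoolsv.exe
EOF

# Services launching from Temp deserve a hard look -- legitimate services
# almost never live there. -B1 pulls in the service name above the path.
grep -i -B1 'Temp\\' services.txt > suspect_services.txt
cat suspect_services.txt
```

One grep, and the oddball service name is staring back at you along with its command line switch…exactly the kind of lead you then chase into the RAM dump.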
Here is a brief example from a case I recently worked. When I got onsite, I fired up F-Response, grabbed RAM, generated volatile data outputs and a timeline with my custom script, and started the images. While the images were burning, I opened up my RAM dumps with Memoryze, and started looking for things that were “out of place” (this comes from my 13 years of experience working with Windows systems). I also began looking at the output from my volatile data collection script. The first thing that popped out at me was that the output from promiscdetect indicated that the NIC was in promiscuous mode. Now, that does not mean that the NIC has loose morals…it means that it was picking up ALL network traffic…not just its own. This is normally an indicator of a network sniffer. So, I pulled out my handy keyword sheet that our team keeps of known malware packages and started grep’ing through the timeline for known sniffers. Within a few minutes, I had a hit! So, I looked in my RAM dumps for that process name, and there it was. So then, I grep’ed through the parsed registry hives and found that the malware was registered as a service. Next, I exported the binary from the RAM dump and ran strings against it to see if I could get a general idea of what it did. Right there was an IP address, a username, and some other colorful keywords. There was also a password that was used to protect the output files.
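The keyword sweep itself is nothing fancy. A hypothetical version, with an invented keyword sheet and an invented timeline (the sniffer name below is made up, not from the case), looks like this:

```shell
#!/bin/sh
# Hypothetical sketch: the keyword list and timeline entries are invented
# stand-ins for a team's known-malware sheet and a real filesystem timeline.
cat > known_bad.txt <<'EOF'
anysniff
fgdump
msrip
EOF

cat > timeline.txt <<'EOF'
Wed Jan 05 2011 21:14:02 ... C:/WINDOWS/system32/drivers/anysniff.sys
Wed Jan 05 2011 21:14:40 ... C:/WINDOWS/system32/notepad.exe
EOF

# One pass over the timeline, case-insensitive, every keyword at once:
grep -i -f known_bad.txt timeline.txt > hits.txt
cat hits.txt
```

The -f switch is the whole trick…your team’s accumulated keyword sheet becomes a single command, which is how a “within a few minutes” hit is actually possible.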
So you can see how having the information I have been referring to can be very helpful in quickly identifying a breach even when you are not 100% sure what you are looking for. Also, using your own past experience (like a list of known malware keywords, or prior administrative experience) as well as the collective experience of your team can prove extremely useful.
The third aspect of the Breach Triad is Exfiltration, or getting the stolen data from the victim system to a system controlled by the attackers. After all, what good is an attack if you can’t get your hands on the goods, right? It HAS to go from one system to another, so there are a few places that you can look.
Logs are a great place to start (if you have them). Firewall logs are usually where I begin to look for exfiltration points. Many organizations I have seen that actually have firewalls in place have good ingress (inbound) ACLs (Access Control Lists, or rules) but fail to implement solid egress (outbound) rules. I suppose they are more worried about what’s coming into their network than what’s making its way out. And normally, I can understand and agree with that, unless of course you have malware on your system that is siphoning important data and sending it to an attacker halfway across the globe. Then that would be bad. So in my opinion, egress filters are just as important as ingress filters. By checking the logs, you can usually see what connections were being made and where they were being made to. Always helpful. Additionally, the registry may also contain information that could prove valuable…stuff like drive shares (especially when made to hosts that don’t exist on the internal network), and typed URLs.
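A minimal sketch of that egress check, against a fabricated space-delimited firewall log (real firewall log formats vary wildly, so treat the column layout here as an assumption):

```shell
#!/bin/sh
# Hypothetical sketch: a made-up firewall log with action, source IP,
# destination IP, and destination port columns. All addresses are invented.
cat > fw.log <<'EOF'
ALLOW 10.1.1.25 10.1.1.5 445
ALLOW 10.1.1.25 203.0.113.7 21
ALLOW 10.1.1.30 10.1.1.5 139
EOF

# Pull out connections leaving the internal 10.x range -- an FTP push to
# an external host is a classic exfiltration tell:
awk '$3 !~ /^10\./ { print $2, "->", $3, "port", $4 }' fw.log > egress.txt
cat egress.txt
```

One internal host quietly talking FTP to an outside address is exactly the sort of thing good egress filtering would have blocked…and exactly the sort of thing the logs hand you in seconds when you know to look.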
I understand that not every case is the same. You can count on seeing differences in each and every case you work. Some are extreme, others are more subtle, but I also guarantee you will see similarities. I also understand that not every blog post is as comprehensive as a textbook. It’s not meant to be. These posts are meant to be short snippets of information that will be useful to other investigators.
Hopefully, by sharing what we do and how we do it (to a limited extent) we can help others who find themselves in a similar situation and are wondering what to do and where to start.