Attribution and Threat Hunting, the Missing Steps After an Incident

October 5, 2020 Sage Advice

You’ve cut off the attacker’s access, locked them out of the compromised computer, and determined the responsible group. My God, now it’s time to get the president on the phone, call the National Guard, and launch an offensive cyber strike. At least, that’s what I wish would happen when we catch the bad guys. Sadly, more often than not, what I hear from many SOC analysts is that they stop the investigation once the threat has been neutralized and move on to the next alert. It’s the equivalent of someone breaking into your house, getting them out, and never following up to learn who they were, what they wanted, and how to stop them next time. This is where attribution becomes important. And while we may not be launching offensive cyber strikes against an adversary, attributing an attack to a specific threat actor group can yield huge results in understanding motivation, performing threat hunting, and building alerts to stop the next attack.

Attribution to a Group

To apply this process, let’s look at a malware-based incident. You catch a computer that has been infected by a VBScript that drops a malicious DLL, which is then executed with regsvr32.exe. The incident has been contained, but let’s keep running with the malware. What would success look like in this campaign? To help determine that, I like to pull the malware and analyze it further. Once I have the malicious file, I feed it through various techniques to extract behavioral IOCs and identify the malware family it belongs to. Quick actions to analyze malware and extract TTPs:

Sandbox detonation, static analysis, and dynamic analysis.
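
As a rough illustration of the static analysis step, here is a minimal Python triage sketch that hashes the dropped DLL and pulls printable strings for IOC extraction. The file path and string-length threshold are assumptions, and this is only a starting point before sandbox detonation or full reversing.

```python
# Minimal static-triage sketch: hash the dropped DLL and pull printable
# strings for IOC extraction. File path and thresholds are assumptions.
import hashlib
import re

SUSPECT_FILE = "dropped_sample.dll"  # hypothetical path to the quarantined DLL

def sha256_of(path: str) -> str:
    """Return the SHA-256 hash used to pivot into sandbox/OSINT lookups."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def printable_strings(path: str, min_len: int = 6) -> list[str]:
    """Crude strings extraction: runs of printable ASCII at least min_len long."""
    with open(path, "rb") as f:
        data = f.read()
    return [m.group().decode() for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

if __name__ == "__main__":
    print("SHA-256:", sha256_of(SUSPECT_FILE))
    # URLs, registry paths, and mutex names in the strings output often hint
    # at the malware family before a full sandbox run finishes.
    for s in printable_strings(SUSPECT_FILE)[:50]:
        print(s)
```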

A lot of malware uses very distinct techniques tied to specific groups, so attributing the malware to a group is huge: it lets you understand the group’s likely objective. In the case above, quick malware analysis identified this as Ursnif. From here I can perform OSINT (Googling) research, or even better, just look it up in the MITRE ATT&CK Framework.

https://attack.mitre.org/software/S0386/

“Ursnif is a banking trojan and variant of the Gozi malware observed being spread through various automated exploit kits, Spearphishing Attachments, and malicious links.[1][2] Ursnif is associated primarily with data theft, but variants also include components (backdoors, spyware, file injectors, etc.) capable of a wide variety of behaviors.[3]”
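
If you want to do that lookup programmatically instead of through the website, the technique mapping can be pulled from MITRE’s public ATT&CK STIX bundle. The sketch below is one possible approach, assuming the published enterprise-attack JSON and the STIX 2.x object fields used in that bundle; treat it as illustrative rather than an official API.

```python
# Hedged sketch: download the public MITRE ATT&CK STIX bundle and list the
# techniques mapped to a software entry such as Ursnif (S0386).
import requests

ATTACK_JSON = ("https://raw.githubusercontent.com/mitre/cti/"
               "master/enterprise-attack/enterprise-attack.json")

def techniques_for_software(name: str) -> list[str]:
    objects = requests.get(ATTACK_JSON, timeout=60).json()["objects"]
    by_id = {o["id"]: o for o in objects}
    # Find the malware/tool object whose name matches (e.g. "Ursnif").
    sw = next(o for o in objects
              if o["type"] in ("malware", "tool") and o.get("name") == name)
    # "uses" relationships link the software to its attack-pattern objects.
    return sorted(
        by_id[r["target_ref"]]["name"]
        for r in objects
        if r["type"] == "relationship"
        and r.get("relationship_type") == "uses"
        and r.get("source_ref") == sw["id"]
        and r["target_ref"].startswith("attack-pattern--")
    )

if __name__ == "__main__":
    for technique in techniques_for_software("Ursnif"):
        print(technique)
```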

Threat Hunting

Perfect! Now we’ve matched the malware to a specific software entry or threat actor, and we can say their primary motivation is financial gain through use of a banking trojan. Next, let’s use the MITRE ATT&CK Framework to map out their techniques and perform threat hunting. On the MITRE ATT&CK site, go back to the Ursnif software page, find the “ATT&CK Navigator Layers” drop-down, and click view.

Here is the actionable intel we need to start threat hunting. Based on the malware’s prior success infecting one computer, it is likely the attacker has already gotten a foothold elsewhere in the environment. Hunting should immediately begin by looking for other endpoints infected through the same initial access and execution techniques. After that, hunting should then focus on later post-exploitation techniques such as persistence, lateral movement, and command and control (C2). If anything is found, it’s time to go back into incident response mode. If nothing is found, we can go full circle and start to develop alerts based on the behavioral IOCs found in the malware.
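
As an example of what that hunt could look like, here is a hedged Python sketch that sweeps an exported set of process-creation events (for example, a CSV dump of Sysmon Event ID 1) for the same initial access and execution chain: a script host spawning regsvr32.exe to load a DLL. The column names and export format are assumptions about your own telemetry, not a standard schema.

```python
# Hunting sketch over an exported set of process-creation events.
# Field names (host, image, parent_image, command_line) are assumed
# columns in the export, not a standard schema.
import csv

SCRIPT_HOSTS = ("wscript.exe", "cscript.exe")

def hunt_regsvr32_from_script(events_csv: str) -> list[dict]:
    """Flag hosts where a script host spawned regsvr32.exe loading a DLL."""
    hits = []
    with open(events_csv, newline="") as f:
        for row in csv.DictReader(f):
            image = row["image"].lower()
            parent = row["parent_image"].lower()
            cmdline = row["command_line"].lower()
            if (image.endswith("regsvr32.exe")
                    and any(parent.endswith(p) for p in SCRIPT_HOSTS)
                    and ".dll" in cmdline):
                hits.append({"host": row["host"],
                             "command_line": row["command_line"]})
    return hits

if __name__ == "__main__":
    # "process_events.csv" is a hypothetical export from your SIEM/EDR.
    for hit in hunt_regsvr32_from_script("process_events.csv"):
        print(hit["host"], "->", hit["command_line"])
```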

Alert Development

Moving into alert development, there are two goals here: reducing attacker dwell time and detecting attacker behaviors. The first goal is to take the lessons learned from the previous incident and see how the attack can be prevented or the attacker’s dwell time reduced. This makes sure we don’t get burned by the same attack again. It’s horrible when security teams keep getting hit through the same attack vector and do absolutely nothing to try to stop it. If a bad guy kept trying to break into my house through the back window, I would sure as hell put a lock on the window, and maybe add an additional control, like my last target from the shooting range, to scare them off.

Behavioral Alerts

The second goal, as mentioned, is always the most difficult and not always achievable: setting alerts to either deny or detect attacker behavior. What this means is we are not focused on blocking atomic IOCs like IPs and URLs. We are focused on detecting behaviors, like regsvr32.exe being used to load a DLL that was dropped on the system by a VBScript. Between campaigns, attacker IPs and URLs will change, but what does not usually change are their behaviors. If we can detect these patterns, it will go a long way toward stopping the next campaign. Much of the work performed during threat hunting can be turned around and built into alerts. In the case of this threat actor, I was able to take the dropped malicious DLL and set a detection so that any time regsvr32 is used to load a DLL this way, it lights up the alerts.
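
To make that concrete, here is a hypothetical behavioral rule in the same spirit: it ignores hashes, IPs, and URLs entirely and fires whenever a script host spawns regsvr32.exe loading a DLL from a user-writable path. The event shape and the list of user-writable paths are assumptions for illustration; in practice this logic would live in your SIEM or EDR rule language.

```python
# Behavioral-detection sketch: alert on the pattern (script host -> regsvr32
# loading a DLL from a user-writable path) rather than on any atomic IOC.
# Event fields and the user-writable path list are illustrative assumptions.
USER_WRITABLE = ("\\users\\", "\\programdata\\", "\\temp\\")
SCRIPT_HOSTS = ("wscript.exe", "cscript.exe")

def regsvr32_behavior_alert(event: dict) -> bool:
    """True when regsvr32 loads a DLL that a script host likely dropped."""
    image = event.get("image", "").lower()
    parent = event.get("parent_image", "").lower()
    cmdline = event.get("command_line", "").lower()
    return (image.endswith("regsvr32.exe")
            and any(parent.endswith(p) for p in SCRIPT_HOSTS)
            and ".dll" in cmdline
            and any(p in cmdline for p in USER_WRITABLE))

# Example event that would trigger the alert (hypothetical values):
sample = {
    "image": "C:\\Windows\\System32\\regsvr32.exe",
    "parent_image": "C:\\Windows\\System32\\wscript.exe",
    "command_line": "regsvr32.exe /s C:\\Users\\victim\\AppData\\Local\\Temp\\loader.dll",
}
assert regsvr32_behavior_alert(sample)
```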

Recap

The end result of going through this process with the malware mentioned above: I was able to cut attacker dwell time in half during the next campaign. Also, during the hunt I found malware that had come from a different attack vector but used similar TTPs. Another fine day of doing incident response and finding bad guys. So next time an analyst tells you attribution doesn’t matter in an incident, show them the steps they might not be taking and how attribution leads to threat hunting and stopping the next attack.
