Case 001 Super Timeline Analysis

April 12, 2021

Case001 Super Timeline Creation and Analysis

Before starting this lab it is strongly recommended that you examine the memory, autoruns, PCAP, or logs first. Come to this lab with indicators to search for.

Learning Objectives of Super Timeline Creation and Analysis

  • Be able to explain what a super timeline is.
  • Understand the sources of data used for a super timeline.
  • Be capable of building a super timeline and analyzing it.
  • Have a good understanding of MACB Time Stamps.
  • Understand the basic concepts in identifying lateral movement.

Semi-Required Knowledge

  • Events and indicators of interest from investigating other artifacts first (prior labs).
  • Basic Linux Command Line Fu
  • Basic Virtual Machine Operation.
  • Mounting E01 files <– This needs to be done prior to starting this lab.
    • Process the image file itself – not the mountpoint. We are mounting it in case we want to explore the image.
  • Creating a memory image timeliner body file (covered here again and in the FLS section).

Required Posters

  1. SANS Hunt Evil Poster
  2. SANS Windows Forensics Poster

A note about posters: updated posters seem to require a sans.org account. For analysts with an active account, updated posters can be found at:

SANS Posters

Common Tools

Tools Covered Here

  • Volatility
    • timeliner
  • Log2Timeline
  • Pinfo
  • Psort
  • Eric Zimmerman’s Timeline Explorer

Other Learning Resources on this Topic

Music

Notes

  • Super Timeline analysis is easy to start, and hard to master. Be patient. Read sections on the matter from the books above.
  • Before beginning this lab you should have the E01 file from the system you are processing mounted to your SIFT Station
  • Keep solid notes on your thinking around evidence and data that you find
    • This is for teammates to understand your thinking
    • Understand your own thinking later… or after sleep.
  • Notes should be accompanied by screenshots that tell a story
    • Examples: highlights, boxes, arrows, text. The reader should quickly understand what they’re looking at.
  • A great note-keeping App that teams can use to coordinate is OneNote.
    • Each host gets a tab etc.
  • A great piece of software for taking screenshots is Greenshot.

Tools

This is a tools-heavy lab. The following are highlights of the tools you should be familiar with.

Volatility Timeliner, MFTParser, and Shellbags modules

Volatility timeliner is a module for volatility that extracts many timeline-able events from memory and outputs them into a format suitable for timelining software. The MFTParser and Shellbags grab additional data from the Master File Table (MFT) and user Shell Bags for the timeline. Artifacts that have not been written to disk yet can sometimes be found in memory.

Vol2 Timeliner Docs

Vol3 Timeliner Docs

Vol MFT Parser

Vol Shellbags

Andrea Fortuna’s AutoTimeliner Tool

Log2Timeline

Log2Timeline is an amazing piece of kit. This essential program was written by a SANS Student, Kristinn Gudjonsson. This was allegedly done following a suggestion from Rob Lee. What an amazing outcome. Log2Timeline, also known as Plaso, is one of the best programs for creating timelines. Period. Investigators can process entire hard disk images where they “throw the kitchen sink at it” and parse everything. Alternatively, they can parse only select items from the disk for more concise timelines. Log2Timeline processes disk images and places the findings in a plaso dump file.

Pinfo.py

Plaso Info is a tool that returns information about a plaso dump file.

Psort.py

Plaso sort processes the plaso dump file. The most common file type to process the data into is a CSV.

PSteal.py

PSteal combines Log2Timeline and Psort into one action for a quick slice of the image. It will not be demonstrated in this lab.

Image_Export.py

From the help header:

This is a simple collector designed to export files inside an image, both within a regular RAW image as well as inside a VSS. The tool uses a collection filter that uses the same syntax as a targeted plaso filter.
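For example, the same triage filter used later in this lab could drive image_export.py to pull just those files out of the E01 for manual review. This is only a sketch: the -f (filter file) and -w (destination directory) switches are assumptions based on the parallel log2timeline usage, so confirm them with image_export.py --help for your Plaso version.

image_export.py -f /usr/share/plaso/filter_windows.txt -w ./dc01-exported-files ../E01-DC01/20200918_0347_CDrive.E01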

Eric Zimmerman’s Timeline Explorer

Eric Zimmerman wrote, and maintains, one of the best timeline viewing tools on the planet: Timeline Explorer. It is highly recommended that Timeline Explorer be used to view the super timeline CSV files.

Super Timelines Super Knowledge

Super timelines are made up of many different data sources found on a system’s disk and in its memory. Investigators can use tools like Log2Timeline.py to process a disk image and collect vast amounts of data for the timeline. Sources of data include the registry, system logs, and much more. All of these data points are placed into an SQLite database called a dump file. Volatility is also able to take timestamped events from memory images and add them to a body file. Log2Timeline then processes that body file with the “mactime” parser, extracting the timestamped memory events and placing them into the same dump file. Finally, Psort processes the dump file into a CSV file that can easily be examined using Eric Zimmerman’s Timeline Explorer or Excel. This process is illustrated later in this section.

The best way to approach super timeline creation is to kick it off almost immediately after collecting the hard drive and obtaining an FLS collection of the drive. Super timeline processing can take many hours for large server drives. That won’t be the case for this lab, but in real investigations it easily can. The best approach is for investigators to have the super timeline processing in the background as they tackle other tasks such as memory analysis.

It is best to process the E01 image file with Log2Timeline and NOT the mount point where it is mounted.

You Don’t Need to Go Big to Win

“Throwing the kitchen sink at it” is an American way of saying “throw everything at it.” Often, super timelines are created with the Kitchen Sink method: use all the parsers to parse all the things for MAXIMUM DATA. Kitchen Sink timelines are not always needed. A targeted timeline is often sufficient for an investigator to get a very good idea of what occurred during an incident. Targeted timelines also have the advantage of being built much more quickly than a Kitchen Sink timeline. Speed can be important in today’s investigations; for example, when facing a ransomware event.

A great practice is to create a targeted timeline and analyze it while the Super Timeline is processing in the background. Modern digital forensics enables professionals to conduct “battlefield forensics”: retrieving a small but thorough set of artifacts quickly, often remotely. Cases can often be solved using a small set of targeted artifacts. Super timelines are still very relevant, and analysts should understand how to build them.

Targeted Timelines

Targeted timelines are created by an analyst who knows what they want to look at. Each case may call for different artifacts to be analyzed, and different hosts in the same case may even have different requirements. Below is a quick look at common artifacts and what they may offer an investigator during an investigation. References for this table are the SANS Windows Forensics Posters and “Incident Response & Computer Forensics” (Luttgens, Pepe, and Mandia).

Artifact: MFT (Master File Table)
Evidence of: Data was present (downloaded, deleted, etc.)
Description: NTFS drives organize their stored data through a table. This table tracks information such as size, associated times, directory, and parent directory.
Analyst Notes: A somewhat lower-level view of the volume that takes a bit longer to parse. Present with every volume.
Collection Notes: Specialized software is needed to retrieve it from a live box.
Log2Timeline Parser: mft

Artifact: USN Journal (Update Sequence Number Journal)
Evidence of: Data was present (downloaded, deleted, etc.)
Description: NTFS drives may include a USN Journal. This can be parsed to provide a high-level overview of changes made to a volume. Data may include the change that occurred, when it occurred, the file name, the file attributes, and the MFT data.
Analyst Notes: A somewhat higher view of the volume that is quicker to parse. Not required by NTFS.
Log2Timeline Parser: usnjrnl

Artifact: Amcache.hve
Evidence of: Program execution (and presence)
Description: ProgramDataUpdater tracks application experience data in a registry database in the user’s AppData Local folders. This info drives the WIN+TAB function in Win10.
Analyst Notes: Every executable run is tracked here. First run time = last modification time of the key. Full path and SHA1 of the EXE are located here as well. Times are $StandardInfo.
Log2Timeline Parser: amcache

Artifact: Prefetch
Evidence of: Program execution (and presence); file/folder opening
Description: CacheManager maps the path and executable name of every application that is run to a .pf file. Each .pf includes the last time of execution, number of times run, and device and file handles used by the program.
Analyst Notes: Creation time of the .pf is 10 seconds after the first time the program was run. Modified time is the last time the program was executed. If the .pf is parsed, it contains timestamps for the last 8 times that program was executed.
Collection Notes: C:\Windows\Prefetch. Can be viewed on a live system.
Log2Timeline Parser: prefetch

Artifact: SRUM (System Resource Usage Monitor)
Evidence of: Program execution; network activity
Description: SRUM records 30-60 days of data that tracks system performance. It tracks items such as applications run, the user account tied to the execution, and bytes sent/received per app per hour.
Analyst Notes: Enriches the timeline to assist in correlation and to catch data missed in other sources.
Log2Timeline Parser: srum

Artifact: EVTX Logs
Evidence of: Account creation; service (malware) installation; remote logons; exploits; PsExec usage
Description: EVTX logs are the system logs for Windows. There are literally hundreds of event log files in Windows, each corresponding to a different silo of events. For example, Security events are tracked in the Security EVTX file.
Analyst Notes: These are a must for analysis. If analyzing a dead-box image, Log2Timeline is a quick win. If extracting from a live box, Eric Zimmerman's EVTX Explorer is a great tool to dump all the events into a single CSV for analysis. Great starter logs: Security, System, Terminal Services logs, Task Scheduler Maintenance logs.
Log2Timeline Parser: winevtx

Artifact: Scheduled Tasks
Evidence of: Persistence; account usage
Description: Scheduled tasks show up in two EVTX logs: Security.EVTX and Microsoft-Windows-TaskScheduler Maintenance.
Analyst Notes: Check both EVTX files for 4698, 4702, 4699, 4700, and 4701 in the Security EVTX; 106, 140, 141, 200, and 201 in the Task Scheduler EVTX.
Log2Timeline Parsers: winevtx, windows_task_cache

Artifact: Shellbags (aka BagMRU)
Evidence of: File/folder opening
Description: Stores information about where users went locally and on the network.
Analyst Notes: The last entry for a file is the last time it was opened.
Log2Timeline Parser: bagmru

Artifact: Office MRU
Evidence of: Office document opening
Description: MS Office tracks lists of recently opened documents.
Analyst Notes: The last entry for a file is the last time it was opened.
Log2Timeline Parser: microsoft_office_mru

Artifact: Recycle Bin
Evidence of: Files being deleted
Description: C:\$Recycle.Bin. File deletion times and the original filename are contained within. $I files contain the deletion date and time, original filename, and original file size. $R files contain a copy of the ORIGINAL FILE DATA.
Analyst Notes: Example: Haxor deletes hacker.exe from the C drive using Joe's account: C:\$Recycle.bin\<Joe's SID>\$RD2445.exe
Collection Notes: Hidden; C:\$Recycle.bin
Log2Timeline Parsers: recycle_bin, recycle_bin_info2

Some examples of targeted timelines (don’t do these yet if you are following along):

A mixed search for evidence of execution, persistence, logons, network connections and data presence:

log2timeline.py --parsers="prefetch,amcache,winevtx,srum,usnjrnl" --status_view window dc01.dump ../E01-DC01/20200918_0347_CDrive.E01

A mixed search for evidence of execution, persistence, logons, network connections, deleted files and data presence:

log2timeline.py --parsers="prefetch,amcache,winevtx,srum,recycle_bin,recycle_bin_info2,usnjrnl" --status_view window dc01.dump ../E01-DC01/20200918_0347_CDrive.E01

A search for web traffic, persistence, logons, network connections, data presence:

log2timeline.py --parsers="webhist,winevtx,srum,usnjrnl" --status_view window dc01.dump ../E01-DC01/20200918_0347_CDrive.E01

Super Timelines (aka The Kitchen Sink)

The targeted timelines above are fine examples of super timelines. However, many would argue that for a true super timeline you must capture the maximum amount of residue from a system. It is truly up to the analyst. Again, a good technique is to first create a targeted timeline to gather the essentials. When the targeted timeline is complete, the analyst should initiate the slower, more thorough processing of the image.

Example of super timeline:

log2timeline.py --status_view window dc01-super.dump ../E01-DC01/20200918_0347_CDrive.E01

Simply leaving the parser list out of the command runs a pretty decent collection of options. However, it does not extract everything.

Running Log2Timeline without any parsers listed, as of the time of this writing, runs the win_gen, winevtx, and olecf_automatic_destinations parsers against a Windows image. This is a very thorough timeline that will likely give investigators everything they need.

Processing the Images General Overview

The following is a rough guide to creating super timelines. It is not law.

  1. Process memory image with Volatility Timeliner, Shellbags, and MFT modules into a single memory timeline body file.
  2. Process E01 image timeline data with log2timeline into a plaso dump file with selected parsers.
  3. Process the memory body file into the plaso dump file with the mactime body parser.
  4. Sort the data with psort into a CSV.
  5. Filter the CSV to remove excess Windows noise if desired. A post processing filter is not demonstrated in this lab.

An illustrated look at the process is provided for additional clarity.

The commands will be broken down further in the lab.
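In text form, the flow looks roughly like this:

memory image --> vol.py timeliner / shellbags / mftparser --> individual body files --> cat --> combined memory body file
disk image (E01) --> log2timeline.py (selected parsers) --> plaso dump file
combined memory body file --> log2timeline.py --parsers="mactime" --> same plaso dump file
plaso dump file --> psort.py --> CSV --> Timeline Explorer / Excel --> analysis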

Creating the Memory Body File

A body file is a format specific to The Sleuth Kit, an amazing set of open-source command line forensics tools. Volatility has three modules that will be used to parse key forensic material from the DC’s memory image. The MFT module will carve out Master File Table residue that was in memory at the time of capture. The Shellbags module will retrieve registry information about Windows Explorer GUI settings that was stored in memory. Artifacts often live in memory before, or at the same time as, they are written to disk. Some artifacts are best collected from both memory and the disk.

Create a robust memory timeline body file that pulls the standard timeline data, shellbags, and MFT data from the memory image. Notice all three commands point to the same body file. Every command processes the memory for different data and adds it to the body file.

vol.py -f /cases/szechuan/dc01/memory/citadeldc01.mem --profile=Win2012R2x64 timeliner --output=body --output-file=./dc01-super-mem-time.body

vol.py -f /cases/szechuan/dc01/memory/citadeldc01.mem --profile=Win2012R2x64 shellbags --output=body --output-file=./dc01-shellbags.body

vol.py -f /cases/szechuan/dc01/memory/citadeldc01.mem --profile=Win2012R2x64 mftparser --output=body --output-file=dc01-mft.body

Case 001 Lab Note: At the time of this writing the Windows 10 image from the desktop was not easily parsed with Volatility. Analysts should check for updated profiles and tools. The Super Timelines created from either disk image will be sufficient to crack the case.

Combining the Body Files

cat dc01-shellbags.body >> dc01-super-mem-time.body

cat dc01-mft.body >> dc01-super-mem-time.body

Note the size of the shellbags output. Were any shellbags artifacts found in memory? No. Shellbags are not active on Windows Server 2012 by default.
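A quick way to compare the body file sizes from the command line (assuming the files were written to the current directory as shown above):

ls -lh dc01-*.body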

Breakdown:

vol.py calls Volatility.

-f designates the memory image file.

--profile tells volatility the Operating System profile to apply.

mftparser (or shellbags, or timeliner) designates the Volatility module to run.

--output=body instructs the module to export the data in Sleuth Kit body file format.

--output-file tells the module which file to write the data to.

You will now have a body file containing a useful collection of Windows events that were stored in memory. This data will be used to enrich the data pulled from the disk image by Log2Timeline.

Vol3

Volatility 2.x is coming to an end. Volatility 3.x is the newest version. The following commands are to help analysts get started on using the new version.

vol3 -f memory.mem timeliner.Timeliner --create-bodyfile

Notice the size difference between the artifacts extracted from memory with Volatility 2.x versus 3.x. This could be due to deduplication efforts, or simply that there are not yet as many plugins for Volatility 3.

Desktop Image Notes for Case001

Remember, at the time of this writing, memory analysis for the Memory Image from the Desktop was not easily accomplished. A super timeline for that machine should be created without the addition of the memory body files.

Timeline Dump File Creation

The following section is meant for the analyst to follow along with. Analysts following this section should end up with three timelines: a triage timeline, a targeted timeline, and a super timeline.

Log2Timeline Command Breakdown

The general breakdown of the Log2Timeline commands is:

log2timeline.py Calls Log2Timeline

--parsers= Comma-delimited list of parsers to apply. This option will be used in the following timeline creation examples.

--status_view window Brings up a nice window showing the progress of the timeline. The status window will look something like this when you run Log2Timeline.

-f /usr/share/plaso/filter_windows.txt Designates the use of a filter. Filters can be used to instruct Log2Timeline to only extract certain files. This particular filter is included in the installation of Log2Timeline. It is intended for triage analysis. From the file itself:

Filter file for log2timeline for triaging Windows systems.
#
# This file can be used by image_export or log2timeline to selectively export
# few key files of a Windows system. This file will collect:
# * The MFT file, LogFile and the UsnJrnl
# * Contents of the Recycle Bin/Recycler.
# * Windows Registry files, e.g. SYSTEM and NTUSER.DAT.
# * Shortcut (LNK) files from recent files.
# * Jump list files, automatic and custom destination.
# * Windows Event Log files.
# * Prefetch files.
# * SetupAPI file.
# * Application Compatibility files, the Recentfilecache and AmCachefile.
# * Windows At job files.
# * Browser history: IE, Firefox and Chrome.
# * Browser cookie files: IE.
# * Flash cookies, or LSO/SOL files from the Flash player.

The first file designated after that is the destination followed by the source. In other words: Log2timeline.py destination.dump source.E01.

If an analyst wants to target a particular plugin within a parser they simply address it in a parser/plugin format. For example, --parsers="esedb/srum" would select only the SRUM plugin of the esedb parser.

The following methods are examples on how these timelines can be created. The final example is one of the most thorough timelines possible. The first two are examples of light and targeted timelines. Different cases will call for different targeting.

Creating a Light Targeted Timeline (Triage Heavy) Dump

The first timeline we are going to create will be a solid triage timeline. This will take a bit longer than the FLS triage timeline, but will be much faster than the Super Timeline below. Note: for this lab it won’t seem like a huge time difference; in real investigations dealing with large servers, however, the difference will be large.

Creating a Triage style Super Timeline is easy. Simply using the premade filter included with Log2timeline will generate a great timeline to effectively triage a disk image.

The filter is located at /usr/share/plaso/filter_windows.txt and is designated with the -f switch. As stated above, this filter will filter on (extract) the following items:

  • MFT
  • NTFS LogFile
  • UsnJrnl
  • Recycle bin artifacts
  • Windows Registry files
  • Recent file activity
  • Jump List Files
  • Windows Event Logs
  • Windows Artifacts
  • Prefetch files
  • Browser History Artifacts

Further enriching this data with the memory artifact timeline will create a rather robust timeline in quick order. The commands to generate a triage timeline and add the memory data follow:

log2timeline.py --status_view window -f /usr/share/plaso/filter_windows.txt dc01-triage.dump ../E01-DC01/20200918_0347_CDrive.E01 --partitions "all"

log2timeline.py --parsers="mactime" --status_view window dc01-triage.dump ./dc01-super-mem-time.body

Your results should approximate the following:

The triage timeline produced roughly 660 thousand events.

Creating the Targeted Timeline Dump

Parse winevtx, bagmru, usnjrnl, prefetch, amcache, winreg_default, and SRUM. Add the memory data with the mactime parser. Run the commands in order.

log2timeline.py --parsers="winevtx,usnjrnl,prefetch,winreg,esedb/srum" --status_view window dc01-targeted.dump ../E01-DC01/20200918_0347_CDrive.E01 --partitions "all"

log2timeline.py --parsers="mactime" --status_view window dc01-targeted.dump ./dc01-super-mem-time.body

The following is a summary of what the targeted timeline produced. As always different analysts in the future may not have the exact same results.

The targeted timeline produced roughly 770 thousand events.

Creating the Super Timeline Dump

The Super Timeline is made by parsing winevtx, MFT, prefetch, amcache, SRUM, win_gen, winreg, winreg_default, and olecf_automatic_destinations from the disk image, and then adding in events found in memory by running the mactime parser against the memory body file.

log2timeline.py --parsers="winevtx,mft,prefetch,esedb,win_gen,winreg,olecf/olecf_automatic_destinations" --status_view window dc01-super.dump ../E01-DC01/20200918_0347_CDrive.E01 --partitions "all"

log2timeline.py --parsers="mactime" --status_view window dc01-super.dump ./dc01-super-mem-time.body

The following is a summary of what the Super Timeline produced. As always, different analysts in the future may not have exactly the same results.

The super timeline produced over 2.3 million events. A good analyst had better know how to pivot through the data.

Resultant Dump Comparisons

The following screenshot shows the size variations between the 3 dump files:

The Super Timeline is three times the size of the triage timeline, and roughly double the targeted timeline. Large production servers with years of data can generate far larger dumps that take much longer to process.

Piling on the First Dump File

Analysts do not need a separate dump file for each timeline they create. Psort, the tool used to create the final timeline CSV, automatically deduplicates events. An analyst could create a light timeline into a single dump file to start. The light timeline could then be processed into a CSV. Log2timeline could begin processing the Super Timeline while the analyst is examining the light timeline.
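A rough sketch of that workflow using this lab’s paths (the single dump file name dc01-combined.dump is illustrative, not a name used elsewhere in the lab):

log2timeline.py --status_view window -f /usr/share/plaso/filter_windows.txt dc01-combined.dump ../E01-DC01/20200918_0347_CDrive.E01 --partitions "all"

psort.py dc01-combined.dump -o L2tcsv -w dc01-light-timeline.csv

While that light CSV is being reviewed, the heavier run can be started against the same dump file and a fresh CSV generated when it completes:

log2timeline.py --status_view window dc01-combined.dump ../E01-DC01/20200918_0347_CDrive.E01 --partitions "all"

psort.py dc01-combined.dump -o L2tcsv -w dc01-full-timeline.csv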

Windows 10 vs Windows 2012 (Workstation VS Server) Artifacts

Servers are purpose-built. They provide a service (hence the name). To maximize their capability to deliver these services, some Windows features that are not necessary are turned off by default. One of these is Prefetch, a performance-enhancing feature that is not turned on by default on Windows Servers. Servers and workstations may not have the same forensic residue sources.

Working with the Dump Files

Dump files are created for the purpose of creating super timelines. Dump files are actual databases, not human-readable “flat” files like a CSV or text file. Comma Separated Value (CSV) files are easily analyzed in programs like Excel, Calc, and Eric Zimmerman’s Timeline Explorer. The following tools are dedicated to working with the dump files and generating CSV-based timelines from them.

Pinfo

Pinfo is a great tool to understand the contents of a Log2timeline generated dump file. The general use of the tool looks like:

pinfo.py image.dump

For a more concise output:

pinfo.py --sections "events" image.dump

Psort

Psort is the tool used to generate a CSV timeline from a dump file. This tool has many options analysts should be familiar with. The ability to designate a particular time range of interest can be an invaluable asset. For example, if an analyst has processed an image from a server that had been running for three years, but knows the dates the incident occurred, they can create a timeline for only those dates. This drastically reduces the amount of data that ends up in the timeline.

Analysts must beware when using time slicing! Time stamps for relevant events can be off: time stamps can be manipulated, and computers often record a time stamp of zero, or null, which can be interpreted as midnight on January 1st, 1970. Time slicing could therefore drop some key events. It is unlikely a case-cracking event will be one of these “lost” events, but analysts should be aware of the possibility.
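As an illustration, a filter expression appended to the psort command can bound the output to a window of interest. The dates below are placeholders around the Case 001 incident window, and the exact filter syntax can vary between Plaso versions, so treat this as a sketch:

psort.py dc01-super.dump --output_time_zone "UTC" -o L2tcsv -w dc01-incident-window.csv "date > '2020-09-17 00:00:00' AND date < '2020-09-20 00:00:00'"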

Timing Offset

In a previous section there was a discussion about the time zones of the victim systems being set incorrectly. The victim hosts were set to Pacific Time (GMT -7) rather than the appropriate Mountain Time (GMT -6). This results in the host-based timelines being off by one hour. The network telemetry (from the PCAP) was correctly set at GMT -6. All events should be represented in the same time zone to accurately understand how they line up; network and host events will be aligned later in this lab for a brief example. Adjusting the incorrect times is no problem! We can shift the timeline from each system by manipulating the time zone of the output. Analysts can also simply make a note of the discrepancy in their report.

Recommended Method: Analysts who want a CSV with the times adjusted to match the correct time (the same as the network time) can use the following command:

psort.py dc01-super.dump --output_time_zone "Atlantic/Cape_Verde" -o L2tcsv -w dc01-super-timeline.csv

Alternatively, the following command will generate a super timeline from the dc01-super.dump file with UTC as the output time zone:

psort.py dc01-super.dump --output_time_zone "UTC" -o L2tcsv -w dc01-super-timeline.csv

The recommended command shifts all of the event times “to the left” one hour. The event times will then accurately match GMT at the time of the incident.

Breakdown:

--output_time_zone: Time zone for the output
-o: Output format
-w: Output file

Lab Timeline

The Super Timeline created above with roughly 2.3 million events will be used for the following analysis labs. Analysts are encouraged to look at the triage timeline and see if enough significant events are present in the data.

The timeline CSVs will be rather large when you create them. A CSV this size is a challenge for LibreOffice Calc. An analyst will have better results using Microsoft Excel or Eric Zimmerman’s Timeline Explorer. Timeline Explorer takes a while to load a large CSV; however, it is well worth the wait. Transporting these files is much easier when they are compressed. CSVs compress quite well, as demonstrated:
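For example, zipping the CSV before moving it between machines can be as simple as the following (the file name assumes the super timeline created above; any common archiver works):

zip dc01-super-timeline.zip dc01-super-timeline.csv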


Analyzing the Timeline

Eric Zimmerman’s Timeline Explorer (The Recommended Method)

Eric Zimmerman is an amazing member of this great community. He provides an exceptional set of tools that any serious incident responder should be familiar with. One of these tools is Timeline Explorer, a great tool for examining any CSV file. It has many capabilities purpose-built for analysts to quickly pivot through the data, such as robust search and filter features.

A quick video overview of the tool is provided for free by some of the folks at SANS. The tool is intuitive and continuously maintained by Eric Zimmerman. He is also a very approachable person, easily found on Twitter at @EricRZimmerman.

Simply open the super timeline CSV file with Eric Zimmerman’s Timeline Explorer; dragging the CSV and dropping it onto Timeline Explorer works well. Excel can also be a great tool for examining large CSVs. LibreOffice Calc is not recommended, as super timeline CSVs quickly become too large for it to handle.

General Approach

Find a starting point!

Keyword searches are a way to find a pivot point in the data. Keyword searches alone are not analysis. Analysts can use keywords to find an event of interest; from that event the analyst should look at what led up to it and what followed it. Adam Johnston wrote a great article on how to fully understand an incident using his P2FUST framework. Analysts can use this framework as a guide to understanding the components of an incident. Analysts should be trying to determine which users, processes, and so on were involved in the event.

A brief list of example indicators to get started with are:

  • Names of known malware executables
  • Known malicious PID’s from memory analysis
  • Known bad IP Addresses
  • Known compromised user accounts
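For analysts working from the SIFT command line, a quick grep against the timeline CSV can confirm whether an indicator appears at all before the file is opened in Timeline Explorer. The indicator name and IP address below are placeholders; substitute your own:

grep -i "badfile.exe" dc01-super-timeline.csv > badfile-hits.csv

grep -c "203.0.113.77" dc01-super-timeline.csv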

The goal with super timeline analysis is to bring together all the known facts to put the story of the incident together. The indicators found elsewhere in the investigation are the bones of the skeleton, and super timeline analysis adds flesh to the skeleton. Some examples of things to look for during super timeline analysis:

  • How/When did the adversary make contact with the system?
  • How/When did they make entry?
  • What did the attacker do once inside?
  • What username was the attacker using?
  • Did they create, delete, alter or access any sensitive files?
  • Did they hide any long hauls?  (Long hauls are slang for low and slow pieces of malware)
  • What persistence mechanism was used?

Windows Event IDs common to security incidents are a great place to start for analysts beginning an investigation with no known (or very few) indicators. They are also a great way to round out an investigation.

The following Windows Event IDs are an example data set commonly seen during Digital Forensics and Incident Response:

Event ID: Description

4624: Successful logon
4625: Failed logon
4634 / 4647: Successful logoff
4720: Account creation
4776: Local account authentication (NTLM). The attacker's hostname can be found in these events. Analysts should check for hostnames of remote systems in the 4776 events that immediately follow a successful RDP logon.
4672: Privileged account usage
4778: RDP session (re)connected
4779: RDP session disconnected
4648: Logon using explicit credentials (RunAs) (originator)
4768: Kerberos TGT granted (successful logon)
4769: Service ticket requested (access to a server resource)
4771: Pre-authentication failed (failed logon) (Kerberos)
4798: User's local group membership enumerated
4799: Security-enabled local group membership enumerated
5140: Network share accessed
5145: Shared object accessed (detailed file share auditing)
4688: New process created / process exit; process anomalies (evidence of vulnerability exploitation)
7045: Service installation

Scheduled task events (Task Scheduler log | Security log; the 3-digit codes appear in the Task Scheduler XML):

106 | 4698: Scheduled task created
140 | 4702: Scheduled task updated
141 | 4699: Scheduled task deleted
200 / 201: Scheduled task executed (Task Scheduler log)
4700: Scheduled task enabled (Security log)
4701: Scheduled task disabled (Security log)

Pass the Hash (aka the "Evil Trinity"): 4776 + 4624 + Logon Type 3, all at the same time, is a rare unicorn created by Pass the Hash.

Logon Types:

2: Logon via console (keyboard, server KVM, or virtual client)
3: Network logon
10: Remote interactive logon (RDP)

Analysis of DC01 Super Timeline

  • The following contains spoilers for Case 001.
  • Analysts wanting to try solving the case purely based on timeline analysis should try searching the timeline for interesting Windows Event ID’s.
  • Analysts wanting to use indicators discovered previously in the investigation should begin with file names, IP Addresses and Windows Event ID’s.
  • The following walk-through will use Eric Zimmerman’s Timeline Explorer.
  • View the color code legend under the “Help” menu in Timeline Explorer!

General Approach

Analysts should begin their search for events in the data with known indicators. These indicators could be specific to the case they are working on, or common events seen in breaches. Analysts who simply start reading at the top of the timeline and scroll down “looking for evil” will be highly ineffective and inefficient. This is not to say analysts should never explore the timeline. A great technique for exploring effectively is to find an interesting event and examine the time leading up to, and following, it.

As analysts find interesting events they should “tag” them. Analysts using Timeline Explorer can do this easily by selecting the “tag” box in the second column. Forensicators should get into the practice of taking screenshots of events that are significant or answer key questions.

Many of the same questions apply to all security incidents. A sampling of these questions are included previously in this section.

“NTFS, NTFS…” (Ol’ NTFS 2 Times)

The New Technology File System (NTFS) is the file system used by modern Windows operating systems, and it is a boon for digital forensic investigators. It tracks a lot of information regarding files and data on a hard disk. This information can be analyzed to understand when and where data lived on a disk. NTFS tracks time using two sets of attributes: $Standard_Information and $Filename. The difference between the two is an important concept for the digital forensicator.

A quick comparison:

NTFS File System Time Attributes

$Standard_Information:
+ Interacted with via the Windows API (user-mode accessible)
+ The typical timestamps we all see and interact with
+ "Higher level"
+ ID codes
+ Flags
+ Sequence numbers used by the OS

$Filename:
+ Not as easily interacted with
+ More directly related to the MFT itself
+ "Lower level"
+ Filename
+ Size
+ Record number
+ Parent directory record number

Comparing NTFS File Time Attributes

MAC Times

MAC is an acronym that stands for Modified, Accessed, and Changed time. Investigators should understand how different operating systems handle file metadata times. Windows and Linux have some slight variations in how changes to the disk are tracked.

Modified
Windows (NTFS): An event time when data was written to the file contents (it doesn't have to be different data).
Linux: An event time when data was written to the file contents (it doesn't have to be different data).

Accessed
Windows (NTFS): An event time when a file was opened for reading. *Turned off by default since Windows Vista.
Linux: An event time when a file was opened for reading.

Changed (a big difference between Linux and Windows: Linux will interpret the times of Windows files incorrectly, and vice versa)
Windows (NTFS): Change to MFT entry time. An event time when the MFT entry for the file was changed.
Linux: Change to metadata time. An event time when the metadata of the file was changed.

Born Time
Windows (NTFS): Born time, or the time the file was created.
Linux: N/A

MACB is the most common term when referring to Windows time stamps. MACB, pronounced “MAC-B”, is short for Modified (contents), Accessed (contents read), Changed (metadata changed), and Born (file creation). At times analysts may hear “MACE” time. MACE is just slightly re-aligned as Modified, Accessed, Creation, and Entry change for the MFT record of that file.

Inodes

The proximity of inodes can give investigators further insight into the data. Inodes, or index nodes, are data structures that represent a file system object such as a directory or file. As files or directories are created they are given a number to represent them. They are typically assigned the next integer in sequence and will not change as a file is moved around the system. This enables analysts to further zero in on when a file may have truly appeared. In other words, inodes will be roughly clustered together for artifacts generated by a single event, or by closely timed events. Analysts can use their understanding of inode clustering to help identify files which were time stomped. Time stomping is a technique that allows the $Standard_Information timestamps of a file to be manipulated.

Windows Time Rules from SANS Windows Forensics Poster

The following snippet from the SANS Windows Forensics Poster shows how different events affect the MACB Time stamps. White boxes mean that particular timestamp will light up for the given event. Black boxes mean that timestamp will not change and gray means the time stamp is inherited from the original file. Refer to this guide often. Text boxes showing how MACB aligns to the chart have been added to the following screen shot.

 

Following The Adversary

To follow an adversary through a network, analysts must “stitch” together many events across many types of forensic residue from multiple hosts. Analysts should also keep in mind that this activity is, at times, more like an archeological dig than tracking live prey. The tracks will not always be complete, and there may only be enough to make a well-educated guess. Analysts build experience over time that enables them to develop plausible theories as to how an adversary moved through a network and what their likely objective was. Analysts can re-acquire an adversary’s trail when it goes cold by using those theories to help “jump” gaps in the data.

Examples to Get Started

MACB Analysis Explained by Example

(and proof of Memory Timeline Enrichment working!)

There are times when one event triggers multiple signatures in the forensic residue. This “one to many” relationship between events and residue creation means there will be times when multiple lines of the super timeline share the exact same timestamp. Analysts must apply critical thinking to translate what they are seeing into a fact-based story of what likely happened.

  1. The search window. “Sauce.txt” was used to find files with sauce.txt in the name.
  2. An MFT event from memory (noted by the mactime body file) shows the creation of a file. All four times (M, A, C, B) are noted as being modified in the MFT File Name records for Szechuan Sauce.txt at 2020-09-18 21:35:43. Referencing the SANS poster above, we can see the file was created around 21:35:43 UTC on September 18, 2020.
  3. An NTFS event from the disk image also shows the creation of Szechuan Sauce.txt, as seen by the NTFS entry extracted from the disk image. It further confirms that the Szechuan Sauce.txt file was created at 2020-09-18 21:35:43 with all four times (M, A, C, B) indicated.
  4. NTFS file stat events occur around the same time the file is being created. Analysts referring to the SANS poster’s time rules will see that “.A.B” indicates a volume file move via the command line. A file was created and the operating system is doing some operations and shuffling things around.
  5. Windows conveniently created a LNK file to go with the txt file upon creation. Analysts may notice there is no “MA.B” pattern on the poster. Analysts can safely assume this is the creation of the LNK file due to a few more indicators. First, this is the first instance in the timeline of a LNK file related to Szechuan Sauce.txt. Second, it occurred immediately after the related file was created. Third, we see the LNK file with an inode of 87060; the related file creation was at the inode just prior, 87059. We also see an inode of 86968. Recall the discussion above regarding inode assignment and re-use by the operating system: inodes are not always assigned in perfect order. To summarize, analysts need only take away that the LNK file was created at this time despite the MACB pattern not aligning to the poster.

Analysts may see file names like FileShareSecretSZECHU~1.TXT. These abbreviated names (SZECHU~1.TXT) are how Windows operating systems handle “long” filenames; the abbreviated form is considered DOS friendly. This fact is important to recall when searching for filenames in the Super Timeline.

The big takeaway: Szechuan Sauce.txt was created at 2020-09-18 21:35:43! The other events are not necessary to completely decipher here.

Analysts going through this for the first time will likely feel it is complicated, perhaps even frustrating. Keep in mind that the poster above is a guide. There will be times when forensic residue does not perfectly line up with what is expected. The key is finding enough factual information to develop and support theories as to what occurred on a machine. In this example the big takeaway was the file creation time. The other events matter, but they are not of consequence at the moment. Analysts should get the takeaway and keep moving.

Known Malware: CoreUpdater.exe

Analysts must recall that long file names observed in operating system telemetry can be truncated. It was explained above how Windows will shorten names as needed. Analysts may find more results when searching for smaller substrings of the larger filename.

Malware named coreupdater.exe was found to be an indicator of interest during the PCAP and memory analysis. Analysts may find more data by searching for substrings such as coreup, coreupdat, or updater.exe. Searches conducted in super timelines can take a long time. The benefits of searching for substrings are demonstrated in the following screenshots.

Double Clicking an Event

Double clicking an event will open a window where all the information for that event is displayed. Try this on Line 1376527.

The moment the attacker began transferring malware to DC01 is captured in this event. Analysts can find this event by searching for the name coreupdat and locating the first event where macb is designated in the macb column. This event was captured in memory, where the system had stored it before writing it to disk. Analysts will see how to confirm and correlate this event with other forensic data later in this lab.

Known Adversary IP Addresses

Memory analysis revealed an IP address associated with adversary activity: 203.78.103[.]109. This is the IP address associated with the command and control that the malware was using. Searching for this IP address finds nothing in the DC01 timeline. This simply means the host didn’t observe that IP address in a way that would be captured in the telemetry.

PCAP analysis also revealed anomalous activity from 194.61.24.102. Conducting a search for this IP address returns 877 lines of results! The vast majority of these results were from EVTX files (host logs) on the system. Analysts should read through and understand these results. What does the data reveal about the attacker? Which logs captured the data?

SPOILER: The attacker was attempting a brute force! This is indicated by the high volume and high frequency of failed logon attempts against the RDP service. Were they successful? Notice the Security log is not the only log to track these RDP events; Terminal Services logs often retain RDP events longer than the Security EVTX logs. Check the 4776 events for the hostname of the attacking machine.
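A rough way to gauge that volume from the command line (a sketch only; the l2tcsv layout means a plain grep can over- or under-count, so treat the number as an approximation):

grep "194.61.24.102" dc01-super-timeline.csv | grep -c "4625"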

Known Friendly IP Addresses

Analysts should also check the logs for activity involving other hosts on the victim network. The friendly network is 10.42.85.0/24. Did the attacker make contact with another machine on that network and access it?

Correlation Between Different Hosts

Determining how attackers moved through networks during incidents involving multiple hosts can be challenging. Understanding how, and when, they moved laterally requires an understanding of how residue is generated on the source and the destination for the same event. A great way to understand this concept is demonstrated on the SANS “Hunt Evil” Poster. An excerpt of this poster is shown below:

Understanding how an adversary broke into a system is not enough. Analysts must hunt for any events where the adversary launched to another system. The example highlighted above shows the residue found in two different logs on a system when a user connects to a remote machine with the Remote Desktop Protocol (RDP). Searching for 4648 Security events, and for 1024 and 1102 RDP Client events, can reveal when an attacker connects to another machine in the network. In other words, analysts are attempting to find when an attacker on VictimA is pivoting to VictimB. This is known as lateral movement, or pivoting. Analysts should try searching for 4776, 4648, 1024, and 1102 in the Super Timeline for RDP-based lateral movement. The SANS “Hunt Evil” poster has many more lateral movement techniques for analysts to study.

Network and Host Correlation

Analysts with the luxury of having full Packet Captures from a network at the time of incident should refer to this data often. Network telemetry enables analysts to confirm an event with multiple sources. Forensicators will also be able to bounce between the host and network data to gain new insights into an event, or enrich a previously discovered event with new information.

One example of this can be seen in this lab. The initial contact with the malware was shown through host logs as occurring at 2020-09-19 02:24:06z. Looking at the data in the Packet Capture at this same point in time confirms the finding.

Analysts can easily search for keywords in network traffic in Wireshark. Simply select the magnifying glass icon, select String in the drop-down to the right, and Packet Bytes in the drop-down to the far left.

  1. Search for coreupdater.
  2. Packet Bytes is selected.
  3. Frame 238565 is where the HTTP stream begins.
  4. Note the time, 02:24:06, matches the time of the corresponding event in the logs.
  5. A GET request for the coreupdater.exe file was made.
  6. The site the file was pulled from was http://194.61.24.102/.
  7. The user-agent string indicates the attacker was using Internet Explorer.
  8. The host was 194.61.24.102.
  9. The fully assembled web request was http://194.61.24.102/coreupdater.exe
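An alternative to the Find dialog is a Wireshark display filter, which can also be run from the command line with tshark. The capture file name below is a placeholder:

http.request.uri contains "coreupdater"

tshark -r case001.pcap -Y 'http.request.uri contains "coreupdater"'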

This is a great example of matching up events in different types of artifacts. Analysts can have a lot of confidence in these findings: both the network and host telemetry show coreupdater.exe was downloaded to the Domain Controller by the Administrator account at 02:24:06z on 19 September 2020.

Go For It

Analysts now have the tools and knowledge to investigate the entirety of this incident! Analysts should concentrate on determining what the attacker did from the time they made contact to the time they left. Some key events analysts should look for are listed below.

Key Findings

Analysts hopefully found the following answers:

  • How/When did the adversary make contact with the system?
  • How/When did they make entry?
  • What did the attacker do once inside?
  • What username was the attacker using?
  • What was the name of the attacker’s machine?
  • Did they hide any long hauls? (Long hauls are slang for “low and slow” pieces of malware, meaning malware that only calls home on rare occasions.)
  • What persistence mechanism was used, if any?
  • Did the attacker move laterally to any other machines in the network?
  • Was any malware installed with persistence (is it able to survive a reboot)?
  • Did the attacker steal any data?
  • Did the attacker manipulate any data?
  • Did the attacker delete any files?

Conclusion and Recap for Super Timeline Analysis

This lab walked analysts through extracting memory observed events into a body file. Events were then extracted from the disk image and combined with the memory events. These combined events were sorted, deduplicated, and output to a CSV. This CSV file was then analyzed using Eric Zimmerman’s Timeline Explorer. Analysts were instructed to pivot through the data using key events and phrases. Additionally, analysts were shown how to cross correlate events observed on disk and in the captured network traffic.

Choose Your Next Move (These should have been done first)

Let’s make a timeline in TimeSketch. (Coming later this year.)

I want to look at the PCAP

I want to look at the AutoRuns

I want to examine the memory image

What are the answers?! Keep in mind these need some fine tuning. They will be updated for more accuracy as I have time to process the artifacts a bit deeper.

Don’t forget to leave any thoughts or questions you have on Super Timelines in the comments.
