Containerize Your IR Timelining
September 21, 2021
Here at Accenture Security, we believe in enabling our people with the tools and processes needed to accomplish objectives, while also lowering the barrier to entry. At its core, Incident Response (IR) success hinges on placing events at specific times. Artifacts come in an endless variety of source types, and each may be critical to building a complete picture. Timelining technology can process artifacts from an enormous list of source types and order the resulting data chronologically. Using a complete timeline, we can start at a known event and walk forwards and backwards through time – a critical capability for our Cyber Investigations, Forensics & Response (CIFR) team during intrusion analysis efforts. Because timeline outputs are so large, however, they can be difficult to peruse without a pivot event or timeframe to start from.
Timeline analysis provides critical investigative insight into many Incident Response engagements for our CIFR team. We use the open-source tool Plaso to parse and timeline disk images, logs, and a slew of individual OS artifacts. Plaso provides a high-fidelity, chronological timeline of events that we use to pivot around noted findings or a specific time frame. Since incident response is an iterative process, being able to quickly apply new facts to collected artifacts and metadata is crucial.
We use Docker to deploy consistent versions of Plaso as a container image to any environment, without worrying about version sprawl or memorizing command arguments. We remove all extraneous packages and their dependencies to reduce the size and attack surface of the container image.
Install docker.io to ready the runtime and tools.
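On Debian or Ubuntu hosts, for example, that is a single package install (adjust for your distribution):
sudo apt-get update && sudo apt-get install -y docker.io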
Clone our repository and trigger the Dockerfile build:
git clone https://github.com/Accenture/docker-plaso.git
cd docker-plaso
make build
A successful build ends with Docker reporting that the image was built and tagged.
View your new local image.
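The freshly built image shows up in the standard Docker listing; the repository name and tag depend on the Makefile's defaults:
docker images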
If you want to use our published container image without building a local version, use the Docker Hub image named accenturecifr/plaso.
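If you go that route, pulling the published image is a single command:
docker pull accenturecifr/plaso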
We’ll use the docker-plaso/Makefile targets for this so that we don’t have to assemble the command lines manually. Within the docker-plaso directory, add or edit make_env to set static variables that override those in the Makefile, so that the Makefile targets know your local artifact directory, the artifact name, and your output directory.
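A minimal sketch of what make_env might contain follows. EVIDENCE_FILE and OUTPUT are the variable names referenced later in this post; EVIDENCE_DIR is an assumed name, so confirm the exact variables against the repository's Makefile:
# make_env: local overrides picked up by docker-plaso/Makefile
# EVIDENCE_DIR is an assumed variable name; confirm against the Makefile
EVIDENCE_DIR = /cases/2021-09/artifacts
EVIDENCE_FILE = triage_collection.zip
OUTPUT = /cases/2021-09/plaso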
With variables set, pick from the following list of targets.
Make target | Description
build | Build the Docker image
build-nocache | Build the Docker image without using the Docker build cache
log2timeline | Run log2timeline against $EVIDENCE_FILE
psort-analysis | Enrich an existing Plaso file using the psort analysis plugin
psort-csv | Run psort against $EVIDENCE_FILE.plaso, producing CSV output
psort | Run psort against $EVIDENCE_FILE.plaso, producing JSON output
pinfo | Run pinfo against $EVIDENCE_FILE.plaso
First, start with the log2timeline target.
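With make_env populated, the run reduces to a single target invocation from within the docker-plaso directory:
make log2timeline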
Check your chosen output directory, set via the OUTPUT variable, for new Plaso output. If the file is zero bytes or does not exist, check for errors written to stdout.
Next, run psort and check for a new JSON-format file. Switch to psort-csv if you are using a tool that requires CSV-structured input.
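Both are driven through make, just like log2timeline:
make psort
# or, if you need CSV:
make psort-csv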
Pinfo produces statistics concerning the Plaso output file and is useful for debugging and determining if a particular artifact was not processed.
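It runs the same way, through its make target:
make pinfo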
The above methods are examples of "do all the things" analysis, which is not always necessary or prudent. On larger data sets, it is advisable to reduce the volume of data processed, both to save time and to cut storage costs. The output file is often too large to bring into desktop tools such as Excel or Timeline Explorer.
One method is to restrict the parsers used to those targeting the artifacts that provide the greatest investigative value. For Windows-based investigations, we often narrow the parser set to reduce the data volume while still providing high-fidelity answers. To do this, add a --parsers selection to the log2timeline.py arguments in the Makefile.
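Our exact parser selection is not reproduced here, so treat the list below purely as an illustration of the mechanism: log2timeline.py's --parsers option takes a comma-separated parser filter expression, and the names shown are common Windows-focused Plaso parsers (confirm the available names against your Plaso version's documentation):
--parsers "winevtx,winreg,prefetch,lnk,filestat,mft"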
Another method is to limit which files are collected. Plaso ships with a default filter file for use against Windows disk images; edit it if it excludes files of interest. The additional argument for log2timeline.py is:
-f /usr/share/plaso/filter_windows.txt
Once the Plaso storage file is generated, what gets written to your output file can be restricted to a specific date and time range by passing a filter expression to psort.py.
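As a sketch only (the dates are placeholders, and the exact filter grammar should be verified against the psort.py documentation for your Plaso version), append a filter expression to the psort.py arguments, for example:
"date > '2021-09-01 00:00:00' AND date < '2021-09-21 00:00:00'"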
The Makefile and make_env allow for variable setup and simple make <target> based execution. Using this method enforces timeline consistency, simplifies command-line preparation, and increases readability. We hope you find it useful.
Disk Performance: use disk caching (only needed on Macs and Windows machines) when processing disk images such as VMDKs and E01s.
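A minimal sketch of what that can look like on a docker run command line; the host paths, container paths, and trailing Plaso arguments are placeholders to adapt to whatever mounts the Makefile actually sets up, and the image name is the published Docker Hub name (a locally built tag may differ):
# ':cached' relaxes bind-mount consistency on Docker Desktop (macOS/Windows);
# the paths and Plaso arguments below are illustrative only
docker run --rm \
    -v "/cases/evidence:/data:cached" \
    -v "/cases/output:/output:cached" \
    accenturecifr/plaso \
    log2timeline.py --storage-file /output/evidence.plaso /data/disk.E01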
Memory allocation: we suggest increasing the memory that Docker can use. By default on macOS, Docker only allocates 2GB to the VM. It’s a quick change via the GUI: https://docs.docker.com/docker-for-mac/
Speed through limiting data sets: there is a Windows filter file in the container image that restricts collection to a curated set of high-value Windows artifact paths (add “-f /usr/share/plaso/filter_windows.txt” to the log2timeline.py command line).
Having this as an easy button lowers the barrier to entry for timeline newcomers. Docker-plaso also helps provide a consistent platform the entire team can use. Point this supertimeliner at your big folder of random artifacts, grab a coffee, and let the code extract and sort the events.
Happy hunting!
The information in this blog post is general in nature and does not take into account the specific needs of your organization, which may vary and require unique action. Accenture makes no representation that it has vetted or otherwise endorses any tools referenced and Accenture disclaims any liability for their use, effectiveness or any disruption or loss arising from use of these tools.
Accenture Security is a leading provider of end-to-end cybersecurity services, including advanced cyber defense, applied cybersecurity solutions and managed security operations. We bring security innovation, coupled with global scale and a worldwide delivery capability through our network of Advanced Technology and Intelligent Operations centers. Helped by our team of highly skilled professionals, we enable clients to innovate safely, build cyber resilience and grow with confidence. Follow us @AccentureSecure on Twitter or visit us at www.accenture.com/security.
Accenture, the Accenture logo, and other trademarks, service marks, and designs are registered or unregistered trademarks of Accenture and its subsidiaries in the United States and in foreign countries. All trademarks are properties of their respective owners. All materials are intended for the original recipient only. The reproduction and distribution of this material is forbidden without express written permission from Accenture. The opinions, statements, and assessments in this report are solely those of the individual author(s) and do not constitute legal advice, nor do they necessarily reflect the views of Accenture, its subsidiaries, or affiliates. Given the inherent nature of threat intelligence, the content contained in this article is based on information gathered and understood at the time of its creation. It is subject to change. Accenture provides the information on an “as-is” basis without representation or warranty and accepts no liability for any action or failure to act taken in response to the information contained or referenced in this report.
Copyright © 2021 Accenture. All rights reserved.