* Winner of Best Book Bejtlich read in 2008!
* Authors have investigated and prosecuted federal malware cases, which allows them to provide unparalleled insight to the reader.
* First book to detail how to perform "live forensic" techniques on malicious code.
* In addition to the technical topics discussed, this book also addresses the critical legal ramifications and requirements governing the subject matter.
Solutions in this chapter:
* Building Your Live Response Toolkit
* Volatile Data Collection Methodology
* Current and Recent Network Connections
* Collecting Process Information
* Correlate Open Ports with Running Processes and Programs
* Identifying Services and Drivers
* Determining Scheduled Tasks
* Collecting Clipboard Contents
* Non-Volatile Data Collection from a Live Windows System
* Forensic Duplication of Storage Media on a Live Windows System
* Forensic Preservation of Select Data on a Live Windows System
* Incident Response Tool Suites for Windows
This chapter demonstrates the value of preserving volatile data, and provides practical guidance on preserving such data in a forensically sound manner. The value of volatile data is not limited to process memory associated with malware, but can include passwords, Internet Protocol (IP) addresses, Security Event Log entries, and other contextual details that can provide a more complete understanding of the malware and its use on a system.
In a powered-up state, a subject system contains critical ephemeral information that reveals the state of the system. This volatile data is sometimes referred to as stateful information. Incident response forensics, or live response, is the process of acquiring the stateful information from the subject system while it remains powered on. As we discussed in the introductory chapter, the Order of Volatility should be considered when collecting data from a live system to ensure that critical system data is acquired before it is lost or the system is powered down. Further, because the scope of this chapter pertains to live response through the lens of a malicious code incident, the preservation techniques outlined in this section are not intended to be comprehensive or exhaustive, but rather to provide a solid foundation relating to malware on a live system.
Often, malicious code live response is a dynamic process, with the facts and context of each incident dictating the manner and means by which the investigator proceeds. Unlike other forensic contexts, wherein simply acquiring a forensic duplicate image of a subject system's hard drive would be sufficient, investigating a malicious code incident on a subject system will almost always require live response to some degree. This is because much of the information the investigator needs to identify the nature and scope of the malware infection resides in stateful information that will be lost when the computer is powered down.
This chapter provides an overall methodology for preserving volatile data on a Windows system during a malware incident, and uses case scenarios to demonstrate the collection process as well as the strengths and shortcomings of the data acquired in this process.
Building Your Live Response Toolkit
When conducting live response forensics, it is paramount to use known, trusted tools to acquire data from the target system. Because the target system has potentially been compromised, we cannot rely upon its native programs, dependencies, and system files to conduct our examination, as the attacker may have modified these files as well. As a result, we need to select the tools we intend to use during live response and determine the linked libraries and other modules that each tool invokes. We can then copy all of the required dependencies to our live response CD, in the same directories as the associated tools, to potentially reduce system interaction and limit invoking potentially compromised files that could taint the reliability of our examination. We need to emphasize that this may only potentially reduce interaction with the operating system: although most executables seek dependencies in the directory from which they were invoked, executables on newer versions of Windows (XP and later) also look in locations specified by the operating system.
In addition to potentially reducing interaction with the host system, it is helpful to identify and document the dependencies of the tools for the purpose of determining files accessed and system changes made as a result of using the tools. You can identify the file dependencies of a tool by loading it into a Portable Executable file analysis tool like Dependency Walker (depends.com) or PEView, as shown in Figure 1.1.
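When a graphical dependency viewer is not at hand, a crude first approximation of this step is to scan the binary for embedded DLL name strings, since the import table stores dependency names as ASCII. The sketch below is only a triage heuristic, not a substitute for a proper PE import-table parser such as Dependency Walker; the minimum string length is an arbitrary choice.

```python
import re

def guess_dll_dependencies(path, min_len=5):
    """Crudely list DLL names embedded in a binary by scanning for
    printable ASCII runs that end in '.dll' (case-insensitive).
    A true PE import-table parser is more reliable; this is triage only."""
    with open(path, "rb") as f:
        data = f.read()
    # At least min_len printable characters, lazily expanded, ending in .dll
    pattern = re.compile(rb"[ -~]{%d,}?\.dll" % min_len, re.IGNORECASE)
    return sorted({m.group().decode("ascii").lower()
                   for m in pattern.finditer(data)})
```

Any names recovered this way should then be confirmed in a real PE analysis tool before the DLLs are copied to the response media.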
Since many of the tools used for incident response may also be used by attackers, it is necessary to mark our tools in some way to differentiate them. An obvious approach is to change the names of the executables, but it is also recommended to insert some data, such as your initials, in each executable. This can be achieved using a hex editor and adding the text to an area of the header that will not impact the operation of the tool. For instance, to differentiate a digital investigator's PRCView utility discussed later in this chapter, open the executable in a hex editor, and add a few distinctive bytes at offset 600 immediately following the PE header. Running the tool after this modification will ensure that the marking process did not break the executable. For each tool, keeping a note of the mark that was entered, the original filename (pv.exe) and hash (5daf7081a4bb112fa3f1915819330a3e), along with the new filename (ec-pv.exe) and hash (88a2cacaa309bcc809573a239209e2a6) allows for later identification.
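The marking-and-inventory step can be scripted. The following sketch is illustrative (the two-byte marker, offset 600, and file names are assumptions matching the PRCView example above, not part of the original procedure): it copies the tool, overwrites bytes at the chosen offset, and records before/after MD5 hashes for the toolkit inventory.

```python
import hashlib
import shutil

def md5_of(path):
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def mark_tool(original, marked_copy, marker=b"EC", offset=600):
    """Copy a trusted tool and overwrite bytes at `offset` with `marker`,
    returning an inventory record of names and hashes for later
    identification. Caution: the offset must fall in unused header
    padding, or the executable will be corrupted -- always re-run the
    tool afterward to confirm the marking did not break it."""
    shutil.copyfile(original, marked_copy)
    with open(marked_copy, "r+b") as f:
        f.seek(offset)
        f.write(marker)
    return {
        "original": original, "original_md5": md5_of(original),
        "marked": marked_copy, "marked_md5": md5_of(marked_copy),
        "marker": marker.decode("latin-1"), "offset": offset,
    }
```

The returned record can be appended to a toolkit log so that the marked binaries and their hashes are documented before deployment.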
Once you've selected your tools, obtained the required dependencies, and marked the binaries with a distinctive signature, you'll need to choose the appropriate media to copy your toolkit to and deploy from. Many malware analysts and first responders choose to keep their trusted tools on a CD to minimize interaction with the system and to ensure that the tools themselves do not become infected with any malware that may be on the system being analyzed, whereas others prefer to deploy the tools from a thumb drive or external hard drive, because the media will also serve as the repository for the collected results. For instance, a high-capacity thumb drive (4 to 8 gigabytes) or external hard drive for live response data acquisition can serve as a practical receptacle for the data, including a full system memory dump image.
Much of this decision will come down to whether you intend to collect the live system data locally or remotely. Collecting results locally means you are connecting storage media to the subject system and saving the results to the connected media. Conversely, remote collection means that you are establishing a network connection, typically with a netcat or cryptcat listener, and transferring the acquired system data over the network to a collection server. The latter method reduces system interaction but relies on being able to traverse the subject network through the ports established by the netcat listener. The following pair of commands sends the output of PRCView from a subject system to a remote IP address (172.16.131.32) and saves the output in a file named "pv-e-20080430-host1.txt" on the collection system. The netcat command must be executed on the collection system first so that it is ready and waiting to receive data from the subject system.
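A representative pair of commands might look like the following (the listener port 13579, the tool path, and the renamed ec-pv.exe binary are illustrative assumptions; substitute your own port and marked tool names):

```shell
# Step 1 -- on the collection system (172.16.131.32): start the netcat
# listener first, redirecting whatever arrives into the output file.
nc -l -p 13579 > pv-e-20080430-host1.txt

# Step 2 -- on the subject system: run the trusted, marked PRCView binary
# from the toolkit media and pipe its output across the network.
D:\tools\ec-pv.exe -e | nc 172.16.131.32 13579
```

Once the sending command completes, terminate the listener and hash the resulting file so its integrity can be verified later.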
Remote forensics tools are also available that enable digital investigators to obtain volatile data from remote systems, as discussed later in this chapter.
In some instances the subject network has rigid firewall and proxy server configurations, making it cumbersome or impractical to establish a remote collection repository. Further, acquiring an image of a subject system's physical memory during live response may entail transferring several gigabytes of data over the network (depending on the amount of random access memory (RAM) in the system), which can be time- and resource-consuming. The best bet in this regard is to design your live response toolkit with flexibility so that you can adjust and adapt your acquisition strategy quickly and effectively. Throughout this chapter we will discuss the implementation and purpose of numerous tools that can be used for live response data collection through the lens of a malicious code case scenario. After learning about the value and shortcomings of these individual tools, we will explore incident response tool suites at the end of the chapter.
Testing and Validating your Tools
After selecting the tools that you will incorporate in your live response toolkit, it is strongly recommended that you run the tools on a test system to identify the data they will collect and, just as important, the artifacts, or "digital footprint," they leave on the system. Identifying and documenting the data that the tools acquire, along with the artifacts that the tools leave behind, is important for explaining time stamp changes or other system modifications identified during your post-mortem analysis of the subject system. Similarly, when using netcat or remote forensics tools to acquire data, documenting the clock offset between the subject and collection systems will help correlate acquisition events with any changes on the subject system.
Perhaps the most efficient means to create a testing and validation system for your toolkit is through a virtual system, such as VMware or VirtualBox, as this software allows the user to take "snapshots," so that the system can be reverted to its original pristine state after being modified. Using this method, the system can be reused throughout the tool testing and validation process.
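With VirtualBox, for example, the snapshot/revert cycle can be driven from the command line (the VM name "IR-Test-XP" and snapshot name are illustrative assumptions):

```shell
# Take a baseline snapshot of the clean test VM before running any tools
VBoxManage snapshot "IR-Test-XP" take "clean-baseline"

# ... deploy and observe the live response tools inside the VM ...

# Power off and revert to the pristine baseline before the next test run
VBoxManage controlvm "IR-Test-XP" poweroff
VBoxManage snapshot "IR-Test-XP" restore "clean-baseline"
```

Scripting the revert step this way keeps each tool's footprint isolated, so artifacts observed in one test run cannot bleed into the next.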
Once you have established your baseline testing environment, consider implementing system monitoring tools to identify system changes that occur as a result of deploying your trusted incident response tools. To accomplish this, there are a variety of tools that help monitor system behavior.
System/Host Integrity Monitoring
One consideration is to implement system integrity monitoring software such as Winalysis (as depicted in Figure 1.2) or InstallSpy, which allows the investigator to take a snapshot of the target system, establishing a baseline system environment, and to be notified of any subsequent system changes. Winalysis is a program that allows you to save a snapshot of a subject system's configuration, and then monitor for changes to files, the registry, users, local and global groups, rights policy, services, the scheduler, volumes, and shares resulting from software installation or unauthorized access. Similarly, InstallSpy is a system integrity monitor that tracks any changes to the registry and file system and also records when a program is installed or run. We'll revisit the uses of InstallSpy, Winalysis, and other system integrity monitoring tools in Chapter 9, where we discuss creating a baseline environment for dynamic analysis of malware specimens.
For more granular control over observing system changes, such as file system and registry changes that occur as a result of running tools from your live response toolkit, both File Monitor (FileMon) and Registry Monitor (RegMon), shown in Figure 1.3, can be used to capture real-time file system and registry changes. Similarly, Process Monitor (for Windows XP SP2 and above), depicted in Figure 1.4, combines the capabilities of FileMon and RegMon and displays real-time file system, registry, and process activity.
Once you have created and validated your live response toolkit, the next step is to examine the methodology by which data will be collected from a subject system during live response.
As previously mentioned, the methodology and techniques outlined in this section are not intended to be comprehensive or exhaustive, but rather to provide a solid foundation relating to malware on a live system.
Volatile Data Collection Methodology
As discussed in the Introduction chapter, data should be collected from a live system in the order of volatility. The following guidelines are provided to give a clearer sense of the types of volatile data that can be preserved to gain a better understanding of the malware.
* On the compromised machine, run a trusted command shell from an incident response toolkit
* Document system date and time, and compare it to a reliable time source
* Acquire contents of physical memory
* Gather hostname, user, and operating system details
* Gather system status and environment details
* Identify users logged onto the system
* Inspect network connections and open ports
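The steps above can be scripted so that each trusted tool runs in order and its output is logged with a timestamp, which also documents when each item was acquired. The sketch below is a minimal runner; the Windows commands listed as defaults are common illustrative choices, not a complete or mandated set, and on a real subject system they would point at the trusted, marked binaries on the response media rather than native system executables.

```python
import datetime
import subprocess

# Steps roughly follow the order of volatility; adapt to your own toolkit.
DEFAULT_STEPS = [
    ("system-date-time", ["cmd", "/c", "date /t & time /t"]),
    ("hostname",         ["hostname"]),
    ("logged-on-users",  ["psloggedon"]),       # Sysinternals tool, if present
    ("network-conns",    ["netstat", "-ano"]),
]

def run_collection(steps, log_path):
    """Run each (label, argv) step in order, appending its output to
    log_path with a UTC timestamp so results can be correlated with
    system artifacts during post-mortem analysis."""
    with open(log_path, "a", encoding="utf-8") as log:
        for label, argv in steps:
            stamp = datetime.datetime.utcnow().isoformat()
            log.write(f"=== {label} @ {stamp}Z ===\n")
            try:
                out = subprocess.run(argv, capture_output=True,
                                     text=True, timeout=60)
                log.write(out.stdout + out.stderr + "\n")
            except OSError as exc:  # tool missing from the media, etc.
                log.write(f"[error running {argv}: {exc}]\n")
    return log_path
```

Writing the log directly to the response media (or piping it through a netcat listener) keeps the collected state off the subject system's disk.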
Excerpted from Malware Forensics by James M. Aquilina, Eoghan Casey, and Cameron H. Malin. Copyright © 2008 by Elsevier, Inc. Excerpted by permission of Syngress Publishing, Inc. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.