Monday, January 22, 2007

Notes During the Investigation....

So I am just sitting down to start an investigation and getting out my notebook and pencil to jot down notes when suddenly the light bulb goes off: why not try TiddlyWiki? I blogged a little while ago about using it to keep specific information so you would not have to search for it later, but I was not thinking about using it to keep my notes during an investigation.

For the reports I write I have 4 sections: Results/Things Found, Opinion, Steps Taken and Technical Explanations. So instead of writing things down in my notebook I started typing in TiddlyWiki. I make each thing I find its own tiddler, with a tag for the report section it belongs in. As I did this I found out how much simpler it is than writing in a notebook. I can easily cut and paste things, keep everything in a timeline so I know when I ran across each item, and search and make references to other sections. I can also use this as a template for the next investigation I have; it can be the start of a very detailed and comprehensive checklist.

Now I know some people will argue about the need for a checklist, but I think it is a good idea to have one. I don't know about you, but there is so much information out there that remembering it all is sometimes just too much. I think it is better to have tiddlers of things to look at; if one does not apply to the case, say so and move on (with the many flavors of OS's there will be steps you do for some OS's and not others). If a lawyer wants to know why you did not do a specific step, then your notes should say why ("The OS was Win98, so that is why I did not search the Restore Point directory"). In a former life, when I had some pretty big system implementations to do, I always had a checklist to follow. It made sure I did not forget anything, and I could use it as documentation the next time I did an upgrade to the system, since upgrades were a few years apart.

Questions/Comments/Suggestions???

Friday, January 12, 2007

To DB or Not To DB The Report

As requested, I have uploaded a sample program (create_report.zip) that creates a comma-delimited file by executing a SQL statement. The program expects an ini-style file to be passed to it as an argument. create-report.pl is the program and sql-report.txt is the ini file. create-report.pl reads the file passed to it (sql-report.txt); each line has 3 parameters (database file, output file and SQL text). Each SQL statement gets parsed, executed and written to a file. I chose to create a comma-delimited file because that is the easiest; you could create any type of output you want. With this program, all you have to do is edit or create new ini files for each database you have.
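For anyone who does not have Perl handy, the same idea can be sketched in a few lines of Python using the built-in sqlite3 and csv modules. This is my own illustration of the approach, not the create-report.pl code; the three-parameters-per-line file format is taken from the description above.

```python
import csv
import sqlite3

def run_report(ini_path):
    """Read an ini-style file where each line holds three
    comma-separated parameters -- database file, output file,
    and SQL text -- then run each query against the database
    and write the result rows out as a comma-delimited file."""
    with open(ini_path) as ini:
        for line in ini:
            line = line.strip()
            if not line:
                continue
            # maxsplit=2 so commas inside the SQL text survive
            db_file, out_file, sql_text = line.split(",", 2)
            conn = sqlite3.connect(db_file)
            cur = conn.execute(sql_text)
            with open(out_file, "w", newline="") as out:
                writer = csv.writer(out)
                # header row from the cursor's column descriptions
                writer.writerow([d[0] for d in cur.description])
                writer.writerows(cur.fetchall())
            conn.close()
```

As with the Perl version, adding a report is just a matter of adding a line to the ini file; no code changes needed.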

Questions/Comments/Suggestions?

Thursday, January 11, 2007

To DB or Not To DB.........

Man, do I really love to use databases. When you have a decent database and a good design there is nothing you cannot accomplish. Now when I say databases you are probably thinking Oracle, DB2, SQL Server, MySQL, etc. Those are all great databases with rich features, but I am thinking more along the lines of an embedded database. What I usually use is SQLite, an embedded relational database that is small and fast and supports most of SQL92. By combining SQLite and Perl I can do many things. Some examples are as follows:

Store data from log files and report on them based on different criteria.

Load data and use SQL to generate commands, e.g. load up file names and then use SQL to generate rename commands for the files.

Load multiple log files and types and correlate the data into a comprehensive report.
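As a quick illustration of the second example, here is a small sketch using Python's built-in sqlite3 module (the file names are made up): load file names into a table, then let SQL's string concatenation build the rename commands.

```python
import sqlite3

# In-memory database: load a list of file names, then let SQL
# generate one rename command per row.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name text)")
conn.executemany("INSERT INTO files VALUES (?)",
                 [("report.txt",), ("notes.txt",)])

# || is SQLite's string concatenation operator.
cmds = [row[0] for row in conn.execute(
    "SELECT 'rename ' || name || ' ' || name || '.bak' "
    "FROM files ORDER BY name")]
for c in cmds:
    print(c)
```

The output is a ready-to-run batch of commands, which is exactly the trick: the database does the string handling so you do not have to.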

I will now show you what I am talking about. I will use one of Harlan Carvey's CPAN scripts that reads the event logs. I will take the lsevt3.pl program and make a few modifications to insert the records into a SQLite database. The initial program looks like this:

#! c:\perl\bin\perl.exe

use strict;
use File::ReadEvt;

my $file = shift || die "You must enter a filename.\n";
die "$file not found.\n" unless (-e $file);

my $evt = File::ReadEvt::new($file);
my %hdr = ();
if (%hdr = $evt->parseHeader()) {
    # no need to do anything...
}
else {
    print "Error : ".$evt->getError()."\n";
    die;
}

my $ofs = $evt->getFirstRecordOffset();

while ($ofs) {

    my %record = $evt->readEventRecord($ofs);
    print "Record Number : ".$record{rec_num}."\n";
    print "Source        : ".$record{source}."\n";
    print "Computer Name : ".$record{computername}."\n";
    print "Event ID      : ".$record{evt_id}."\n";
    print "Event Type    : ".$record{evt_type}."\n";
    print "Time Generated: ".gmtime($record{time_gen})."\n";
    print "Time Written  : ".gmtime($record{time_wrt})."\n";
    print "SID           : ".$record{sid}."\n" if ($record{sid_len} > 0);
    print "Message Str   : ".$record{strings}."\n" if ($record{num_str} > 0);
    print "Message Data  : ".$record{data}."\n" if ($record{data_len} > 0);
    print "\n";

    # length of record is $record{length}...skip forward that far
    $ofs = $evt->locateNextRecord($record{length});
    # printf "Current Offset = 0x%x\n",$evt->getCurrOfs();
}
$evt->close();


One of the programs I use to create the database is SQLiteSpy. It is a nice GUI for creating and viewing the data that you load into the database. What I did was create a table with the following definition:

CREATE TABLE events
( file_name text,
Record_Number number,
Source text,
Computer_Name text,
Event_ID number,
Event_Type text,
Time_Generated text,
time_generated_unix number,
Time_Written text,
time_written_unix number,
SID text,
Message_Str text,
Message_Data text);

You can compare this definition to the %record hash in the lsevt3.pl script. I have added 3 extra columns to make the table more flexible:

file_name is the event file name being loaded; this allows multiple event logs to be inserted into the database.

time_generated_unix and time_written_unix were added to allow for easier selecting and sorting of timestamps.
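If you would rather script the table creation than click through SQLiteSpy, the same definition can be run through Python's built-in sqlite3 module. This is just a sketch; the index at the end is my own addition to speed up queries against the timestamp column.

```python
import sqlite3

# Use "events.db3" instead of ":memory:" for a file-backed database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE events
    ( file_name text,
      Record_Number number,
      Source text,
      Computer_Name text,
      Event_ID number,
      Event_Type text,
      Time_Generated text,
      time_generated_unix number,
      Time_Written text,
      time_written_unix number,
      SID text,
      Message_Str text,
      Message_Data text)""")
# Optional: an index makes time-range selects and sorts much faster.
conn.execute("CREATE INDEX idx_time_gen ON events (time_generated_unix)")
```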

The following is the changed lsevt3 program that does the inserts into the database (the database-related additions are marked with comments):

#! c:\perl\bin\perl.exe

use strict;
use File::ReadEvt;

use DBI;
use DBD::SQLite;

# Attributes to pass to DBI so we can check for errors manually
my %attr = (
    PrintError => 0,
    RaiseError => 0
);

# Create the connection to the database
my $dbh = DBI->connect("dbi:SQLite:events.db3","","",\%attr);

my $file = shift || die "You must enter a filename.\n";
die "$file not found.\n" unless (-e $file);

my $evt = File::ReadEvt::new($file);
my %hdr = ();

my $sid = "";
my $message = "";
my $data = "";

if (%hdr = $evt->parseHeader()) {
    # no need to do anything...
}
else {
    print "Error : ".$evt->getError()."\n";
    die;
}

my $ofs = $evt->getFirstRecordOffset();

# Wrap the inserts in one transaction so they run in batch mode
$dbh->do("BEGIN TRANSACTION");

while ($ofs) {

    my %record = $evt->readEventRecord($ofs);

    # Convert the optional fields, defaulting to empty strings
    $sid     = ($record{sid_len}  > 0) ? $record{sid}     : "";
    $message = ($record{num_str}  > 0) ? $record{strings} : "";
    $data    = ($record{data_len} > 0) ? $record{data}    : "";

    # Human-readable versions of the two timestamps; the raw
    # unix values go into the *_unix columns
    my $time_gen = scalar gmtime($record{time_gen});
    my $time_wrt = scalar gmtime($record{time_wrt});

    # Insert the record into the events table. Use prepare and
    # execute with placeholders to handle quotes in the string fields.
    my $sql_stmt = qq{INSERT INTO events VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)};
    my $sth = $dbh->prepare($sql_stmt);
    $sth->execute($file, $record{rec_num}, $record{source}, $record{computername},
                  $record{evt_id}, $record{evt_type}, $time_gen, $record{time_gen},
                  $time_wrt, $record{time_wrt}, $sid, $message, $data);

    # Check for any errors in the insert statement, ignoring
    # duplicate-record errors
    my $err_desc = $dbh->errstr() || "";
    if (($err_desc ne "") && ($err_desc !~ m/not\sunique/)) {
        print "Error in Database $err_desc\n";
        print "loading Record ".$record{rec_num}."\n";
    }

    # length of record is $record{length}...skip forward that far
    $ofs = $evt->locateNextRecord($record{length});
    # printf "Current Offset = 0x%x\n",$evt->getCurrOfs();
}

# Commit the batch
$dbh->do("COMMIT");

$evt->close();


By running this program from the command line (lsevt3_db.pl Sysevent.evt), the events will be loaded into the SQLite database. You can then load multiple event logs into the table and report on them through SQLite.

The following is an example of a query to show when the Removable Storage Service wrote to the event log:

select * from events where source like 'Remov%';

or

To show when the computer was started and stopped:

select * from events where event_id in (6009, 6006) order by time_generated_unix desc;

If you were to add the application events, then you could see everything that happened during a specific time period as well (now you can see why the unix time is important to have, since it is much easier to select on and sort by):

select * from events where time_generated_unix between 1168484317 and 1168516719 order by time_generated_unix desc;
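The same kinds of queries can be run from a script just as easily as from SQLiteSpy. Here is a minimal Python sqlite3 sketch with a cut-down table and made-up rows, just to show the pattern:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events "
             "(Source text, Event_ID number, time_generated_unix number)")
# Hypothetical rows standing in for real loaded event records
conn.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    ("EventLog", 6009, 1168484400),                 # system start
    ("EventLog", 6006, 1168516000),                 # system stop
    ("Removable Storage Service", 98, 1168500000),  # removable media event
])

# When did the Removable Storage Service write to the log?
removable = conn.execute(
    "SELECT * FROM events WHERE Source LIKE 'Remov%'").fetchall()

# Everything in a given window, newest first; the unix column sorts cleanly.
window = conn.execute(
    "SELECT * FROM events "
    "WHERE time_generated_unix BETWEEN 1168484317 AND 1168516719 "
    "ORDER BY time_generated_unix DESC").fetchall()
```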


Now if you use X-Ways Forensics you can define the perl script under the external viewer programs; when you select a file it will run this program and load up the database just as if you were running it from the command line.

If there is interest I can post a generic perl script to print out reports from the database, just leave some comments and I will put one out there.

Hopefully I did not confuse you too much; if I did, let me know and I will try to make it less confusing.

Tuesday, January 9, 2007

A Tiddly Wiki Travel Notebook

How many times have you been on site somewhere without access to the Internet and wanted some small piece of information that you can't quite remember but know where to find on the net? Well, TiddlyWiki can come to the rescue. Here is an excerpt from their website:

a free MicroContent WikiWikiWeb created by JeremyRuston and a busy Community of independent developers. It's written in HTML, CSS and JavaScript to run on any modern browser without needing any ServerSide logic. It allows anyone to create personal SelfContained hypertext documents that can be posted to a WebServer, sent by email or kept on a USB thumb drive to make a WikiOnAStick. Because it doesn't need to be installed and configured it makes a great GuerillaWiki. The latest version is 2.1.3, and it is published under an OpenSourceLicense.

I have added some information to a TiddlyWiki to get anyone who downloads it started. I tried to enter some tiddlers (the name given to a unit of microcontent) with examples of how you can use it, to give you a leg up. It can be saved from the following link:

http://RedWolfComputerForensics.com/downloads/Computer_Forensic_Tiddly_Wiki.htm

Now for the challenge: how much information do you think we can put in this wiki to help spread our knowledge to each other? If you would like to help out on this little project, you can email me at Mark.McKinnon@sbcglobal.net (put "Forensic Wiki" in the subject) with your entries and I will put them in the wiki with proper credit to you.

Questions/Comments/Suggestions/Help?

Monday, January 8, 2007

No This Is Not Mork From Ork.

Ok, so I watched the original series when it came out, but I am not that old. What I plan to enlighten you about today is the Mork database file format. This format is mainly used by Firefox for Internet history; a few other files use it as well, but we will concentrate on the history.dat file. Now there are numerous programs that will read this file (Mandiant Web Historian, Digital Detective NetAnalysis, and even a perl script by Jamie Zawinski); the problem is what to do when the file is broken. When the file is broken it cannot be processed by any of the above programs. A friend of mine recently had this problem and was unable to parse a history.dat file with any of them. By understanding how the database worked I was able to lend him a hand.

Below is a simple history.dat file. I will take it apart and show how to hand-parse it. If nothing else, this will allow you to eyeball the file to see if there is anything that would keep one of the above programs from parsing it. The file I will use is as follows; please note the first line is somewhat edited to make it show up in this posting.

// < !-- < mdb : mork:z v="1.4" > -->
< <(a=c)> // (f=iso-8859-1) (8A=Typed)(8B=LastPageVisited)(8C=ByteOrder) (80=ns:history:db:row:scope:history:all) (81=ns:history:db:table:kind:history)(82=URL)(83=Referrer) (84=LastVisitDate)(85=FirstVisitDate)(86=VisitCount)(87=Name) (88=Hostname)(89=Hidden)>
<(80=LE)(8B=http://redwolfcomputerforensics.com/)(9F=1166463003773295) (9A=1166448674185405)(8D=redwolfcomputerforensics.com)(8E =C$00o$00m$00p$00u$00t$00e$00r$00 $00F$00o$00r$00e$00n$00s$00i$00c$00s$00/\$00U$00n$00l$00o$00c$00k$00 $00P$00a$00s$00s$00w$00o$00r$00d$00s$00/$00E$00l$00\e$00c$00t$00r$00o$00n$00i$00c$00 $00D$00i$00s$00c$00o$00v$00e$00r$00y$00) (A0=3)(8F=http://www.certified-computer-examiner.com/)(9E =1166462906212309)(9B=1166448699473785)(91 =certified-computer-examiner.com)(92 =I$00S$00F$00C$00E$00 $00-$00 $00C$00e$00r$00t$00i$00f$00i$00e$00d$00 $00C\$00o$00m$00p$00u$00t$00e$00r$00 $00E$00x$00a$00m$00i$00n$00e$00r$00) (9D=2)>
{1:^80 {(k^81:c)(s=9)[1(^8C=LE)]} [A(^82^8B)(^84^9F)(^85^9A)(^88^8D)(^87^8E)(^86=3)] [B(^82^8F)(^84^9E)(^85^9B)(^83^8B)(^88^91)(^87^92)(^86=2)]}
@$${1{@
<(A1=1166463169292586)(A2=4)(A3=http://www.google.com/)(A4 =1166463174778175)(A5=google.com)(A6=1)(A7=G$00o$00o$00g$00l$00e$00)>
{-1:^80 {(k^81:c)(s=9)1 } [-A(^82^8B)(^84^A1)(^85^9A)(^88^8D)(^87^8E) (^86=4)]B [-C(^82^A3)(^84^A4)(^85^A4)(^88^A5)(^8A=1)(^86=2)(^87^A7)]}@$$}1}@
@$${2{@@$$}2}@

Kind of ugly when you first glance at it, but once you understand it, it is not so bad.

File Header: // < !-- < mdb :mork:z v="1.4"> -->

Fields and descriptions for the database; not all fields will be used:

< <(a=c)> // (f=iso-8859-1) (8A=Typed)(8B=LastPageVisited)(8C=ByteOrder) (80=ns:history:db:row:scope:history:all) (81=ns:history:db:table:kind:history)(82=URL)(83=Referrer) (84=LastVisitDate)(85=FirstVisitDate)(86=VisitCount)(87=Name) (88=Hostname)(89=Hidden)>

Actual history data. Note that the last three sections are all delimited by <>

<(80=LE)(8B=http://redwolfcomputerforensics.com/)(9F=1166463003773295) (9A=1166448674185405)(8D=redwolfcomputerforensics.com)(8E =C$00o$00m$00p$00u$00t$00e$00r$00 $00F$00o$00r$00e$00n$00s$00i$00c$00s$00/\$00U$00n$00l$00o$00c$00k$00 $00P$00a$00s$00s$00w$00o$00r$00d$00s$00/$00E$00l$00\e$00c$00t$00r$00o$00n$00i$00c$00 $00D$00i$00s$00c$00o$00v$00e$00r$00y$00) (A0=3)(8F=http://www.certified-computer-examiner.com/)(9E =1166462906212309)(9B=1166448699473785)(91 =certified-computer-examiner.com)(92 =I$00S$00F$00C$00E$00 $00-$00 $00C$00e$00r$00t$00i$00f$00i$00e$00d$00 $00C\$00o$00m$00p$00u$00t$00e$00r$00 $00E$00x$00a$00m$00i$00n$00e$00r$00) (9D=2)>

Cross-reference of the actual history data to the fields. Note this section is delimited by curly braces ({}). This is the important part, and I will try to give as much detail as I have found out.

{1:^80 {(k^81:c)(s=9)[1(^8C=LE)]}
[A(^82^8B)(^84^9F)(^85^9A)(^88^8D)(^87^8E)(^86=3)]
[B(^82^8F)(^84^9E)(^85^9B)(^83^8B)(^88^91)(^87^92)(^86=2)]}

The following should always be in this section; I am not sure what it is, but it has been in every file I have looked at: 1:^80 {(k^81:c)(s=9)[1(^8C=LE)]}.

The rest is the actual mapping, in brackets ([]), for each site visited. Each pair in parentheses maps a field to its data, e.g. ^82 = URL and ^8B = http://redwolfcomputerforensics.com. The mapping of the first record (A) would look like this:

(^82^8B) = (URL=http://redwolfcomputerforensics.com)

(^84^9F) = (LastVisitDate=1166463003773295 - First 10 digits is Unix time)

(^85^9A) = (FirstVisitDate=1166448674185405 - First 10 digits is Unix time)

(^88^8D) = (Hostname=redwolfcomputerforensics)

(^87^8E) = (Name=Computer Forensics/Unlock Passwords/Electronic Discovery) - this data field actually needs to have all the $00 removed to make it readable.

(^86=3) = (VisitCount = 3)
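To see why the "first 10 digits" rule works: these Mork timestamps are microseconds since the Unix epoch, so dropping the last six digits (which, for dates in this era, leaves the first 10) gives an ordinary Unix time. A quick check in Python, using the LastVisitDate from record A above:

```python
from datetime import datetime, timezone

mork_ts = 1166463003773295           # LastVisitDate from record A
unix_seconds = mork_ts // 1_000_000  # drop the microseconds
when = datetime.fromtimestamp(unix_seconds, tz=timezone.utc)
print(when.isoformat())              # a date in mid-December 2006
```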

If we look at record B, then we can see one more database field being used:

(^82^8F) = (URL=http://www.certified-computer-examiner.com/)
(^84^9E) = (LastVisitDate=1166462906212309 - First 10 digits is Unix time)
(^85^9B) = (FirstVisitDate=1166448699473785 - First 10 digits is Unix time)
(^83^8B) = (Referrer = http://redwolfcomputerforensics.com)
(^88^91) = (Hostname=certified-computer-examiner)
(^87^92) = (Name=ISFCE - Certified Computer Examiner) - this data field actually needs to have all the $00 removed to make it readable.
(^86=2) = (VisitCount = 2)

You can now see that field ^83 was added, which shows that the http://www.certified-computer-examiner.com site was reached from a link on http://redwolfcomputerforensics.com.

Two fields that have not been mentioned above are the following:

8A - whether the URL was typed into the address bar; will have a value of 1
89 - whether hidden data was passed in the URL; will have a value of 1
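The dictionary cells are regular enough that the hand-parsing can be partly automated. The sketch below is my own illustration, not a full Mork parser: it pulls the (key=value) pairs out of a dictionary section with a regex and strips the $00 escapes and line-continuation backslashes from the values.

```python
import re

def parse_dict(section):
    """Extract (key=value) cells from a Mork dictionary section
    into a plain dict. Values have their line-continuation
    backslashes and $00 escape sequences removed. Values that
    themselves contain a ')' would need a smarter parser."""
    cells = {}
    for key, value in re.findall(r"\(([0-9A-Fa-f]+)\s*=([^)]*)\)", section):
        value = value.replace("\\", "").replace("$00", "")
        cells[key] = value
    return cells

# Cut-down sample based on the dictionary section shown above
sample = ("<(80=LE)(8B=http://redwolfcomputerforensics.com/)"
          "(87=C$00o$00m$00p$00u$00t$00e$00r$00)>")
d = parse_dict(sample)
print(d["8B"])   # the URL, intact
print(d["87"])   # the Name, with the $00 bytes stripped
```

Resolving a record like (^87^8E) is then just two dictionary lookups: ^87 in the field dictionary gives "Name", and ^8E in the data dictionary gives the value.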

A couple of things to note that I have observed:

When you exit Firefox there may be multiple cross-reference sections delimited by @$${X{@ style characters. These appear to be the last browsing session; each time Firefox loads, it reads history.dat in and consolidates the file back into the main 4 sections.

In each extra cross-reference section you may see updated data (e.g. LastVisitDate or VisitCount) as well; this gets consolidated as noted above.

Hopefully this helps and I did not confuse everyone.

Questions/Comments?

Friday, January 5, 2007

Printing Restore Point Information From Another Computer

Since Harlan Carvey gave me an intro I felt I had to give up something else in order to make you want to come back.

Looking at the restore points, you may wonder what all those files actually are and what they relate to in each RPXXX directory. Now if you are like me, you will start to poke around and see if you can figure it out. At some point you may see that in the change.log.x there is a reference from the file found in the restore point to another file located elsewhere. As for what all the other information in the file means, who knows, since MS does not divulge that information.

Now MS has a nice little tool in the %SYSTEMROOT%\system32\restore directory called srdiag.exe. This program parses the restore point directory and gives you all kinds of information about your restore points. You are probably asking how this will help, since when you run srdiag it will only produce the reports (it creates a cab file with all the info stored in it) for the restore points on your analysis computer.

Here are the steps to get restore point information from an XP image that you are analyzing (substituting your information for mine):

1. Make sure Restore Points have been turned on for your analysis machine.

2. Make sure you have access to your "System Volume Information" directory. Use the following command to grant it: cacls "<drive>:\System Volume Information" /E /G <username>:F (filling in your drive letter and user name).

3. From the XP image you are analyzing, copy the restore point directory in the "System Volume Information" directory to the "System Volume Information" directory on your analysis machine. At this point you should see 2 directories named like _restore{GUID}: one with your analysis machine's GUID and the other from the image.

4. You will now need to edit your registry. Go to HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\SystemRestore\Cfg and rename the MachineGuid value to MachineGuid_old. Next create a new String Value named MachineGuid, edit it, and put in the GUID that you copied from your image; use MachineGuid_old as a template if you need to, since the format of the 2 entries should be similar.

5. Now run srdiag.exe from the %SYSTEMROOT%\system32\restore directory. Once the program has completed you should see a cab file named after your machine. In the cab file there will be all kinds of good information for you to look at.

6. Finally, delete or rename the MachineGuid registry entry, rename MachineGuid_old back to MachineGuid, and remove the copied directory from your "System Volume Information" directory.

That is it in a nutshell. Enjoy looking at all the information provided to you by srdiag.

A New Beginning

Well, here is my first post. I was reading Harlan Carvey's latest windowsir blog post, took his advice, and am starting this blog. I know I do not have as much knowledge as others in the field and I am still constantly learning, but who knows, maybe I can help one or two individuals; at the least I will hopefully get better at writing.

What I would like to accomplish with this blog is to pass along knowledge that either I or someone else has gained. If someone else passes info along to me, expect them to get credit; there is nothing I hate more than people passing along an idea without the originator getting credit for it. I will try to post a couple times a week but will not make any promises.

How did I come up with the title cfed-ttf? I was reading Jesse Kornblum's blog's latest entry about naming tools and had to come up with something: cfed is Computer Forensics/Electronic Discovery and ttf is Tips/Tricks and inFo. I tried to be creative, but sometimes it is hard.

Now on to the show (the reason we are here):

Ever wonder what hard drives have been attached to an XP machine? Well, if restore points have been enabled, then wonder no more. There is a file called drivetable.txt under the root restore point directory. This file contains a list of the hard drives that are attached when the computer boots up (from what I can tell so far). Now the cool thing is that each restore point directory also holds a copy of drivetable.txt from the time the restore point was taken. Hopefully you can see where I am going with this: since each restore point is a point in time, you should be able to see when a hard drive was or was not attached, based on the date/time of the restore point, and build a timeline of the hard drives attached to the computer. This works with USB hard drives as well.
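The idea above can be scripted: walk each RPxxx directory, grab the modification time of its drivetable.txt, and print a crude timeline. Here is a sketch under the directory layout described in the post (real restore points live under System Volume Information\_restore{GUID}, and real drivetable.txt files may be UTF-16 encoded):

```python
import os
import time

def drivetable_timeline(restore_root):
    """For each RPxxx directory under restore_root, report when its
    drivetable.txt was last written -- a rough timeline of which
    drives were attached at each restore point."""
    timeline = []
    for entry in sorted(os.listdir(restore_root)):
        path = os.path.join(restore_root, entry, "drivetable.txt")
        if entry.startswith("RP") and os.path.isfile(path):
            stamp = time.strftime("%Y-%m-%d %H:%M:%S",
                                  time.gmtime(os.path.getmtime(path)))
            # real files may need encoding="utf-16"; replace on errors here
            with open(path, errors="replace") as f:
                drives = f.read()
            timeline.append((entry, stamp, drives))
    return timeline
```

Comparing the drive lists between consecutive restore points then shows which drives appeared or disappeared, and roughly when.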

Feedback? Good or bad, who cares; I know I am not always right and I will admit it. If I have to be wrong to learn something, then I can eat a little humble pie.