Monday, December 31, 2007

Another Year Come and Gone.......

As another year vanishes and a new one starts I would like to take the time to wish every reader a happy new year. With the new year I will try to post more often and keep you all coming back for more information. I have a few things in the works and I am always looking for more projects. If anyone has any ideas drop me a line (mark dot mckinnon at sbcglobal dot net), or if anyone has a good idea for a utility that could be written, let me know.

Wishing you all a safe and joyful New Year

Mark

Sunday, December 30, 2007

ISystemWiper Analysis

I ran into this program during an examination. The examination was on site and I could not take the image with me, so I could not boot it up in VMware and check out the program to see what the settings were. What I did do was take a copy of the directory where the program was installed, the registry keys for the program, and the downloaded install file. From there I was able to bring it back into my lab, install it, and figure out what settings the user had enabled. I have created a PDF of my notes and it can be downloaded here.

The program is pretty interesting in that it allows you to create your own custom plugins to delete user-definable items. There are also quite a few plugins that come with the product, and by going through the files you can actually learn something about those products. If you check out the program you will see for yourself. I did not go through all the plugins; I will leave that up to you if you are curious.

Now, is this something that you all would like to see more of? If so, let me know and I can try to create some more. If anyone out there has done any analysis on any programs and would like to share, please let me know and I can make you a guest blogger.

As always questions/comments/thoughts/improvements?

Saturday, December 29, 2007

Vinetto - A Thumbs DB Parser/Viewer

A while ago I blogged about a program to view the contents of a thumbs.db file. In the comments Christophe Monniez AKA d-fence (who created the FCCU GNU/Linux boot CD) brought to my attention the open source project Vinetto, a forensics tool to examine Thumbs.db files written by Michel Roukine. It is a command line python script that works on Linux, Mac OS X and Cygwin (win32). I tried it on Cygwin and thought it was a great tool to have in the old tool belt; you can never have enough tools. Since I do not know python I thought it would be a good time to learn it. Well, I am still trying to learn it and hopefully in the future will be providing some new tools in it.

Now as most of you know, most of my tools will run on Linux (command line) and Windows (command line and GUI) and I strive to make sure that they will work on both (some will only work on Windows because that is where the libraries are). I saw that Vinetto would work under Cygwin but not natively under win32, so I thought I would see what it would take to make it work natively under win32. Those who just want to use the program and not worry about what I changed can skip to the link at the bottom where the program is (I have compiled the program so there is no need to have python on your system to use it).

After downloading it and making sure that I had the prerequisites installed (Python 2.3 or later and PIL (Python Imaging Library) 1.1.5 or later) I opened up the files and looked at what would have to change. Here is all that had to change.

Changes to program vinetto

Line 1 change #!/usr/local/bin/python to #!c:\python25\python
Line 160 change /usr/share/vinetto/header to ./res/header
Line 161 change /usr/share/vinetto/quantization to ./res/quantization
Line 162 change /usr/share/vinetto/huffman to ./res/huffman
Line 320 change open(outputdir + "/" + NUMBERED_THUMBS_DIR + "/" + TNfname(SIDstr, "2") + ".jpg", \
to open(outputdir + "/" + TNfname(SIDstr, "2") + ".jpg", \


Changes to program vinreport.py

Line 62 change /usr/share/vinetto/HtRepTemplate.html to ./res/HtRepTemplate.html


Changes to program setup.py

replace everything with the following

from distutils.core import setup
import py2exe

setup(console=['vinetto'])



run the following to create the executable

python setup.py py2exe

Once the executable has been created (if you already have python and PIL then you do not need to create the executable) you just need to copy the res directory underneath the dist directory (if you are lost here do not worry, I have everything compiled for you, and if you have done this before you will understand). I then tested it out and it works great (there is one error stating the number of arguments is not correct that I have not looked into); it outputs the files and creates the HTML report.

Future changes/additions for this, I think, will be to add an AutoIt GUI front end for the Windows users who are command-line averse, and an option to scan a directory (top-most directory) to find all the thumbs.db files. For any other additions I would have to use the program some more.

For more information about this program go to the website http://vinetto.sourceforge.net/.

To download my changes and an executable copy of the program go here

Friday, November 30, 2007

Registry Repository Project....

Well, now that I am back from holiday and have waded through all the e-mails and voice mails, I can finally get something out here.

For anyone who has not followed the comments on Harlan's blog for Pimp my Registry, I have volunteered to create a database for a registry repository. I have created an initial ERD and was wondering if all you readers out there would take a look at it and see if there is any information that I have missed. I tried to keep the names informative, so that is why they seem long. The PDF can be found here. A description of the fields can be found here.

The group_app table will define what type of investigation you may want to do, ie: CP, Fraud, IR, etc. The category table will define what type of categories the apps fall into, ie: P2P, Internet, Security, etc. I also tried to think ahead and added tables to be used for parameter files (INI and config files) and any notable files that might be used within an application. I have also added a user table because I think it is important that whoever submits entries should be able to be contacted with questions about them. This will also provide some ownership of the data as well.
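To give you a feel for what I am describing, here is an illustrative-only sketch of a couple of the tables in perl/SQLite (my usual combination). The table and column names below are just my shorthand, not the actual ERD; the authoritative definitions are in the PDF.

#!/usr/bin/perl
# Illustrative sketch only - column names are guesses, see the posted ERD for the real design
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:SQLite:dbname=registry_repo.db3', '', '',
                       { RaiseError => 1, AutoCommit => 1 });

$dbh->do(q{CREATE TABLE IF NOT EXISTS group_app (
             group_app_id   INTEGER PRIMARY KEY,
             group_app_name TEXT     -- CP, Fraud, IR, ...
           )});
$dbh->do(q{CREATE TABLE IF NOT EXISTS category (
             category_id   INTEGER PRIMARY KEY,
             category_name TEXT     -- P2P, Internet, Security, ...
           )});
$dbh->do(q{CREATE TABLE IF NOT EXISTS registry_info (
             registry_info_id INTEGER PRIMARY KEY,
             category_id      INTEGER REFERENCES category(category_id),
             registry_key     TEXT,
             registry_value   TEXT,
             submitted_by     TEXT   -- ties back to the user table for ownership
           )});

$dbh->disconnect;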

I have also thought of a few other things to add but I would like the public's opinion. Do the following fields add value to the Registry_Info table?

Key_created_on_Install - was the key created on installation of the app or created later

Format of data - Unicode, ROT13, etc..


Are there any other additions anyone thinks should be added?

The main goal of this project will be to collect this information into one source and then, from that source, export the information into usable files (parameter files, xml, html, csv, etc.) that can be used with other programs as well as with the programs that I have written to read/parse the registry into a database and report on it.

Hopefully this all makes sense to you.

As always Questions/Comments/Thoughts/Modifications?

Friday, November 2, 2007

Vista Recycle Bin Names in X-ways.....

For all you X-Ways Forensics users out there, here is a script/executable that you can define in X-Ways that will copy to the clipboard the actual name of the $R file based on the $I file. You can then add the file name to the comments section in the directory browser.

To use it, define the file in X-Ways as a callable external program. In the Recycle.Bin directory, right click on the $I file and call the executable; it will copy the actual file name to the clipboard so you can just paste it into your directory browser.

Questions/Comments/Suggestions/Improvements????

Dumpster Diving with Ovie.....

On the Oct 15 Cyberspeak podcast Ovie Carroll talked about Vista Recycle Bin forensics. Based on Ovie's chat I have created a program that will read the $I files and create a simple report. The report consists of the $I file name, the actual filename with directory, the date/time the file was deleted and the file size. I have also added the functionality to copy the $R file (the actual data file that was deleted) back to its actual name in a directory you specify.
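For those curious about what is inside a $I file, here is a bare-bones perl sketch of the layout as I understand it: a fixed 8-byte header, 8 bytes of original file size, an 8-byte FILETIME deletion date, then the original path in UTF-16. That layout is my reading of the format, not a listing of the packaged program, which does quite a bit more.

#!/usr/bin/perl
# Minimal sketch of reading a Vista $I record; layout assumptions are mine
use strict;
use warnings;
use Encode qw(decode);

my $i_file = shift or die "Usage: $0 <\$I file>\n";
open my $fh, '<:raw', $i_file or die "Cannot open $i_file: $!\n";
read($fh, my $buf, -s $i_file) == -s $i_file or die "Short read on $i_file\n";
close $fh;

# Bytes 0-7 header/version, 8-15 original size, 16-23 deletion time (FILETIME)
my ($size_lo, $size_hi) = unpack('x8 V V', $buf);
my ($ft_lo, $ft_hi)     = unpack('x16 V V', $buf);
my $orig_size = $size_hi * (2**32) + $size_lo;

# FILETIME is 100-ns ticks since 1601-01-01; convert to a Unix epoch time
my $ticks   = $ft_hi * (2**32) + $ft_lo;
my $deleted = gmtime(int($ticks / 10_000_000) - 11_644_473_600);

# The rest of the record holds the original path in UTF-16LE
(my $path = decode('UTF-16LE', substr($buf, 24))) =~ s/\0.*$//s;

print "Original name : $path\n";
print "Original size : $orig_size bytes\n";
print "Deleted (UTC) : $deleted\n";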

So what does the program do? Once you fire up the GUI you need to provide a filename for the database that will be created to store the data that is read. Provide a directory where the $I files are; if you want to copy the $R files to their original names then they need to be in the same directory. Optionally, provide an output directory where you want to write out the deleted files with their actual names. Once that is done, press the buttons and watch it go to work. When you are ready to run the report you can either sort the data in ascending or descending order based on the deletion date and show the report in either Excel or your favorite web browser.

If you want to see the gory details, the code is provided. As always this script can be run on OS's other than Windows (the report piece will have to be modified some).

The programs can be found here. As always Questions/Comments/Improvements let me know.

Tuesday, October 30, 2007

Interest in Making Other tools X-ways Forensics Friendly...

The thought just occurred to me to see if there is any interest in making more of the tools I have put out there callable from X-Ways. If there is interest in this, let me know and as I develop them I will add this capability as well. If you would like one of the older tools to be callable from X-Ways, let me know and I can try to accommodate it. Leave a comment or shoot me an email: mark dot mckinnon at sbcglobal dot net.

What's that time Zone....

A few weeks ago I was asked to image a couple of laptops by a global company. The laptops had previously been deployed at 2 of their overseas sites. After imaging the drives I went to look at the BIOS of the machines so I could document the settings and the date/time. After looking at the date/time I wondered what time zone it was set to. Since I am lazy and really only want to do this once, I came up with this little AutoIt GUI program that will tell me what time zone a specific date/time is from compared to my time zone.

For example, if my current date/time is 10/30/2007 8:30:00 and the BIOS date/time is 10/30/2007 19:00:00, then the time zone setting is GMT+5:30. Possible areas that may be in this time zone are Chennai, Kolkata, Mumbai, New Delhi, and Sri Jayawardenepura.
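The arithmetic behind it is simple enough that a few lines of perl show the idea. The example values and the GMT-5 offset for my own machine below are assumptions for illustration; the actual tool is the AutoIt GUI linked below.

#!/usr/bin/perl
# Rough sketch of the offset arithmetic; example values/offsets are assumptions
use strict;
use warnings;
use Time::Local qw(timegm);

# Example values from the post (examiner assumed to be at GMT-5)
my $my_utc_offset_min = -5 * 60;
my @local = (0, 30,  8, 30, 9, 2007);   # 10/30/2007 08:30:00 (sec,min,hr,mday,mon-1,yr)
my @bios  = (0,  0, 19, 30, 9, 2007);   # 10/30/2007 19:00:00

my $diff_min    = (timegm(@bios) - timegm(@local)) / 60;   # +630 minutes here
my $bios_offset = $my_utc_offset_min + $diff_min;          # -300 + 630 = +330

printf "BIOS clock appears to be GMT%+d:%02d\n",
       int($bios_offset / 60), abs($bios_offset % 60);     # prints GMT+5:30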

The program can be found here. Once you start up the program it will put the current date/time in the 2 fields. Change the date/time in the field you want to figure out, then click on "Get Time Zone Information". It will then bring up a box with the potential cities for that time zone (based on Windows time zones).

Questions/Comments/Suggestions?

Calling Thumbcache Parser from X-Ways Forensics...

I saw a post on the X-Ways forums about carving data out of the thumbcache and thought to myself, now why did I not think of making my thumbcache parser callable from X-Ways? Well, now you can. I made a few small modifications to the program and you can now call it from X-Ways Forensics by right clicking on one of the thumbcache files and picking an external program.

To install it, download the zip file from here. Unzip it into the directory of your choice. Take the headersig.txt and put it in the temp folder you have defined in X-Ways Forensics (this is under Options=>General; if you do not do this the program will not work and will just hang). Now define the EXE or perl script (your choice) in the external programs definition section (Options=>External Programs). That is all that is needed to set it up. To run it, right click on one of the thumbcache_??.db files and pick the external program to run. The program will then ask you where you want to put the jpg/bmp/png files that will be exported from the thumbcache file. Once the program has finished you can then import the files into your case.

As always I hope you find this useful. Questions/Comments/Suggestions?

Monday, October 15, 2007

Thumbs Up To Ovie......

On the Sept 23 podcast of Cyberspeak, Ovie Carroll talked about the thumbcache that is new in Windows Vista. In response I have created a perl script with an AutoIt GUI front end that will parse all 4 of the thumbcache files.

The base program is based on the sigs.pl script originally written by Harlan Carvey. The perl script opens the specified thumbcache files and scans for file header signatures. Once it finds a jpg, png or bmp file header it backs up and reads what I will call the file header record of that image file. In this record is the size and internal name of the file. I have not figured out how it gets that particular name, but if someone knows please let all of us know. The thumbcache_32 and 96 files appear to contain only bmp files while the thumbcache_256 and 1024 contain pngs and jpgs. For all the gory details see the perl code.
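If you want to play with the basic signature-scan idea before digging into the full script, here is a stripped-down perl sketch. It only reports header offsets; the record layout with the size and internal name is in the real code, and note that the two-byte "BM" signature will produce false hits without that record validation.

#!/usr/bin/perl
# Bare-bones signature scan sketch; the real tool validates against the record header
use strict;
use warnings;

my $file = shift or die "Usage: $0 thumbcache_xx.db\n";
open my $fh, '<:raw', $file or die "Cannot open $file: $!\n";
read($fh, my $data, -s $file);
close $fh;

my %sigs = (
    jpg => "\xFF\xD8\xFF",
    png => "\x89PNG\x0D\x0A\x1A\x0A",
    bmp => "BM",                      # weak 2-byte signature, expect false positives
);

for my $type (sort keys %sigs) {
    my $pos = 0;
    while (($pos = index($data, $sigs{$type}, $pos)) != -1) {
        printf "%s header found at offset 0x%X\n", uc $type, $pos;
        $pos++;
    }
}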

Since the thumbcache files I had were very limited, this is about as much as I know. As for the GUI, just pick the file you want to parse, input the directory where the thumbcache files are (with a trailing "\"), input a directory to write all the images to, then click on the parse button and watch it go.

Since this does not use any Windows-specific perl modules, there is no reason you cannot run it on Linux or a Mac. The code and executable can be found here.

Thanks to Ovie for the idea for this program. Ovie and Bret keep up the great work on the podcast.

As always questions/comments/thoughts/problems let me know. My eyes and ears are always looking for great new projects.

Wednesday, October 3, 2007

Database Security.....

I was just catching up on some reading and came across this article in eWeek about securing the database.

Now as I read this I have to shake my head and wonder why all they mention is the DBA being in charge of this. In my experience the DBA usually has the database pretty secure. It is when you introduce the applications that will use the database that it becomes insecure. For those who do not know, in an Oracle database one of the highest permissions you can grant is DBA; in SQL Server it is SA, and in DB2 it is SYSADM. For quite a few installs that I have been involved with using Oracle and SQL Server databases, the installation needs either an account created with DBA or SA, or the actual SA account itself. As far as I am concerned this is just pure laziness on the application side. I know it is easier to just grant DBA/SA as you do your development, which is fine because that is usually a test/development environment, but before you release it to prime time take the 10-15 minutes to figure out the access you actually need. I just love it when the user actually has access to drop and create users, tables, tablespaces, etc. because the application says they need the access.

The next thing I really love is all the applications that leave user names and passwords in plain text in their configuration files. Talk about insecure: what is better than having a web server out on the DMZ with a user name/password in plain text in an XML configuration file? If the DBA was involved in the installation and is aware of this, then something can be done to minimize the impact (figuring out the maximum access that is actually needed and only granting that access), but usually the application folks are in charge of this, so the DBA does not know that the account with DBA rights is sitting out on the DMZ in plain sight.

The last thing I really love is when you get those application developers demanding DBA access. I don't know if it is because they can't have that access that they want it, or what, but they always want it. Here is a conversation between myself and a developer about this:

Developer: I need DBA access.

Mark: Why do you need DBA access?

Developer: Because I need to access things.

Mark: What things? Do you need to create tablespaces?

Developer: No I don't need to create tablespaces, but I need DBA Access.

Mark: Do you need to create users, profiles, switch log files, create rollback segments, etc....

Developer: No, No nothing like that but I need DBA access.

Mark: Well, why don't you figure out the actual access you need and I will grant it to you. I don't have a problem granting access to you if you need it, but you do not need DBA.

Manager: Well, isn't it just easier to grant DBA than to figure out the access?

Now this is where the conversation just went over the cliff, along with the manager and the developer.


So now that I am done ranting about this: Thoughts/Questions/Comments?

Tuesday, October 2, 2007

Help Wanted....Lurkers Apply within

I am looking for a few good lurkers. In the coming months I will have some new tools to test and I would really love to have a few lurkers out there test them for me. It is always good to get a different perspective on things, different views and different data. I could send them to some of the people I know, but I thought this would be a good opportunity for some lurkers. If you are out there and want to get involved but do not think that you can contribute, then this opportunity is for you. I do not care what your level is, from beginner to expert; everyone can contribute. I will just need some of your time to test some things that I am working on before I release them here. If you feel this opportunity is for you, send me an email at Mark dot McKinnon at sbcglobal dot net with a subject of "Help Wanted...Lurker Applying".

Monday, September 17, 2007

CSC/Offline File Parser/Copier

Addendum, Mar 22, 2008: Look at the March 22, 2008 blog entry as it has a newer version of the software. That post can be found here.


As promised, here is a link to the CSC/Offline File Parser/Copier. There is some more work that needs to be done on this, but it does work pretty well.

In the zip file you can run the csc_parser_gui.exe (AutoIt program) if you want to run it in Windows. Here is an explanation of the fields on the screen:

csc base file to parse : This is either 00000002 or csc1.tmp (backup of 00000002). Without this file your CSC is useless.

Database file to create: This is the sqlite database file that will be created and read from.

Base CSC Directory: Where you saved the CSC directory to. The default CSC directory is C:\Windows\CSC

Type of program to run: You can either run the perl scripts (.pl) or the executables (.exe). I did this in case you want to change the perl scripts and still use the GUI.

Program to open the report in: Which program you want to open the report in, Excel or a web browser.

Once you fill in the fields you want, just press the button of the action you want; if you do not fill in one of the fields required for that action it will let you know.

Now if you want to run the perl scripts on a platform other than Windows, here is the sequence to run them in and the parameters needed:

read-csc-dir.pl <base file to parse, either 00000002 or csc1.tmp> <DB file name; if it does not exist it will be created> <Base CSC directory, ie: c:\Windows\CSC>

read-csc-file.pl <DB File Name created in previous step>

To get a report of the files, run this program (note: this will create a temp file and will try to open Excel or a web browser, so you may want to modify this program to your needs):

print-csc-files.pl <DB File Name created in previous step> <Directory to copy files to> <A or U for Allocated files or Unallocated files>

Sqlitespy.exe has been included in case you want to look at the database.

Any feedback would be appreciated. One thing that probably needs to be done is to parse the sqlite database and recreate the directory structure so the files can be copied into the correct directories. If I get some more time I will try to document the file structures, but in the meantime look at the code and you should be able to figure it out. Hope this helps someone out.

As always Comments/Questions/Suggestions are always welcome.

Addendum, Dec 6, 2007: After several questions I realized I failed to mention that you have to read/parse the CSC file before you can report on it, so hit the read/parse button before any other button. Also, if you have spaces in any of the file/directory names then you will have to put double quotes around the whole field. If I get time I will try to make the change in the code to allow for this.

Thursday, August 30, 2007

Offline Folders

Offline Folders used to go by the name of Client Side Cache. This is evident in the directory this information is stored in, C:\Windows\CSC; this directory is still there even if you do not use Offline Folders. You will find Offline Folders more in a corporate environment and mainly on laptops. The thought behind this is that you want to store your data on a network drive but also have access to it when you are not on the network. There is a synchronization process that happens between your computer and the network drive where your data is stored; your settings determine when the synch will happen.

One of the interesting things about this is that if you log into a laptop at your company that is not yours, your files on the network drive will start to synch to that laptop. After the synch your files should now be on that laptop. Now let's say you are looking at leaving the company and decide to remove all your files from the network drive and then resynch on your laptop; all the data is then removed from the offline folder on your laptop and is gone. Now what about that other laptop you logged into? Guess what, your files are still on that one and they can potentially be harvested. All you E-Discovery folks should be drooling at the mouth right about now, since files that were deleted may be found somewhere else (especially if the backup tapes of the network drive are no good, lost, etc.). You just have to find out where you logged in besides your own laptop.

Now one downside to this: say your cube mate is an idiot and stores his porn on the network drive. He decides to log in to your laptop and his files are now on your laptop. There is an investigation and they take both your laptop and his. Without understanding Offline Folders you may get accused of having porn on your laptop when you never put it there; your idiot cube mate did.

Now let's take a high level look at the offline folders (I am still gathering information so there may be some holes in it). Under the C:\Windows\CSC directory you will find the following:

Directories named d1 to d8 - these hold all the files used for offline folders, the file names are system generated.

file 00000001 - this points to the network drive that you will synch to

file 00000002 - this file holds all the references to what directories your files are stored in and what their names are.

file 00000003 - Don't know, have not figured this out yet (I did say this was a work in progress and any help would be appreciated).

file csc1.tmp - this appears to be a copy of file 00000002

Now, in each directory (d1..d8) you will find 2 types of files: ones that have a first character of 0 and ones that have a first character of 8. The ones with a first character of 8 are the actual files that you stored there. The files that start with 0 hold the information/cross reference between the generated name and the actual name, as well as the size of the file and the date the file was created (this is another place where I am still figuring it out, but I do have some of the information).
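Here is a quick perl sketch of walking an exported CSC directory and splitting the d1..d8 files into the two types described above. The path handling is an assumption on my part; point it at wherever you copied C:\Windows\CSC to.

#!/usr/bin/perl
# Sketch: classify CSC files by first character (0 = metadata/xref, 8 = stored data)
use strict;
use warnings;

my $csc_base = shift or die "Usage: $0 <exported CSC directory>\n";

for my $sub (map { "d$_" } 1 .. 8) {
    my $dir = "$csc_base/$sub";
    next unless -d $dir;
    opendir my $dh, $dir or die "Cannot open $dir: $!\n";
    for my $name (grep { -f "$dir/$_" } readdir $dh) {
        my $kind = $name =~ /^0/ ? 'metadata/xref'
                 : $name =~ /^8/ ? 'stored file'
                 :                 'other';
        printf "%-14s %-14s %10d bytes\n", $name, $kind, -s "$dir/$name";
    }
    closedir $dh;
}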

In the next post I will dive deeper into the format of the files that start with 0 and provide some Perl programs that will be able to read those files and provide some useful information.

Hopefully I was clear in what I just stated; if not, hopefully you will let me know.

Questions/Thoughts/Comments????

It's been a while

It has been quite a while since I last posted something. I hope to soon rectify this and start to post a few things. Some of the things that I hope to talk about will be Offline folders, a few informational postings on different programs, and other things.

Tuesday, May 22, 2007

Comparing Large Hash sets Against NSRL.......

I recently saw a post on a list I belong to asking about DeDuplicating and DeNSRLing some files. The poster was trying to do this in a very popular forensic product and after 4 days he still had nothing. Someone replied (I had thought the same thing) about using a SQL Server database to do this. Now if you are not that familiar with using databases then this would not be an easy task. Thinking about this, I thought it would make a good project. To start off you first need to accommodate a large amount of data and it should perform well (that is a bigger challenge than you may think).

The parameters for the project are:

1. The NSRL reference table will only hold 1 set of hash values (I chose MD5 to use but you could choose SHA1 or CRC).

2. Load the NSRL data in a timely manner.

3. Be able to add my own hash sets to compare against as well.

4. Use as much free software as possible.

5. Load my hashes to compare in a timely manner.

6. Compare my hashes in a timely manner.

7. Be able to easily report and extract knowns and unknown hash sets from what I loaded.

8. Work on both Windows and Linux (Sorry Mac)

I started off by using SQLite with a perl script to load the NSRL data. I was able to load the NSRL data in approx 1 hour, which for the amount of data and an embedded database I thought was pretty good, especially as you would only do this task perhaps once a quarter. The problem came next when I tried to create an index on the table and it went out to lunch. After a couple of hours I knew I would have to come up with a different database solution. I then looked at the free version of Oracle (I am pretty familiar with this database and it also has a Linux version, which is why I chose it over SQL Server); here is where it starts to get hard, since I am limited to only 4GB of data in the free version. I installed it without a problem and started it up. It was using approx 300MB of memory, so for anyone out there wanting to do this you should probably have 1GB of memory on your machine.

I next started to create some tablespaces, users and tables. I then used Oracle's SQL*Loader product to load the data into the database and then indexed the table. This took about 3.5 GB between the index and table (40,000,000+ rows). I then created a list of hashes from a previous examination using X-Ways Forensics version 13. I loaded this data into the database (600,000+ rows) and then created tables of known and unknown hashes for the examination. After trying many different things to make it fast and small I finally came up with the following:

The NSRL table is deduplicated from 40,000,000 rows down to 14,000,000+ rows and from 3.5 GB (table and index) down to 1.2 GB (table and index), with a load time of approx 36 minutes.

My hash set was smaller than 500MB and took approx 5 minutes to load the 660,000+ rows and create 2 tables (known hash set and unknown hash set). The known hashes table has approx 46,000 rows, with the unknown hashes table having 604,000+ rows.

Now I have uploaded the scripts here (sql and sqlload) and batch files to run to create your own little hash comparison system. There is a install.txt file to help you get started. Once you install Oracle Express and download the NSRL data you should be able to get started.

If you don't want to use MD5 like I did, then just change the MD5 references to SHA1 or CRC and change the load cards to only load what you want. You can also change the hash set tables to whatever you want to load. Just use what I supplied as a template to make your modifications. With a little creativity you can also create your own lists of knowns and unknowns and use those to compare against as well; just use the nsrl schema as a template.
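To show the kind of set-difference the comparison boils down to, here is a perl/DBI sketch. The table and column names are placeholders for whatever you build from the supplied scripts, and the connect string assumes a local Oracle XE install; the real SQL is in the download.

#!/usr/bin/perl
# Sketch of the known/unknown split; table/column names are placeholders
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect('dbi:Oracle:XE', 'nsrl', 'password',
                       { RaiseError => 1, AutoCommit => 1 });

# Hashes from the exam that are NOT in the NSRL reference set
$dbh->do(q{CREATE TABLE unknown_hash AS
             SELECT c.md5, c.file_name
               FROM case_hash c
              WHERE NOT EXISTS (SELECT 1 FROM nsrl_hash n WHERE n.md5 = c.md5)});

# Hashes from the exam that ARE known to NSRL
$dbh->do(q{CREATE TABLE known_hash AS
             SELECT c.md5, c.file_name
               FROM case_hash c
              WHERE EXISTS (SELECT 1 FROM nsrl_hash n WHERE n.md5 = c.md5)});

$dbh->disconnect;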

Looking back, I feel I accomplished everything I set out to. It is fast: 41 minutes from start to finish if I do not have the NSRL already loaded, otherwise roughly 5 minutes for 660,000+ rows. It is a free solution. I can now export the rows and create reports as well. Using Oracle Express I can run it on either a Windows or Linux platform, and since I do not use any GUI tools there are not too many modifications needed to make it work on either platform. I would love to hear your experiences with using this and what timings you get with your hash set comparisons.

Questions/Comments/Thoughts?

Monday, May 7, 2007

Thumbs DB Files

I received an email about a new product from InfinaDyne. It is called ThumbsDisplay and it displays the contents of the Thumbs.db file. It will also do the following:

Cut and paste the picture to another application

Print 3 types of reports (Contact Sheet with all the pictures displayed, Picture with date and time, Full Size picture with date and time).

Scan the drive for all thumbs.db files.

You can also call the program with a thumbs.db file as a parameter and it will load that file into the viewer. This is really nice since you can then use it to view thumbs.db files from within other forensics programs, ie: X-Ways Forensics. One of the best things about this program is the price, only $29.99. If you want to test drive it before you buy they also have a demo version you can download.

The only drawback I see right now is that you can only print the reports, you can't save them. You need something installed like CutePDF to print the file to a PDF. Maybe in a future release they will add this feature. Otherwise it seems like a great inexpensive tool to keep in the toolbox. And in case you are wondering, I did pay for my own copy of the program; I am not getting anything free here.

Thoughts/Comments/Questions?

Monday, April 23, 2007

Registry Files in the Restore Point.

You're in the middle of an examination of a Windows XP machine and you're wondering what some registry settings were during a specific time, and you think to yourself, why don't I look in the System Restore Points? As you navigate to the restore point directory all of a sudden you see 20+ restore points and you think "Oh ????? (insert word here)". As you look at all the restore points you start to think about how you are going to get all that information out without it taking forever. You only want to look at 5 different registry keys over some time period that resides within those 20+ restore points. Don't despair, I have a solution.

What I have done is take Harlan Carvey's regp.pl program and modify it to scan a directory, read the raw registry files and insert the entries into a SQLite database (of course). I then created a program to read the database and output registry keys in chronological order, so you can see the dates and times of the entries along with the restore point they belong to, in a comma-separated file. For example, here is a sample of the output looking at the following registry keys.

Registry File Name, Registry Key, Last Write Date Time, Registry Key Name, Data Type, Registry Value, Registry Value, File Location
_REGISTRY_MACHINE_SOFTWARE,\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Prefetcher,Wed Apr 18 20:55:01 2007,StartTime,2007/04/18-16:55:01, //-:U:,c:/mark/restore/RP603/snapshot,
_REGISTRY_MACHINE_SOFTWARE,\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Prefetcher,Tue Apr 17 23:29:36 2007,StartTime,2007/04/17-19:29:36, //-:):6,c:/mark/restore/RP602/snapshot,
_REGISTRY_MACHINE_SOFTWARE,\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Prefetcher,Mon Apr 16 22:57:56 2007,StartTime,2007/04/16-18:57:56, //-:W:V,c:/mark/restore/RP601/snapshot,
_REGISTRY_MACHINE_SOFTWARE,\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Prefetcher,Sat Apr 14 21:10:18 2007,StartTime,2007/04/13-13:41:27, //-:A:,c:/mark/restore/RP600/snapshot,


_REGISTRY_MACHINE_SOFTWARE,\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Prefetcher,Sat Apr 14 21:10:18 2007,ExitTime,2007/04/13-12:22:04, //-:":,c:/mark/restore/RP600/snapshot,
_REGISTRY_MACHINE_SOFTWARE,\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Prefetcher,Mon Apr 16 22:57:56 2007,ExitTime,2007/04/16-16:05:08, //-::,c:/mark/restore/RP601/snapshot,
_REGISTRY_MACHINE_SOFTWARE,\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Prefetcher,Tue Apr 17 23:29:36 2007,ExitTime,2007/04/17-16:33:14, //-:3:,c:/mark/restore/RP602/snapshot,
_REGISTRY_MACHINE_SOFTWARE,\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Prefetcher,Wed Apr 18 20:55:01 2007,ExitTime,2007/04/18-15:51:38, //-:Q:8,c:/mark/restore/RP603/snapshot,

Pretty cool.

Now for the gory details. The main program takes as input a directory (where you exported the restore points to) and a database file name that you want to create. It scans the directory recursively until it finds a file whose name begins with _REGISTRY (the beginning of the name of all the registry files in the restore point). It then opens that file, parses it and inserts the records into the database. As it inserts the records it will take anything with a record type of binary (with a length less than 2000 bytes) and convert it to ASCII so it is potentially readable. The report program takes a database file name and output file name as parameters. It reads a txt file that specifies what registry entries will be output. I have also included an AutoIt GUI front end for the command-line-averse folks. The GUI front end will ask for the restore point directory and database file name for reading the registry, and the database file name and output directory for the report. You can specify a verbose mode which will tell you what files you are currently processing. There is one more option to choose on the GUI, and that is the file extension to run; I did this in case you want to run either the .pl (perl source) or the .exe (executable version of the perl source).
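As a taste of the directory-scan half, here is a small perl sketch that just finds the registry files to be parsed. The actual hive parsing and database inserts live in the modified regp.pl and are not reproduced here.

#!/usr/bin/perl
# Sketch: recursively collect the _REGISTRY* files from an exported restore point tree
use strict;
use warnings;
use File::Find;

my $rp_dir = shift or die "Usage: $0 <exported restore point directory>\n";

my @hives;
find(sub { push @hives, $File::Find::name if -f && /^_REGISTRY/ }, $rp_dir);

print "Found ", scalar @hives, " registry files to parse:\n";
print "  $_\n" for sort @hives;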

One small problem with the program is that reading the registry files is pretty slow. In my testing I had a total directory size of approx 250MB (only counting the registry file sizes) which included 4 restore points, and it took about 20 minutes to parse all of them. I have looked at the program and most of the time is spent reading the registry files themselves, not inserting into the database. The report runs pretty quickly though. One thing to note: I felt it was quicker to get everything versus looking only for what I want, since what you want may change during the exam or over time, and the only thing you would then have to change is the report ini file.

Hopefully I have not confused everyone. Some of the code is ugly and all the comments may not be there, so I apologize for that. As always, report problems and so forth back to me; hopefully it helps out, saves time and gets you the data you need.

The zip file with all the goodies can be found here.

Questions/Comments/Suggestions?

Sunday, April 8, 2007

U3 Smart Technology.......

Man, do I love technology sometimes. What is great about U3 smart technology is that, as long as autorun is enabled for the CD, you can potentially tell when one of these USB devices has been plugged in. By looking in the Windows prefetch directory all you have to look for are these files: Launchu3.exe, Launchpad.exe and cleanup.exe. The 2 launch programs are run whenever the USB drive is plugged in (assuming autorun is enabled). The cleanup program is run whenever the USB drive is ejected using the launchpad.

Now if you are lucky you may see multiple entries for these files in the prefetch, or you may see different created and modified dates for them as well. Here is an example from the prefetch directory of the multiple dates and times.

Filename Created Modified Accessed
LAUNCHU3.EXE-XXXXXXXX.pf 2/5/2007 13:56 2/13/2007 5:52 2/13/2007 5:52
LAUNCHPAD.EXE-XXXXXXXX.pf 2/5/2007 13:57 2/13/2007 5:52 2/13/2007 5:52
CLEANUP.EXE-XXXXXXXX.pf 2/12/2007 21:54 2/13/2007 7:01 2/13/2007 7:01

Looking at these entries in the prefetch, it tells me that the USB drive was attached on February 5, 2007 and also February 13, 2007. The drive was then removed on February 12, 2007 and February 13, 2007. Pretty cool that I can tie the USB device to being used on 3 separate occasions. Also, by looking in the setupapi.log file you can see when the drive was first attached, which potentially adds a 4th time the drive was attached. Now you see why I love technology sometimes.
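If you want to pull these dates quickly from a mounted image or live box, here is a small perl sketch that lists the U3-related prefetch files and their timestamps. The XP prefetch path and the use of stat()'s ctime field as the creation time on Win32 builds of perl are assumptions on my part.

#!/usr/bin/perl
# Sketch: list U3-related prefetch files and their created/modified times
use strict;
use warnings;

my $prefetch = shift || 'C:/Windows/Prefetch';

for my $pf (glob "$prefetch/{LAUNCHU3,LAUNCHPAD,CLEANUP}.EXE-*.pf") {
    my ($mtime, $ctime) = (stat $pf)[9, 10];   # on Win32 perl, ctime is creation time
    printf "%-40s created %s  modified %s\n",
           (split m{/}, $pf)[-1],
           scalar localtime $ctime,
           scalar localtime $mtime;
}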

Thoughts/Comments/Questions?

Thursday, April 5, 2007

URL History.

I wrote this program back in December 2005. What it does is read in an IE or Mozilla history file and output it to a comma delimited, tab delimited or HTML file. You can also open it in Excel or a browser and sort the records in ascending or descending order. I know there are many programs that will do this, but this program has one special feature that I added: you can make it output url records between specific dates so you can narrow down your search of url records.

When I created this I modeled it after pasco. It is a GUI program and that is why it is so large; this would probably be a good candidate for an AutoIt front end instead of perl. One thing I did find out about pasco is that it looks in the index.dat file for the size of the file and only reads up to that size. What I found is that the file size stored in the index.dat files is not always kept up to date. My program just reads until the end of the file, so it will always get all the records.

The code and executable can be found here. As always comments, suggestions, improvements to the program are always welcome.


Thoughts, Comments, Suggestions?

Monday, March 26, 2007

Acquiring a Forensic Copy of a Floppy Disk Checklist For Peer Review

Here is a checklist for Acquiring and Creating a Forensic Copy of a Floppy Diskette. It can be found here. Give it the once or twice over and let me know how it looks and if any changes should be made to the doc. Enjoy.

Comments/Thoughts/Questions?

Reviews

Over on Hogfly's Forensic Incident Response blog he has a great entry about peer reviews. I agree with everything he says and support it. One thing I was thinking about: by publishing this information you are letting every Tom, Dick and Harry have it, and they could then throw out their own shingle and state that they are a computer forensics professional because they know how to acquire a drive. Now this may be true, but as you question these individuals and talk to them at length you will realize that they are no better than a 1st line of support. You know what I am talking about: you call support and they run you through every step you have already run through before calling them, which is why you are calling them. What I am getting at is that the process/procedure is only as good as the person who understands it and can explain it. When talking to someone just going through the steps of the procedure, you can ask why they did step 6. If you get the "deer in the headlights" look, you know you can question them further and that they do not understand the peer reviewed process that is published on the Internet. So I guess the previous line of thought should now be a moot point.

Now that Hogfly has thrown down the gauntlet I guess it is time to polish up those procedures and get a peer review or 2.

Comments/Thoughts/Questions?

Tuesday, March 20, 2007

Remote Capture Solution Posted On X-Ways Capture Site

The solution I posted earlier about using X-Ways Capture for remote imaging has been posted on their site.

Monday, March 19, 2007

Mention on Cyberspeak

If you were listening to the end of Cyberspeak then you might have heard my company mentioned, as well as this blog (not by name though; hopefully in a future podcast). Hopefully I can live up to keeping the topics flowing and providing information that is useful and helpful. As always, any comments, tips, topics, or help you may need are always welcome. You can reach me at mark[dot]mckinnon[at]sbcglobal[dot]net.

Thanks for the mention Bret and Ovie.

Questions/Comments/Thoughts?

Reading Apache Access Logs

There are many scripts out there that read the Apache access log. More recently, Jesse Kornblum posted his script for parsing the logs for search queries. Well, here is my attempt at doing this; as always, there is a database involved.

All this script does is read in the apache log file, parse it and save it to the database. You can then write sql to get the data back out, ie:

select * from apache_log where access_dttm = '10/Mar/2007';

Now to run the program just type read_apache_log.pl access_log. The program and table creates can be downloaded here.
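For anyone who wants to see the shape of it without downloading, here is a cut-down perl sketch of the parse-and-load idea. The table definition and column names here are placeholders; the real table creates are in the download.

#!/usr/bin/perl
# Sketch: parse common-log-format lines with a regex and load them into SQLite
use strict;
use warnings;
use DBI;

my $log = shift or die "Usage: $0 access_log\n";
my $dbh = DBI->connect('dbi:SQLite:dbname=apache_log.db3', '', '',
                       { RaiseError => 1, AutoCommit => 0 });

$dbh->do(q{CREATE TABLE IF NOT EXISTS apache_log
           (ip TEXT, access_dttm TEXT, request TEXT, status TEXT, bytes TEXT)});
my $ins = $dbh->prepare(q{INSERT INTO apache_log VALUES (?,?,?,?,?)});

open my $fh, '<', $log or die "Cannot open $log: $!\n";
while (<$fh>) {
    # e.g. 1.2.3.4 - - [10/Mar/2007:12:34:56 -0500] "GET /x HTTP/1.1" 200 1234
    next unless m{^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\S+) (\S+)};
    my ($ip, $stamp, $req, $status, $bytes) = ($1, $2, $3, $4, $5);
    (my $date = $stamp) =~ s/:.*$//;        # keep '10/Mar/2007' to match the query above
    $ins->execute($ip, $date, $req, $status, $bytes);
}
close $fh;
$dbh->commit;
$dbh->disconnect;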

For users of X-Ways Forensics, you can define this program as an external program and load the database right from X-Ways as you are doing your analysis. Just make sure you change the spot where your database points to.

Thoughts/Questions/Comments?

Tuesday, March 13, 2007

Your Local Public Library.

If you are not aware, your local public library more than likely has software that you can check out and install (both kids' software and adult software, not porn). One good thing about this is that you can create a virtual machine, install the software you checked out and start creating some hash sets. Some libraries will probably have quite a list of software to check out. At my local library there are approx 30 adult software titles and approx 50 children's titles, and it is not a very large library. So happy hash set creation.

Thoughts/Questions/Comments?

Monday, March 12, 2007

Imaging that remote PC/Server.....

So what better thing to do on a Monday morning than go through all the e-mails, blogs and news that has piled up over the weekend, especially on a time change weekend. I will try to keep this light, but I am sure it will raise questions. What I have for you today is a way I have found to do a remote image of a machine. The tools I will use are a simple batch file, AutoIt, psexec and X-Ways Capture (Capture being the only non-free tool, but well worth the money). I will not go into very much detail about Capture except for just doing the image of the machine; it is worth looking at though, as it has many features for live imaging and incident response as well.

I have uploaded a zip file with my AutoIt script and executable and a couple of batch files; it can be found here. What I do in a nutshell is psexec a batch file to the remote machine and execute it. I use the copy flag on psexec, which copies the file to the machine to run it. From what I have tested (I still need to do more but wanted to introduce this to everyone), this is what I have seen being changed:

1. Entry in $MFT for batch file and file stored in $MFT (file is only 111 bytes)
2. On Xp systems prefetch files are created for psexec.exe, batch file, capture.exe, net.exe.
3. Registry is updated.

Now for what I did. In the AutoIt script Remote_capture.exe I ask for the following fields to be filled in:

1. Remote computer's Name - Defaults to current machine name and will be name of machine to image.
2. Domain\Username - Domain (if any) and username to log on with; must be an administrator on that machine.
3. Password - Password of the account to login.
4. Capture Drive Mapping - Drive and unc path to where the capture software is.
5. Output Drive Mapping - Drive and unc path to where the output (image and logs) will go.
6. Capture executable directory - Directory on drive where X-Ways Capture Resides.
7. Capture output directory - Directory on drive where output will go.

There are 2 buttons to push: one shows the mapped drives on the machine you are going to image, which is helpful to make sure that you do not try to map the wrong drive; the other starts the process. Once all the information is filled in and you start the process, here is what happens.

1. Batch file is executed to run psexec and pass it all the fields above as parameters which executes another batch file on the machine to acquire.
2. Batch file is copied to the remote machine and executed and does the following:

     1. Map the drive for the capture software.
     2. Map the drive for output to go to.
     3. Change directory to where the capture software is.
     4. Execute X-Ways Capture and image the drive.
     5. Delete both drive mappings.

3. Batch file is executed to show drive mappings of the remote machine to show that they have been deleted.

That is it in a nutshell. I have tested this on a VM server, a remote PC and Citrix, and I have successfully imaged each machine and was able to import the image into X-Ways Forensics.

A few neat features of this are:

1. The AutoIt script and batch file can be given to an administrator and show that you are not doing anything out of the ordinary.
2. The passwords do not echo back so an administrator can type the password in for you so you do not need to know it (yes I know you can change the batch file to echo it but we have no need to do that).
3. When the scripts run on the remote machine no windows are opened, and the only indication that anything is running is a couple of extra processes in the task manager and lots of disk activity.
4. If you really want to be slick you can rename the capture.exe program to svchost.exe (or something along that line) so that if a user does look, or the program abends, it will look like a normal running program (I did abend the program and saw an error message pop up on the remote machine saying capture.exe abended).


Hope this helps. If it is not clear, let me know and I will try to explain further.

Thoughts/Questions/Comments?

Monday, March 5, 2007

Service and Process Information For IR

Over at Harlan Carvey's blog he talks about getting the service information during an incident response. Well, let's take it a step further by collecting this information before the incident and storing it in a database. By doing this we can compare the data when an incident does happen, or, if we're lucky and have added monitoring to the processes, we may catch it.

What I have put together is a program that will read the database to get a list of servers that you want to get the process and service information for. I have also included web pages that you can view the data with and update the known process and service information. If you regularly run the batch program you can see if there are any unknown processes added to the servers. If you want to take it a step further you could check the database after the batch run and send a message if any unknown services/processes are found (this assumes that you have gone through every service/process on each server, which if you have a large server farm may take a while).
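Just to illustrate the collection side, here is a perl sketch that pulls the service and process lists from a (possibly remote) server over WMI and prints them. The packaged batch_update program stores this information in the database instead; the choice of Win32::OLE/WMI here is my own for the sketch, not necessarily what the download uses.

#!/usr/bin/perl
# Sketch: enumerate services and processes on a server via WMI and print them
use strict;
use warnings;
use Win32::OLE qw(in);

my $server = shift || '.';    # '.' means the local machine
my $wmi = Win32::OLE->GetObject("winmgmts://$server/root/cimv2")
    or die "Cannot connect to WMI on $server\n";

print "-- Services on $server --\n";
foreach my $svc (in $wmi->InstancesOf('Win32_Service')) {
    printf "%-40s %s\n", $svc->{Name}, $svc->{State};
}

print "-- Processes on $server --\n";
foreach my $proc (in $wmi->InstancesOf('Win32_Process')) {
    printf "%-40s PID %s\n", $proc->{Name}, $proc->{ProcessId};
}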

The zip file for these programs is here. There are 3 directories:

SQL - Has the create statements for the database
batch_update - Program that reads the servers from the database and updates the current processes/services in the database. I did not write this program, just extended one that I had found; the original author was Thomas Berger.
web_pages - The web pages for data entry and showing what service/process is running on what servers.

As you check it out I am sure you might find a few mistakes and possible extensions to the programs as well. If you extend it further, shoot me an email and let me know what you did; it is always interesting to see how ideas can grow.

Questions/Comments/Suggestions?

Friday, March 2, 2007

Autoit and Things to Come...

No, I have not fallen off the face of the earth; between the kids' midwinter break (I don't remember this when I went to school) and work I have been a little busy. I have a few things I am working on which I hope you will like. In upcoming posts I will chat about Remote Acquisitions, Offline Folders/CSC and anything else I can come up with, or anything anyone else wants to mention. I am always looking for good topics to research and share with everyone. If you don't want to post a comment then just shoot me an email (mark dot mckinnon at sbcglobal dot net).

A colleague of mine showed me this nifty little free Windows script automation tool called AutoIt. It is pretty simple to use and you can make nice GUI front ends for many command line tools. It can be compiled into a stand-alone executable and even comes with an editor and build environment. The biggest struggle I had was getting the screens I had created formatted (my problem, not the language's); once I overcame that hurdle it was a pretty slick tool. You can easily provide a nice GUI wrapper for your command line programs to give them a more professional, polished look. You can also make it easier for users who are not as command-line savvy as others to use the command line programs. In the near future I will have a sample program that I have written with AutoIt.

Thinking out loud, maybe one project for this would be a wrapper around Brian Carrier's Sleuth Kit. Since there is really no native Windows port of Brian's Autopsy Forensic Browser, it might be a cool project to start.

Thoughts/Comments/Suggestions??

Monday, February 19, 2007

Ever need to know what words were in what emails? Ever need a cross reference from those words to the emails they came from? Don't want to spend a lot of money to get this done, but want to be able to do this with many mailbox types and do it quickly? Well, do I have some good news: you can, with some Perl scripting, a sqlite database (I told you I love databases) and 2 programs from Fookes Software, Aid4Mail and Mailbag Assistant (both are also part of Paraben's email examiner).

So here is what you need to do. I will use an Outlook PST as an example. First open up Aid4Mail and export your PST file to a directory in eml format (make sure you recreate the directory structure of the mailbox). Next open up Mailbag Assistant and import all the eml files, including the subdirectories. You will need to create the following script and template to use (I will put all the files in a zip archive and put them on my webserver for you).

Script: Save_Body_As_Text

IfEmpty End
MergeData Save_Body_As_Text

Template: Save_Body_As_Text

>>>Files ?\{Mailbox}\{Subject}.txt
{Body}

The script will take all the selected emails (alt-a) from the "Grid View - Main" and run the template, unless no emails were selected. The template will save the text body of the eml file to a directory you will be prompted for, with a structure of <Directory Specified>\<Mail Box, IE: Inbox, Deleted, etc..>\<subject line>.txt. Once all the files have been extracted, run the get-word.pl Perl program, passing it the top level directory where the email bodies were extracted; it will extract all the words and put them into the database (I am not including a listing of the program but will have it available for download). Now you can run sql against the database to find the keywords that you want. You can also run the following sql against the database to create copy statements for you, so that you can copy the emails you want out to another directory (if you want to get even fancier, include a table with the keywords you are looking for and add a subselect to the query; if you don't know what that is, email me and I will explain it further):

select 'copy "'||directory_found_in||'/'||filename_found_in||
'" "c:/stuff/test/test/'||filename_found_in||'"'
from word_file_xref a, words b
where b.word_seq_num = a.word_seq_num and word = 'Oracle';

You can also make a slight modification and add a table with words you do not want to see (ie: and, if, or, not, etc.).
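Here is a rough perl sketch of the get-word.pl idea, using the table and column names that appear in the query above and including the optional noise-word list just mentioned. The real script is in the download; this sketch assumes the words and word_file_xref tables already exist from the supplied create statements.

#!/usr/bin/perl
# Sketch: walk the extracted email bodies, split into words, build words + xref rows
use strict;
use warnings;
use DBI;
use File::Find;

my $top = shift or die "Usage: $0 <directory of extracted email bodies>\n";
my $dbh = DBI->connect('dbi:SQLite:dbname=email_words.db3', '', '',
                       { RaiseError => 1, AutoCommit => 0 });

my %noise = map { $_ => 1 } qw(and if or not the a to of);   # words to ignore
my (%seen, $seq);

my $ins_word = $dbh->prepare('INSERT INTO words (word_seq_num, word) VALUES (?,?)');
my $ins_xref = $dbh->prepare('INSERT INTO word_file_xref (word_seq_num, directory_found_in, filename_found_in) VALUES (?,?,?)');

find(sub {
    return unless -f && /\.txt$/i;
    my ($dir, $name) = ($File::Find::dir, $_);
    open my $fh, '<', $name or return;
    my %in_this_file;
    while (my $line = <$fh>) {
        for my $word (map { lc } $line =~ /(\w+)/g) {
            next if $noise{$word} || $in_this_file{$word}++;
            unless (exists $seen{$word}) {
                $ins_word->execute(++$seq, $word);
                $seen{$word} = $seq;
            }
            $ins_xref->execute($seen{$word}, $dir, $name);
        }
    }
    close $fh;
}, $top);

$dbh->commit;
$dbh->disconnect;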

I will package all the code and database create statements up and also include an exe of the Perl program in case you do not have Perl but still want to test out the program (I know the code is not the neatest but it is functional). It can be found here.

One interesting thing to note is that this could be the beginning of an open source e-discovery email production package. Any takers for a project like this?

Questions/comments/suggestions?

Friday, February 9, 2007

Incident Response Hash Set Creation....

I use X-Ways Forensics as my main tool and I am pretty impressed with the product and the support you get from the vendor. One of the things that I have been doing is creating my own hash sets. X-Ways allows you to create the hash sets using many different methods (SHA-1, MD5, SHA-256, etc.). Since X-Ways is very light I thought I would try a little experiment. Using version 13.0 I installed it on my hard drive (no registry settings needed, and it weighs just over 4MB with the external viewer and hash database). I then RDP'd to a QA server and mapped a drive back to my machine. I then fired up X-Ways and examined the drives on the QA machine. I was able to create a SHA-256 hash set of each drive of the server (4 separate hash sets at this point for 4 drives). I then exported the 4 hash sets into a directory and reimported the directory, naming the hash set the same name as the server (approx 78,000 hashes created). I then waited 4 hours, rehashed all the drives on the QA server and compared the results to what I created earlier. I was left with approx 150 files that I had to look at, which makes life a lot easier during an incident response. This is one of the many features X-Ways has that can be used to help during incident response.

Tuesday, February 6, 2007

Posting of Sample Notes

As requested, I am putting up a sample of the information I have (it has been sanitized) from some notes I recently took during an investigation. The file is here. In the future when you leave comments, if you can let me know who you are I would greatly appreciate it. If you don't feel comfortable leaving your name then just shoot me an email at Mark.McKinnon@sbcglobal.net; I like to know who is requesting things and commenting.

I know I have not blogged lately; I am getting some stuff ready to share with everyone, so be patient. If anyone has something they want passed along, let me know and I will pass it along. You can contact me at the above email address. Make sure you put something in the subject relating to the blog.

Anyone willing to share any file hashes that they have built? I have some hashes that I am putting together and will try to get them out within the month.

Sorry this is short but more will be coming.

Monday, January 22, 2007

Notes During the Investigation....

So I am just sitting down to start an investigation and get out my notebook and pencil so I can jot down any notes, when suddenly the lightbulb goes off and I wonder, why not try TiddlyWiki? I blogged a little while ago about using it to keep specific information in so you would not have to search for it later; I was not thinking about using it during an investigation to keep my notes in.

For the reports I write I have 4 sections: Results/Things Found, Opinion, Steps Taken and Technical Explanations. So instead of writing things down in my notebook I started typing in TiddlyWiki. I make each thing I find its own tiddler with a tag for the section of the report I would put it in. As I started to do this I found out how much simpler it is than writing it down in a notebook. I can easily cut and paste things as well as keep everything in a timeline so I know when I ran across it. I can search and make references to other sections as well. I can also use this as a template for the next investigation that I have; it can be the start of a very detailed and comprehensive checklist.

Now I know some people will argue against the need for a checklist, but I think it is a good idea to have one. I don't know about you, but there is so much information out there that remembering it all is just too much sometimes. I think it would be better to have tiddlers of things to look at, and if something does not apply to the case then say so and move on (with the many flavors of OS's there will be things that you will do for some OS's and not others). If a lawyer wants to know why you did not do a specific step then your notes should say why (the OS was Win98, so that is why I did not search the Restore Point directory). In a former life, when I had some pretty big system implementations to do, I always had a checklist to follow. It made sure I did not forget anything, and I could also use it as documentation the next time I did an upgrade to the system, since upgrades were a few years apart.

Questions/Comments/Suggestions???

Friday, January 12, 2007

To DB or Not To DB The Report

As requested I have uploaded a sample of a program (create_report.zip) that will create a comma delimited file from executing a sql statement. The program expects the name of an ini-type file as an argument. create-report.pl is the program and sql-report.txt is the ini file. create-report.pl reads the file (sql-report.txt) that is passed to it; each line has 3 parameters in it (database file, output file and sql text). Each sql statement gets parsed, executed and written to its output file. I chose to create a comma delimited file because that is the easiest; you could create any type of output you want. With this program all you have to do is edit or create new ini files for each database you have.
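The real script is in the zip above; the sketch below just shows the idea. It assumes each line of the ini file carries the three parameters separated by a pipe character (the separator and layout here are my assumptions for the sketch, so check sql-report.txt for the actual format):

#! c:\perl\bin\perl.exe
# Sketch of a create-report style driver: for each line of an ini file
# (database file | output file | sql text) run the query against a SQLite
# database and dump the rows as comma delimited text.
use strict;
use warnings;
use DBI;

my $ini = shift || die "Usage: create_report.pl <ini file>\n";
open my $fh, '<', $ini or die "Cannot open $ini: $!";

while (my $line = <$fh>) {
    chomp $line;
    next unless $line =~ /\S/;
    my ($db_file, $out_file, $sql) = split /\|/, $line, 3;

    my $dbh = DBI->connect("dbi:SQLite:$db_file", "", "", { RaiseError => 1 });
    my $sth = $dbh->prepare($sql);
    $sth->execute();

    open my $out, '>', $out_file or die "Cannot open $out_file: $!";
    while (my @row = $sth->fetchrow_array) {
        # crude CSV: quote every field so embedded commas do not break things
        print $out join(',', map { '"' . (defined $_ ? $_ : '') . '"' } @row), "\n";
    }
    close $out;
    $dbh->disconnect;
}
close $fh;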

Questions/Comments/Suggestions?

Thursday, January 11, 2007

To DB or Not To DB.........

Man do I really love to use databases. When you have a decent database and a good design there is nothing that you cannot accomplish. Now when I say databases you are probably thinking Oracle, DB2, SQL Server, MySQL, etc. Those are all great databases with rich features, but I am thinking more along the lines of an embedded database. What I usually use is Sqlite, an embedded relational database that is small and fast and supports most of SQL92. By combining Sqlite and perl I can do many things. Some examples of what I can do are as follows:

Store data from log files and report on them based on different criteria.

Load data and use sql to generate commands, e.g. load up file names and then use sql to generate rename commands for the files (see the short sketch after this list).

Load multiple log files and types and correlate the data into a comprehensive report.
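The rename example in the second item is the kind of thing that only takes a few lines. Here is a rough sketch under my own assumptions: a throwaway table called files with old_name and new_name columns (the database, table and column names are all made up for the illustration):

#! c:\perl\bin\perl.exe
# Sketch: use a throwaway SQLite table to generate rename commands.
# The database/table/column names (work.db3, files, old_name, new_name)
# are only for illustration.
use strict;
use warnings;
use DBI;

my $dbh = DBI->connect("dbi:SQLite:work.db3", "", "", { RaiseError => 1 });

# SQLite's || operator concatenates strings, so a single select can build
# the command line for every file in the table (assumes files sitting in the
# current directory, since ren only takes a name for the target).
my $sth = $dbh->prepare(
    q{select 'ren "' || old_name || '" "' || new_name || '"' from files}
);
$sth->execute();

while (my ($cmd) = $sth->fetchrow_array) {
    print "$cmd\n";    # redirect to a .bat file and run it when happy
}
$dbh->disconnect;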

I will now show you what I am talking about. I will use one of Harlan Carvey's CPAN scripts that reads the event logs. I will use the lsevt3.pl program and make a few modifications to insert the records into a Sqlite database. The initial program looks like this:

#! c:\perl\bin\perl.exe

use strict;
use File::ReadEvt;

my $file = shift || die "You must enter a filename.\n";
die "$file not found.\n" unless (-e $file);

my $evt = File::ReadEvt::new($file);
my %hdr = ();
if (%hdr = $evt->parseHeader()) {
    # no need to do anything...
}
else {
    print "Error : ".$evt->getError()."\n";
    die;
}

my $ofs = $evt->getFirstRecordOffset();

while ($ofs) {

    my %record = $evt->readEventRecord($ofs);
    print "Record Number : ".$record{rec_num}."\n";
    print "Source : ".$record{source}."\n";
    print "Computer Name : ".$record{computername}."\n";
    print "Event ID : ".$record{evt_id}."\n";
    print "Event Type : ".$record{evt_type}."\n";
    print "Time Generated: ".gmtime($record{time_gen})."\n";
    print "Time Written : ".gmtime($record{time_wrt})."\n";
    print "SID : ".$record{sid}."\n" if ($record{sid_len} > 0);
    print "Message Str : ".$record{strings}."\n" if ($record{num_str} > 0);
    print "Message Data : ".$record{data}."\n" if ($record{data_len} > 0);
    print "\n";

    # length of record is $record{length}...skip forward that far
    $ofs = $evt->locateNextRecord($record{length});
    # printf "Current Offset = 0x%x\n",$evt->getCurrOfs();
}
$evt->close();


One of the programs I use to create the database is SqliteSpy. It is a nice GUI to create the database and view the data that you load into it. What I did was create a table with the following definition:

CREATE TABLE events
( file_name text,
Record_Number number,
Source text,
Computer_Name text,
Event_ID number,
Event_Type text,
Time_Generated text,
time_generated_unix number,
Time_Written text,
time_written_unix number,
SID text,
Message_Str text,
Message_Data text);

You can compare this definition to the %record hash in the lsevt3.pl script. I have added 3 extra columns to make the table more flexible; they are:

file_name, which is the event log file name being loaded; this allows multiple event logs to be inserted into the same database.

time_generated_unix and time_written_unix were added to allow for easier selecting and sorting of timestamps.

The following is the changed lsevt3 program that does the inserts into the database (Added lines in Bold):

#! c:\perl\bin\perl.exe

use strict;
use File::ReadEvt;

use DBI;
use DBD::SQLite;

# Attributes to pass to DBI to manually check for errors
my %attr = (
    PrintError => 0,
    RaiseError => 0
);

# Create the connection to the database
my $dbh = DBI->connect("dbi:SQLite:events.db3","","",\%attr);


my $file = shift || die "You must enter a filename.\n";
die "$file not found.\n" unless (-e $file);

my $evt = File::ReadEvt::new($file);
my %hdr = ();

my $sid = "";
my $message = "";
my $data = "";


if (%hdr = $evt->parseHeader()) {
    # no need to do anything...
}
else {
    print "Error : ".$evt->getError()."\n";
    die;
}

my $ofs = $evt->getFirstRecordOffset();

# Make it so inserts run in a batch mode
$dbh->do("Begin Transaction");


while ($ofs) {

    my %record = $evt->readEventRecord($ofs);

    # Human readable versions of the timestamps; the raw values also get
    # stored in the *_unix columns for easier selecting and sorting
    my $time_gen = gmtime($record{time_gen});
    my $time_wrt = gmtime($record{time_wrt});

    # Convert data and check type to be inserted
    if ($record{sid_len} > 0) {
        $sid = $record{sid};
    } else {
        $sid = "";
    }
    if ($record{num_str} > 0) {
        $message = $record{strings};
    } else {
        $message = "";
    }
    if ($record{data_len} > 0) {
        $data = $record{data};
    } else {
        $data = "";
    }

    # Insert statement for the data into the events table. Use prepare and
    # execute to handle quotes in the string fields
    my $sql_stmt = qq{Insert into events values ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)};
    my $sth = $dbh->prepare( $sql_stmt);
    $sth->execute( $file, $record{rec_num}, $record{source}, $record{computername},
        $record{evt_id}, $record{evt_type}, $time_gen, $record{time_gen},
        $time_wrt, $record{time_wrt}, $sid, $message, $data);

    # Check for any errors in the insert statement
    my $err_desc = $dbh->errstr();
    if (($err_desc =~ m/not\sunique/) || ($err_desc eq "")) {
        # duplicate record or clean insert, nothing to report
    } else {
        print "Error in Database $err_desc\n";
        print "loading Record ".$record{rec_num}."\n";
    }

    # length of record is $record{length}...skip forward that far
    $ofs = $evt->locateNextRecord($record{length});
    # printf "Current Offset = 0x%x\n",$evt->getCurrOfs();
}

# Commit the batch
$dbh->do("Commit");


$evt->close();


By running this program from the command line, lsevt3_db.pl Sysevent.evt, the events will now be loaded into the Sqlite database. You can then load multiple event logs into the table and report on them through sqlite.

The following is an example of a query to show when the Removable Storage Service wrote to the event log:

select * from events where source like 'Remov%';

or

To show when the computer was started and stopped:

select * from events where event_id in (6009, 6006) order by time_generated_unix desc;

If you were to add the application event log then you can see everything that happened during a specific time period as well (now you will see why the unix time is important to have, since it is much easier to sort and select by):

select * from events where time_generated_unix between 1168484317 and 1168516719 order by time_generated_unix desc;
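The two epoch values in that between clause are just the start and end of the window you care about. If you would rather not work them out by hand, a couple of lines of Perl with Time::Local will do it; the date used below (January 11, 2007) is only an example:

#! c:\perl\bin\perl.exe
# Turn a human readable UTC date range into the unix timestamps used in the
# time_generated_unix between query. Example window: January 11, 2007.
use strict;
use warnings;
use Time::Local;

# timegm(sec, min, hour, mday, month (0 based), year)
my $start = timegm(0, 0, 0, 11, 0, 2007);    # 2007-01-11 00:00:00 UTC
my $end   = timegm(59, 59, 23, 11, 0, 2007); # 2007-01-11 23:59:59 UTC

print "select * from events where time_generated_unix between ",
      "$start and $end order by time_generated_unix desc;\n";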


Now if you use X-Ways Forensics you can define the perl script under the external viewer programs; when you select a file you can have it run this program, and it will load up the database just as if you were running it from the command line.

If there is interest I can post a generic perl script to print out reports from the database; just leave some comments and I will put one out there.

Hopefully I did not confuse you too much; if I did, let me know and I will try and make it less confusing.

Tuesday, January 9, 2007

A Tiddly Wiki Travel Notebook

How many times have you been on site somewhere without access to the Internet and wanted some small piece of information that you can't quite remember but know where to look for on the net? Well, TiddlyWiki can come to the rescue. Here is an excerpt from their website:

a free MicroContent WikiWikiWeb created by JeremyRuston and a busy Community of independent developers. It's written in HTML, CSS and JavaScript to run on any modern browser without needing any ServerSide logic. It allows anyone to create personal SelfContained hypertext documents that can be posted to a WebServer, sent by email or kept on a USB thumb drive to make a WikiOnAStick. Because it doesn't need to be installed and configured it makes a great GuerillaWiki. The latest version is 2.1.3, and it is published under an OpenSourceLicense.

I have added some information to a TiddlyWiki to get anyone who downloads it started. I tried to enter some tiddlers (the name given to a unit of microcontent) with examples of how you can use it, to try and give you a leg up. It can be saved from the following link:

http://RedWolfComputerForensics.com/downloads/Computer_Forensic_Tiddly_Wiki.htm

Now for the challenge: how much information do you think we can put in this wiki to help spread our knowledge to each other? If you would like to help out on this little project you can email me at Mark.McKinnon@sbcglobal.net (put "Forensic Wiki" in the subject) with your entries and I will put them in the wiki with the proper credit to you.

Questions/Comments/Suggestions/Help?

Monday, January 8, 2007

No This Is Not Mork From Ork.

Ok, so I watched the original series when it came out, but I am not that old. What I plan to enlighten you about today is the Mork database file format. This format is mainly used in Firefox for Internet history; there are a few more files that use it, but we will concentrate on the history.dat file. Now there are numerous programs that will read this file (Mandiant Web Historian, Digital Detective NetAnalysis and even a perl script by Jamie Zawinski); the problem is what to do when the file is broken. When the file is broken it cannot be processed by any of the above programs. A friend of mine recently had this problem and was unable to parse the history.dat file with any of them. By understanding how the database works I was able to lend him a hand.

Below is a simple example of a history.dat file. I will take it apart and show how to hand parse it. If nothing else, this will allow you to eyeball a file to see if there is anything that would keep one of the above programs from parsing it. The file I will use is as follows; please note the first line is somewhat edited to make it show up in the posting.

// < !-- < mdb : mork:z v="1.4" > -->
< <(a=c)> // (f=iso-8859-1) (8A=Typed)(8B=LastPageVisited)(8C=ByteOrder) (80=ns:history:db:row:scope:history:all) (81=ns:history:db:table:kind:history)(82=URL)(83=Referrer) (84=LastVisitDate)(85=FirstVisitDate)(86=VisitCount)(87=Name) (88=Hostname)(89=Hidden)>
<(80=LE)(8B=http://redwolfcomputerforensics.com/)(9F=1166463003773295) (9A=1166448674185405)(8D=redwolfcomputerforensics.com)(8E =C$00o$00m$00p$00u$00t$00e$00r$00 $00F$00o$00r$00e$00n$00s$00i$00c$00s$00/\$00U$00n$00l$00o$00c$00k$00 $00P$00a$00s$00s$00w$00o$00r$00d$00s$00/$00E$00l$00\e$00c$00t$00r$00o$00n$00i$00c$00 $00D$00i$00s$00c$00o$00v$00e$00r$00y$00) (A0=3)(8F=http://www.certified-computer-examiner.com/)(9E =1166462906212309)(9B=1166448699473785)(91 =certified-computer-examiner.com)(92 =I$00S$00F$00C$00E$00 $00-$00 $00C$00e$00r$00t$00i$00f$00i$00e$00d$00 $00C\$00o$00m$00p$00u$00t$00e$00r$00 $00E$00x$00a$00m$00i$00n$00e$00r$00) (9D=2)>
{1:^80 {(k^81:c)(s=9)[1(^8C=LE)]} [A(^82^8B)(^84^9F)(^85^9A)(^88^8D)(^87^8E)(^86=3)] [B(^82^8F)(^84^9E)(^85^9B)(^83^8B)(^88^91)(^87^92)(^86=2)]}
@$${1{@
<(A1=1166463169292586)(A2=4)(A3=http://www.google.com/)(A4 =1166463174778175)(A5=google.com)(A6=1)(A7=G$00o$00o$00g$00l$00e$00)>
{-1:^80 {(k^81:c)(s=9)1 } [-A(^82^8B)(^84^A1)(^85^9A)(^88^8D)(^87^8E) (^86=4)]B [-C(^82^A3)(^84^A4)(^85^A4)(^88^A5)(^8A=1)(^86=2)(^87^A7)]}@$$}1}@
@$${2{@@$$}2}@

Kinda ugly when you first glance at it, but once you understand it, it is not so bad.

File Header: // < !-- < mdb :mork:z v="1.4"> -->

Fields and Descriptions for the database, not all fields will be used

< <(a=c)> // (f=iso-8859-1) (8A=Typed)(8B=LastPageVisited)(8C=ByteOrder) (80=ns:history:db:row:scope:history:all) (81=ns:history:db:table:kind:history)(82=URL)(83=Referrer) (84=LastVisitDate)(85=FirstVisitDate)(86=VisitCount)(87=Name) (88=Hostname)(89=Hidden)>

Actual history data. Note that the last three sections are all delimited by <>

<(80=LE)(8B=http://redwolfcomputerforensics.com/)(9F=1166463003773295) (9A=1166448674185405)(8D=redwolfcomputerforensics.com)(8E =C$00o$00m$00p$00u$00t$00e$00r$00 $00F$00o$00r$00e$00n$00s$00i$00c$00s$00/\$00U$00n$00l$00o$00c$00k$00 $00P$00a$00s$00s$00w$00o$00r$00d$00s$00/$00E$00l$00\e$00c$00t$00r$00o$00n$00i$00c$00 $00D$00i$00s$00c$00o$00v$00e$00r$00y$00) (A0=3)(8F=http://www.certified-computer-examiner.com/)(9E =1166462906212309)(9B=1166448699473785)(91 =certified-computer-examiner.com)(92 =I$00S$00F$00C$00E$00 $00-$00 $00C$00e$00r$00t$00i$00f$00i$00e$00d$00 $00C\$00o$00m$00p$00u$00t$00e$00r$00 $00E$00x$00a$00m$00i$00n$00e$00r$00) (9D=2)>

Cross Reference of the actual history to the fields. Note this section is delimited by Curly Braces ({}). This is the important part and I will try and give as much detail as I have found out.

{1:^80 {(k^81:c)(s=9)[1(^8C=LE)]}
[A(^82^8B)(^84^9F)(^85^9A)(^88^8D)(^87^8E)(^86=3)]
[B(^82^8F)(^84^9E)(^85^9B)(^83^8B)(^88^91)(^87^92)(^86=2)]}

The following should always be in this section; I am not sure what it is, but it has been in every file I have looked at: 1:^80 {(k^81:c)(s=9)[1(^8C=LE)]}.

The rest is the actual mapping in brackets ([]) for each site visited; each pair in parentheses is a mapping of a field to the actual data, e.g. ^82 = URL and ^8B = http://redwolfcomputerforensics.com. The mapping of the first record (A) would look like this:

(^82^8B) = (URL=http://redwolfcomputerforensics.com)

(^84^9F) = (LastVisitDate=1166463003773295 - First 10 digits is Unix time)

(^85^9A) = (FirstVisitDate=1166448674185405 - First 10 digits is Unix time)

(^88^8D) = (Hostname=redwolfcomputerforensics.com)

(^87^8E) = (Name=Computer Forensics/Unlock Passwords/Electronic Discovery) - this data field actually needs to have all the $00 removed to make it readable.

(^86=3) = (VisitCount = 3)

If we look at Record B then we can see one more database field that is being used

(^82^8F) = (URL=http://www.certified-computer-examiner.com/)
(^84^9E) = (LastVisitDate=1166462906212309 - First 10 digits is Unix time)
(^85^9B) = (FirstVisitDate=1166448699473785 - First 10 digits is Unix time)
(^83^8B) = (Referrer = http://redwolfcomputerforensics.com)
(^88^91) = (Hostname=certified-computer-examiner.com)
(^87^92) = (Name=ISFCE - Certified Computer Examiner) - this data field actually needs to have all the $00 removed to make it readable.
(^86=2) = (VisitCount = 2)

You can now see that field ^83 was added, which shows that the http://www.certified-computer-examiner.com site was referenced from a link on http://redwolfcomputerforensics.com.
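Two of the fiddly bits above, the $00 padding in the Name fields and the 16 digit timestamps, are easy to clean up with a couple of helper routines. This is just a sketch based on the observations above (the timestamps look like microseconds since the Unix epoch, which is why the first 10 digits read as Unix time):

#! c:\perl\bin\perl.exe
# Helpers for hand parsing history.dat values pulled out of the mappings above.
use strict;
use warnings;

# Strip the $00 byte markers (and the line continuation backslashes) from a
# Name value so it reads as plain text.
sub clean_name {
    my ($raw) = @_;
    $raw =~ s/\$00//g;
    $raw =~ s/\\//g;
    return $raw;
}

# The Last/FirstVisitDate values appear to be microseconds since 1/1/1970,
# so dropping the last 6 digits gives a normal unix timestamp.
sub mork_time {
    my ($stamp) = @_;
    return scalar gmtime(int($stamp / 1_000_000));
}

print clean_name('C$00o$00m$00p$00u$00t$00e$00r$00'), "\n";   # Computer
print mork_time('1166463003773295'), "\n";                    # last visit, UTC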

Two fields that have not been mentioned above are the following:

8A - whether the URL was typed into the address bar; will have a value of 1
89 - whether hidden data was passed in the URL; will have a value of 1

A couple of things to note that I have observed:

When you exit Firefox, the file may have multiple cross reference sections delimited by @$${X{@ style characters. These appear to be from the last browsing session; each time Firefox loads, it reads history.dat in and consolidates the file back into the main 4 sections.

In each of these extra cross reference sections you may also have updated data (e.g. LastVisitDate or VisitCount); this gets consolidated as noted above.

Hopefully this helps and I did not confuse everyone.

Questions/Comments?

Friday, January 5, 2007

Printing Restore Point Information From Another Computer

Since Harlan Carvey gave me an intro I felt I had to give up something else in order to make you want to come back.

Looking at the restore points you may wonder what all those files actually are and what they relate to in each RPXXX directory. Now if you are like me you will start to poke around and see if you can figure it out. At some point you may see that in the change.log.x there is a reference from the file found in the restore point to another file located elsewhere. What all the other information in the file means, who knows, since MS does not divulge that information.

Now MS has a nice little tool in the %SYSTEMROOT%\system32\restore directory called srdiag.exe. What this program does is parse the restore point directory and give you all kinds of information about your restore points. Now you are probably asking how this will help, since when you run srdiag it will only produce the reports (it creates a cab file with all the info stored in it) for the restore points on your own analysis computer.

Here are the steps to get restore point information from an XP image that you are analyzing (do the following steps, substituting your information for mine):

1. Make sure Restore Points have been turned on for your analysis machine.

2. Make sure you have access to the "System Volume Information" directory on your analysis machine. Use the following command to grant yourself access: cacls "<drive>:\System Volume Information" /E /G <username>:F

3. From the XP image you are analyzing, copy the restore point directory in the "System Volume Information" directory to the "System Volume Information" directory on your analysis machine. At this point you should see 2 directories named like _restore{GUID}. One will be your analysis machine's GUID and the other will be the one from the image.

4. You will now need to edit your registry. Go to the following entry: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\SystemRestore\Cfg and rename the field MachineGuid to MachineGuid_old. Next create a new String Value named MachineGuid, edit it, and put in the GUID that you copied from your image; use MachineGuid_old as a template if you need to, since the format of the 2 entries should be similar (a small script after these steps shows one way to do the swap).

5. Now run the srdiag.exe from the %SYSTEMROOT%\system32\restore directory. Once the program has completed you should see a cab file with your machine name on it. In the cab file there will be all kinds of good information for you to look at.

6. Finally delete or rename the MachineGuid registry entry and rename the MachineGuid_old back to MachineGuid and remove the directory from your "System Volume Information" directory.
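If you would rather script step 4 than click through regedit, the stock reg.exe tool can do the swap. This sketch backs up the whole Cfg key and then overwrites MachineGuid instead of renaming it, which amounts to the same thing and is easy to undo by importing the backup; the GUID value is a placeholder:

#! c:\perl\bin\perl.exe
# Sketch of step 4 using reg.exe: back up the Cfg key, then point MachineGuid
# at the GUID copied from the image. The GUID below is a placeholder.
use strict;
use warnings;

my $key = 'HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SystemRestore\Cfg';
my $image_guid = '{PUT-THE-GUID-FROM-THE-IMAGE-HERE}';

# keep a copy of the whole key so the original MachineGuid can be put back
# later (step 6) with: reg import cfg_backup.reg
system(qq{reg export "$key" cfg_backup.reg}) == 0
    or die "reg export failed\n";

# overwrite MachineGuid with the GUID from the image
system(qq{reg add "$key" /v MachineGuid /t REG_SZ /d "$image_guid" /f}) == 0
    or die "reg add failed\n";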

That is it in a nutshell. Enjoy looking at all the information provided to you by srdiag.

A New Beginning

Well, here is my first post. I was reading Harlan Carvey's latest windowsir blog post, took his advice, and am starting this blog. I know I do not have as much knowledge as others in the field and I am still constantly learning, but who knows, maybe I might be able to help one or two individuals; at the least I will hopefully get better at writing.

What I would like to accomplish with this blog is to pass along knowledge that either I or someone else has gained. If someone else passes the info along to me, expect to get credit; there is nothing I hate more than people passing along an idea and the originator not getting credit for it. I will try to post a couple of times a week but will not make any promises.

How did I come up with the title cfed-ttf? I was reading Jesse Kornblum's latest blog entry about naming tools and had to come up with something, so cfed is Computer Forensics/Electronic Discovery and ttf is Tips/Tricks and inFo. I tried to be creative, but sometimes it is hard.

Now on to the show ( The reason we are here):

Ever wonder what hard drives have been attached to an XP machine? Well, if restore points have been enabled then wonder no more. There is a file called drivetable.txt under the root restore point directory. This file contains a list of the hard drives that are attached when the computer boots up (from what I can tell so far). Now the cool thing is that under each restore point directory there is also a copy of the drivetable.txt file as of the time the restore point was taken. Hopefully you can see where I am going with this: since each restore point is a point in time, you should be able to see when a hard drive was and was not attached based on the date/time of the restore point, and build a time line of the hard drives attached to the computer. This works with USB hard drives as well.
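If you want to pull that time line together quickly, a short script can walk the RP directories and dump each drivetable.txt along with that file's timestamp. This is only a sketch; the _restore{GUID} path is a placeholder you would point at the copy you have access to, and it simply uses each file's last modified time as the "as of" time:

#! c:\perl\bin\perl.exe
# Sketch: build a time line of attached drives from the drivetable.txt files
# kept in each restore point. The restore root path is a placeholder.
use strict;
use warnings;
use POSIX qw(strftime);

my $restore_root = shift
    || 'E:\\System Volume Information\\_restore{YOUR-GUID-HERE}';

opendir my $dh, $restore_root or die "Cannot open $restore_root: $!";
my @rps = sort { ($a =~ /RP(\d+)/)[0] <=> ($b =~ /RP(\d+)/)[0] }
          grep { /^RP\d+$/ } readdir $dh;
closedir $dh;

foreach my $rp (@rps) {
    my $dt = "$restore_root\\$rp\\drivetable.txt";
    next unless -f $dt;

    # use the file's last modified time as the "as of" time for this snapshot
    my $mtime = (stat $dt)[9];
    print "=== $rp  ", strftime("%Y-%m-%d %H:%M:%S", gmtime($mtime)), " UTC\n";

    # note: the file may be stored as Unicode text, so the raw dump below
    # could need a conversion pass to read cleanly
    open my $fh, '<', $dt or do { warn "Cannot open $dt: $!"; next; };
    print while <$fh>;
    close $fh;
    print "\n";
}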

Feedback? Good or bad, who cares; I know I am not always right and I will admit it. If I have to be wrong to learn something then I can eat a little humble pie.