Monday, October 5, 2009

Certifications are Evil.....By John McCash

Note: The following does not represent the opinion of Mark McKinnon. He merely had the good grace to allow me a forum in which to post it after it was respectfully declined (for obvious reasons) by the SANS Institute's Forensic Blog. I wrote it chiefly because I hadn't seen anything recently, or as I recall, ever, that so much as acknowledged any downside to certification. I respect the pro-certification viewpoint, but I do disagree with it. And so, without further ado...

Certifications are Evil

by John McCash

Folks, this is an opinion piece, and it's going to be a controversial one. Some of you started composing a scathing rebuttal as soon as you read the title. Normally I restrict myself to what I hope are useful technical tidbits, but like most of you out there, I'm a forensic practitioner, and I have little patience for time sinks that provide no benefit (no, I'm not including the training itself in that category; save your flames for the end). I've always begrudged the time commitment (over and above what's required to actually take the training and learn the included material) needed to attain certifications. Despite that, I'm in possession of five, soon to be six, not counting my master's degree, so I like to think I speak from some degree of experience.

I do understand the arguments used by the proponents of certification. In essence, they allow people who have no understanding of a technical discipline to discriminate between other people who do and don't have that understanding. At least that's what they're supposed to do. Let me list two of the most egregious counterexamples that I have found in my own personal experience (with no disrespect intended to either Microsoft or the International Information Systems Security Certification Consortium). I have met, in my career, an extraordinarily large number of clueless CISSPs and MCSEs. These are people who were apparently able to pass the test, but who were unable to, respectively, secure or administer their way out of wet paper bags. To state it in more general/inflammatory terms, one problem with certifications is the number of idiots who are in possession of them. On the flip side of this, I personally oversaw the hiring of a system administrator back in 1996 who had nothing but a High School Diploma and a clue. I still work with him on occasion, and his hiring was one of the smartest decisions I ever made.

One logical response to this issue is simply to make certifications more difficult to get, but there we run into a second fundamental problem. When a certification raises its difficulty in order to exclude a certain percentage of unqualified people, it also excludes a certain percentage of qualified people. As the difficulty rises, the incremental number of unqualified people being excluded gets smaller, while the incremental number of qualified people being excluded grows larger. The amount of work required to pass increases substantially as well. Qualified people get excluded for several reasons. For one, the more difficult a certification, the more training is typically required before attempting the exam. One forensic certification I heard about last week, the one which finally prompted me to write this posting, requires six months of training and six exams. That's a tremendous amount of time committed to obtaining a fancy certificate and some alphabet soup to put on your resume. Don't get me wrong, I'm not saying that training is useless. But what do you do if you're already in possession of 75% of the knowledge this training is intended to pass on? It's in the financial interest of the certification providers to make it more difficult to pass if you haven't attended their custom-designed training program. Review guides may be available, but they typically cover more material than the certification vendor's training, without the subtle emphasis that training often provides. The practical upshot is that an individual who knows 75% of the exam material off the top of his head, probably better than a graduate of the certification course will after six months or so, may still have to complete a long and expensive training course just to get to a point where he can reliably pass the exam. For many of us, it's simply not worth it.
We resign ourselves to being filtered out because we don't have the requisite alphabet soup, even though we're otherwise qualified.

You'd think that at some point an exam would filter out all the idiots, but that's much harder than it sounds. That's why IQ tests have fallen out of vogue, and why an actual interview is still the best way to select a new employee. This brings me to the third reason certifications, or more specifically certification exams, are bad. Many standardized tests consist of simple regurgitation of facts. They don't require that the subject really be able to think, just memorize. Personally, I believe that any idiot can pass such a test if they put sufficient time into preparation. It's possible to design questions that test problem-solving ability, but it's difficult. One tactic that's often resorted to, and this is a personal hot button of mine, is to provide the subject limited information, let him assume the rest, and make him pick the 'most reasonable' or 'best' solution from a list. The problem with this occurs when the test subject is smarter or knows more than the individual who designed the question. I personally have run into this several times on various certification exams (I got a couple of the questions changed), and I find it intensely frustrating.

Finally, certifications are bad because they provide lazy people with a tool that can be easily misused. Rather than read 100 resumes to determine the 15 most qualified for a particular position (which he may lack the expertise to do anyway), an HR person can simply filter out all those lacking a specific certification. If this still results in a number of resumes that is too large, he can filter on another certification. This sort of data reduction can easily remove more qualified people than unqualified. In my opinion, it's better to pass all 100 resumes down to the hiring manager.

Certifications are bad for hiring managers, because they reduce their pool of qualified candidates, and they're bad for the candidates, because they enable those candidates' resumes to be filtered out before the manager sees them. In the end, they provide the most benefit to the vendors who provide them and their associated training, and to HR organizations, who are able to get by with fewer and less expert people.

Once a certification is accepted as required in a certain area, people who lack training in that area can use that fact to obtain the training. The downside is that people who are already qualified must sometimes forgo more advanced training to take a course just to get the certification. I'm not suggesting they don't learn anything in this training, but typically it will be much less than they could have learned had they been able to attend training of their choice.

So, you might ask, what's the alternative? Isn't there some other low-overhead way to reliably tell whether a candidate knows anything about a given specialty without actually reading his resume or interviewing him? Well, I have a suggestion. Maybe somebody out there can make it work. It's based on word of mouth and the PGP web of trust. Basically, there are a number of people whose word I trust if they say somebody has a clue. If everybody had one or more PGP keys with a comment that said "I am an expert in X", then people could sign that key, and the subject could publish the result. If Rob Lee, Ed Skoudis, & Josh Wright all say I'm an Uber Geek (and I'd like to think they might), I tend to think most people would buy into it. Maybe we could call this the web of cluefulness.
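To make the web-of-cluefulness idea concrete, here is a toy sketch of how a reviewer might evaluate endorsements. This is purely illustrative: all the names, the data structure, and the threshold are hypothetical, and a real system would verify the PGP signatures rather than trust a plain dictionary.

```python
# Toy sketch of the "web of cluefulness": an endorsement is a signed
# claim ("X vouches that Y is an expert in Z"), and a reviewer accepts
# a candidate if enough people he already trusts have vouched for him.
# All names and the threshold here are hypothetical examples.

endorsements = {
    ("alice", "forensics"): {"rob", "ed", "josh"},
    ("bob", "forensics"): {"mallory"},
}

def is_clueful(candidate, skill, trusted, threshold=2):
    """Return True if at least `threshold` trusted people vouch
    for the candidate's skill."""
    vouchers = endorsements.get((candidate, skill), set())
    return len(vouchers & trusted) >= threshold

trusted = {"rob", "ed", "josh"}
print(is_clueful("alice", "forensics", trusted))  # True
print(is_clueful("bob", "forensics", trusted))    # False
```

The point of the threshold is the same as key signing in the PGP web of trust: one endorsement is cheap, but several independent endorsements from people the reviewer already trusts are hard to fake.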

As always, please feel free to leave commentary if you liked this article or want to call me on the carpet for some inaccuracy.

Let the flames commence!

John

Tuesday, August 25, 2009

Decoding the DateCreated and DateLastConnected SSID values From Vista/Win 7

This information was provided to me by Longshot (Just passing this great information along).

Decoding the DateCreated and DateLastConnected registry values from the registry keys

SOFTWARE\Microsoft\Windows NT\CurrentVersion\NetworkList\Profiles\{GUID}

In Vista and Windows 7

The DateCreated and DateLastConnected values are binary data that can be broken up into 2-byte (16-bit) parts, with one part left over. Each 2-byte part corresponds to one component of a date. The order of the components is as follows:

Year
Month
Weekday
Day
Hour
Minutes
Seconds

Each of these 2-byte parts is stored little-endian. Using the following data, unpacked from binary and converted to hex, we get the following translation:

d9070200020018001700140025000001

d907 0200 0200 1800 1700 1400 2500 0001


Year = h4 = d907 = 07d9 = 2009

Month = h4 = 0200 = 0002 = 2 {Jan = 1, Feb = 2, etc.}

Weekday = h4 = 0200 = 0002 = 2 {Sunday = 0, Monday = 1, etc.}

Day = h4 = 1800 = 0018 = 24

Hour = h4 = 1700 = 0017 = 23

Minutes = h4 = 1400 = 0014 = 20

Seconds = h4 = 2500 = 0025 = 37

The Month and Weekday numbers then have to be converted to their proper month and weekday names,

which would yield the following:

Date First Connected: Tuesday, 24 February 2009 23:20:37
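This layout is the same as the Win32 SYSTEMTIME structure: eight little-endian 16-bit words, with the leftover 2-byte part being the milliseconds field. As a cross-check of the worked example above, here is a quick Python sketch of the decode:

```python
import struct

# The sample value from above, decoded as eight little-endian 16-bit
# words (the SYSTEMTIME layout; the leftover word is milliseconds)
raw = bytes.fromhex("d9070200020018001700140025000001")
year, month, weekday, day, hour, minute, second, ms = struct.unpack("<8H", raw)

MONTHS = ["January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December"]
WEEKDAYS = ["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday",
            "Friday", "Saturday"]

stamp = (f"{WEEKDAYS[weekday]}, {day} {MONTHS[month - 1]} {year} "
         f"{hour:02d}:{minute:02d}:{second:02d}")
print(stamp)  # Tuesday, 24 February 2009 23:20:37
```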


Here is the Perl code to do the above. I include $data only as a placeholder; it would need to be fed the binary data read from the registry (likewise $n, which holds the name of the registry value being decoded):


use strict;
use warnings;

# This is the binary data that would be read from the registry file
# (placeholder -- it needs to be fed the raw value data)
my $data = "";

# Placeholder for the name of the registry value being decoded
# ("DateCreated" or "DateLastConnected")
my $n = "";

my %month_type = (1  => "January",
                  2  => "February",
                  3  => "March",
                  4  => "April",
                  5  => "May",
                  6  => "June",
                  7  => "July",
                  8  => "August",
                  9  => "September",
                  10 => "October",
                  11 => "November",
                  12 => "December");

my %dayofweek_type = (0 => "Sunday",
                      1 => "Monday",
                      2 => "Tuesday",
                      3 => "Wednesday",
                      4 => "Thursday",
                      5 => "Friday",
                      6 => "Saturday");

# Unpack seven 2-byte fields as nibble-swapped hex strings; reversing
# each string and running it through hex() turns the little-endian
# bytes into an integer
my ($year, $month, $weekday, $date, $hour, $minute, $second) =
    unpack("h4 h4 h4 h4 h4 h4 h4", $data);

my $finalyear    = hex(reverse $year);
my $finalmonth   = $month_type{hex(reverse $month)};
my $finalweekday = $dayofweek_type{hex(reverse $weekday)};
my $finaldate    = hex(reverse $date);

# Zero-pad the time components to two digits
my $finalhour   = sprintf("%02d", hex(reverse $hour));
my $finalminute = sprintf("%02d", hex(reverse $minute));
my $finalsecond = sprintf("%02d", hex(reverse $second));

my $ssidtimestamp = "$finalweekday, $finaldate $finalmonth $finalyear " .
                    "$finalhour:$finalminute:$finalsecond";

my $finaln = ($n =~ /Created/) ? "Date First Connected:" : "Date Last Connected:";

print "$finaln $ssidtimestamp\n";

Friday, August 7, 2009

Update Skype Log Parser..........

I know this whole blog has gotten pretty stale as there have not been any posts in a loooong time. Well I am going to try and remedy that with some good posts in the coming weeks.

Well, the Skype log parser, which is my most downloaded tool, has gone through a few updates since I last posted at the end of last year/beginning of this year. The current version is 1.7. A few of the notable changes are (for the full list see change_log.txt):

1. Ability to search for the log files from the gui.
2. Skype 4.x is now supported.
3. The ability to merge 2 cases into 1 report to compare the reports.
4. Ability to cancel the program at anytime.
5. Ability to parse the iTunes iPhone/iPod Touch backup files and get the skype log files if skype is installed.
6. Fixed an infinite loop that occurred when a record in a UserXXXXX.dbb file was truncated.

The new program can be found here. I have also created a new email account that I would like to use for support and to send out emails to users when I update the program. If you would like to receive updates about the Skype log parser, send an email to skype-parser at redwolfcomputerforensics dot com. Comments, good or bad, and requests for enhancements are encouraged.

As always Thoughts/Comments/Questions.........

Tuesday, April 14, 2009

SANS "WhatWorks in Forensics and Incident Response Summit" in July

The agenda is out and it looks to be a fantastic lineup of expert briefings and panels. The summit will be in Washington DC July 7 and 8, 2009. I was lucky enough to be chosen to be on the "Essential Forensic Tools" panel. With me on the panel are some of the big names in the Forensic/IR community. They are:

Jesse Kornblum, who has made significant contributions with the free tools he has provided (MD5Deep, SSDeep, Miss Identify, and others) as well as the excellent papers he has written ("Using Every Part of the Buffalo in Windows Memory Analysis" and "Implementing BitLocker Drive Encryption for Forensic Analysis", among others). Jesse also has a blog that can be found here.

Troy Larson, who is the Senior Forensic Investigator with Microsoft’s IT Security Group. Troy has presented many times at different conferences (Recovering Information from Deleted Security Event Logs, Vista Shadow Volume Forensic, etc.) and is a coauthor of the Handbook of Computer Crime Investigation: Forensic Tools and Technology.

and finally

Lance Mueller of the blog Computer Forensics, Malware Analysis and Digital Investigations. Lance has provided many EnScripts on his blog to be used by all. I do not use EnCase, but I have learned many things by looking at the EnScripts that Lance has developed; they have provided me insights into many areas of computer forensics.

I look forward to joining this panel of experts who have distinguished themselves in the field of Computer Forensics and Incident Response, as well as meeting quite a few people with whom I have had the privilege of trading ideas and emails.


As always Thoughts/Comments/Questions.........

Wednesday, February 18, 2009

Gmail offline...

Not too long ago someone brought to my attention that Gmail was offering the ability to take your Gmail account offline. What this means is that you can look at all the e-mail that has been synched even when you are not connected to the net; sorry to say, you cannot send or save e-mails at this time, as far as I can tell. I have come up with a parser for your Gmail offline account. It only does the basics right now, but we will look to add more in the future. Some of the sample reports are:

1. Contact information
2. Email conversations with hyperlink to email
3. Word Xref to email

That last one has 2 flavors depending on the option you pick when you run the parser. In the gui if you choose not to create the e-mail xref report then you will only get a report with all distinct words in the emails. If you choose to create the e-mail xref report then it will create the report with distinct words and those words will be hyperlinked to a report that will show each e-mail that the word appears in. This may take a while depending on how big the mailbox is, but it is pretty cool.
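Under the hood, a word cross-reference like this is just an inverted index: a map from each distinct word to the set of e-mails it appears in. A minimal sketch of the idea (the sample e-mails and IDs here are made up; the real report would also generate the hyperlinks):

```python
import re
from collections import defaultdict

# Hypothetical sample data: message ID -> message body
emails = {
    "msg001": "Meeting moved to Friday",
    "msg002": "Friday works for me",
}

# Inverted index: each distinct word -> set of message IDs containing it
xref = defaultdict(set)
for msg_id, body in emails.items():
    for word in re.findall(r"[a-z0-9']+", body.lower()):
        xref[word].add(msg_id)

print(sorted(xref["friday"]))  # ['msg001', 'msg002']
```

Building the index is one pass over the mailbox, which is why the report generation time grows with mailbox size.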

The program can be found here.

As always Questions/Comments/Suggestions.

Monday, February 9, 2009

Updated Prefetch Parser......

I have updated the prefetch parser so it will now read all the prefetch files in a directory. It will produce a main report that will show each prefetch file, the actual file name, the number of times run and the embedded date/time. You can also click on the prefetch file name and see the dll/files that were loaded when the program was run. This will work for XP, 2003 and Vista. The new program can be found here. I have left the old program out there as well in case you still want to parse a single file.
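For anyone curious where those fields live in the file, here is a rough sketch of pulling the executable name, run count, and embedded last-run time out of an XP/2003-format prefetch file (format version 17). The offsets used (0x10 for the name, 0x78 for the FILETIME, 0x90 for the run count) come from public reverse-engineering of the format, not an official specification, and the buffer below is synthetic test data rather than a real prefetch file.

```python
import struct
from datetime import datetime, timedelta, timezone

def parse_pf_v17(buf):
    """Parse selected fields from an XP/2003 (version 17) prefetch header.
    Offsets are from community reverse-engineering of the format."""
    version, sig = struct.unpack_from("<I4s", buf, 0)
    assert sig == b"SCCA" and version == 17
    # Executable name: UTF-16LE string at 0x10, NUL-terminated
    name = buf[0x10:0x10 + 120].decode("utf-16-le").split("\x00")[0]
    filetime, = struct.unpack_from("<Q", buf, 0x78)   # last run time
    run_count, = struct.unpack_from("<I", buf, 0x90)  # times executed
    # FILETIME: 100-ns intervals since 1601-01-01 UTC
    last_run = datetime(1601, 1, 1, tzinfo=timezone.utc) + \
        timedelta(microseconds=filetime // 10)
    return name, last_run, run_count

# Build a synthetic header just to exercise the parser
buf = bytearray(0x100)
struct.pack_into("<I4s", buf, 0, 17, b"SCCA")
buf[0x10:0x10 + 22] = "NOTEPAD.EXE".encode("utf-16-le")
struct.pack_into("<Q", buf, 0x78, 128940000000000000)  # a mid-2009 FILETIME
struct.pack_into("<I", buf, 0x90, 5)

name, last_run, run_count = parse_pf_v17(bytes(buf))
print(name, run_count)  # NOTEPAD.EXE 5
```

Vista uses format version 23 with different offsets for the same fields, which is why a parser has to branch on the version number at offset 0.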

As always Questions/Comments/Thoughts?

Tuesday, January 27, 2009

Internet Parser Update

In honor of Randy G (see this post) and the fact that I found a new browser, I have updated the Internet parser program to include the Flock browser, which is based on the Mozilla framework. Now, I have not done an extensive analysis on it yet, but I have done enough to know it fits right in with Firefox 3.x and Google Chrome, as it uses SQLite to store its history and other files. One thing to note is that there is a new report called Form History. This comes from a new database that Flock uses to keep data that was entered into any forms. That is about all I know about the forms at this point. There are quite a few new databases that Flock uses, and I will have to test them out to see what data points can be pulled out.
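Since the data lives in SQLite, a few lines of Python are enough to poke at a form history database. The table and column names below (moz_formhistory with fieldname/value, as in Firefox's formhistory.sqlite) are an assumption based on Flock's Mozilla heritage; verify them against a real Flock profile before relying on them. The example uses an in-memory database with made-up data so it runs standalone.

```python
import sqlite3

# In practice, connect to the profile's form history .sqlite file;
# an in-memory DB with fabricated data stands in for it here.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE moz_formhistory (id INTEGER PRIMARY KEY, "
            "fieldname TEXT, value TEXT)")
con.execute("INSERT INTO moz_formhistory (fieldname, value) "
            "VALUES ('email', 'user@example.com')")

# Dump every saved form field and the value the user typed into it
for fieldname, value in con.execute(
        "SELECT fieldname, value FROM moz_formhistory ORDER BY fieldname"):
    print(f"{fieldname}: {value}")
```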

So in honor of Randy G. here is the download.

Questions/Comments/Suggestions?