Kevin Townsend posted a blog in response to a piece by Mike Rothman at Securosis. Mike’s piece on “The Death of Product Reviews” makes some pretty good points about security product reviews in general. Kevin’s piece is more specific to anti-malware. He too makes some useful discussion points about the value or otherwise of the WildList, but despite his experience of anti-malware testing, the landscape he describes is not quite the one I work in.
He suggests that magazine testing has been replaced by certification, but I’m not convinced that’s true. There are as many magazines doing comparative reviews as ever, though the ways in which they’re doing them have certainly changed: there are more reviews based on some form of dynamic detection testing, and the actual testing process is often outsourced. However, Kevin has chosen to focus on WildList testing, and suggests that “Anything less than 100% success should be seen as incompetence.” Well, given that there aren’t many mainstream companies that don’t have access to the WildCore collection that underpins the WildList, I can’t say that he’s entirely wrong.
However, if it were really that simple, those AV companies who have dropped out of Virus Bulletin tests because they couldn’t maintain a consistent VB100 score would still be submitting. No-one drops out of a widely recognized certification because it’s too easy.
The problems with WildList testing are well known, and some mainstream testers have effectively abandoned it. Most have supplemented or replaced it with some form of dynamic testing as recommended by AMTSO, which is fine in principle: sound dynamic testing should be a better reflection of today’s threatscape. Unfortunately, validation (sample validation, that is, not marketing validation as described by Kevin) has, in so many instances, gone down the plughole along with the bathwater.
WildList testing survives because it is still a differentiator (mainstream products shouldn’t usually fail a WildList test, as long as it is a genuine WL test, but they do). It also retains value because, in theory at least, it’s based on a validated test set (though it’s always been expected that WildCore recipients would do their own re-validation). If there’s one thing that the Kaspersky Lab experiment has unequivocally proved, it’s that pseudo-validation based on reputation source is not a substitute for the real thing. Testers who aren’t aware of that are likely to do their audience a real disservice.
David Harley CISSP FBCS CITP
Director of Malware Intelligence
ESET Threatblog (TinyURL with preview enabled): http://preview.tinyurl.com/esetblog
ESET Threatblog notifications on Twitter:
ESET White Papers Page: http://www.eset.com/download/whitepapers.php
Securing Our eCity community initiative: http://www.securingourecity.org/