Greetings, my loyal readers. How are you both? Have you noticed that I’ve been uncharacteristically quiet for the past month or two? A combination of sheer overwork (are you listening, boss?), a much-needed holiday, and some fairly serious surgery has prevented me from sharing my prejudices with you. And look at all the things that have been going on… You’re no doubt well aware of the Race to Zero competition that’s on the menu for the Defcon 16 conference in August, where contestants will be given samples of malware and compete to be the first to make enough modifications to get their sample set past a range of AV products undetected. Thus proving that if you take a sample of known malware and spend some time modifying it, eventually you’ll get something that anti-virus products don’t recognize. Well, I bet you never realized that…
There’s no big surprise, either, in an idea like this coming out of Defcon. While the range of people who run and attend the event is wider than you might think, it does attract a good few security-fringe people who assume that the anti-malware industry is populated by dummies who know less about malware than anyone else, and who will take any opportunity to rub our noses in the fact that we don’t have the 100% cure for malicious code. (Oddly, I don’t see many people complaining that no-one has a 100% cure for other security problems, or, come to that, cancer, crime, or world hunger.) You might think, though, that readers of the world’s only significant specialist anti-malware publication might have a more balanced view. However, Virus Bulletin has run a survey on its web site asking "Are virus-writing contests a good idea?" and it appears that most respondents don’t see a problem with them. (Of course, people who visit the web site are not necessarily readers of the magazine.) A quote or two from Graham Cluley of Sophos summarizes some of the reasons the anti-virus community objects on principle to malware creation for test purposes.
"Ah," you may say, "but this isn’t a test like that Consumer Reports fiasco: it’s the skills of the contestants that are being tested." Sorry, but it doesn’t work quite like that. It’s inevitable that conclusions will be drawn about individual products according to how well they survive the accelerating modifications thrown their way. At least some of those conclusions will be inappropriate, because they’ll assume (1) the competence of the assessment of the malware, and (2) the incompetence of the anti-virus products.
Anyway, Graham made some trenchant points, though he might not agree with every nuance of my interpretation:
Something he didn’t directly comment on is the fact that a startling 46% of respondents apparently believe that virus-writing contests are a useful way of "highlighting issues with antivirus." Well, I guess that depends on whose attention the "issues" are being brought to. If there are still companies that are totally reliant on malware-specific signature detection, or end users who think that an anti-virus product can protect them from all known and unknown viruses so they don’t have to think about or contribute to their own security, then perhaps such contests are useful. However, if respondents think that this particular contest will add much in the way of instant enhancement to anti-malware technology, I’m afraid I don’t share their optimism.
[3rd bullet point slightly edited for grammar and clarity after the blog was published. Sorry for any confusion I may have generated.]
David Harley, ESET Research Author