Negative Values: Racing Past Zero

Well, there’s not much doubt about the SecurityFocus view of the Race to Zero event. A report by Robert Lemos is festooned with advertising that states “If you want to stop a hacker…you have to act like one.” Perhaps Symantec, who own SecurityFocus, can afford to be relaxed about the event, since their scanners weren’t represented in the test panel. All that apart, what are we actually learning as we pass zero?

Well, according to organizer Simon Howard, we’ve learned that pattern-based detection isn’t working. Well, no, Simon: signatures actually work very well against known viruses. Do you really think it’s unnecessary to detect old viruses (maybe you should read our earlier blogs on Angelina and Helkern, or Kurt Wismer’s comment piece in the August issue of Virus Bulletin), or are you insisting that we should detect them heuristically? Given ESET’s expertise in heuristics, we’re not going to deny its importance, but in some contexts, signature detection can actually save a lot of processing time, depending on how it’s implemented. And who on earth told you that server-hosted anti-malware doesn’t use behavior analysis?
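To see why signature matching is so cheap against known samples, here is a minimal sketch of the idea in Python. The signature names and byte patterns are entirely hypothetical, for illustration only; real scanners use far more sophisticated representations than raw substrings.

```python
# Minimal sketch of signature-based scanning: look for known byte
# patterns in a file's contents. Names and patterns are made up.

SIGNATURES = {
    "Example.Stoned.A": b"\xea\x05\x00\xc0\x07",  # hypothetical boot-sector bytes
    "Example.OldWorm":  b"MZOLDWORM",             # hypothetical worm marker
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]
```

A scan of a known sample is a handful of substring searches, which is why detecting old, unmodified viruses this way is both reliable and fast; the weakness the contest exploited is that any change to the matched bytes defeats that exact pattern.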

We’ve also learned that antivirus researchers started using “behavioral detection” in 2006. Except that some of us have been using proactive techniques for many more years than that.

Furthermore, we’ve learned that there’s a certain amount of confusion out there about what constitutes a variant, what is meant by “in the wild”, and what the differences are between an exploit, a vulnerability, and a worm. (I think I’ll tackle those issues in other blogs: this is already taking rather too much of a Sunday afternoon when I’d rather be watching the Olympics).

So, so far, we’ve learned that if you modify malware – whether it’s an old soldier like the Stoned boot sector infector or a recent troublemaker like Virut (a polymorphic which even without handcrafting still causes some products real difficulty) – you can very quickly tweak it enough to hide it from any scanner you target. (Actually, you don’t even have to modify it by hand, but we already knew that, and the chances are that you did, too.)

Still, as Simon says (you have no idea how careful I’m being not to take a cheap shot here), the whole exercise is worth it if all those Moms and Dads who avidly follow SecurityFocus (apparently) change their anti-virus settings to activate the “behavioral features”. Except, of course, that it’s rather unusual to find a mainstream anti-virus scanner that uses only signature detection, even in its least paranoid settings.

Well, that’s enough goading of the anti-AV crowd, who are, no doubt, now queueing up to heap abuse upon me. Here’s a wholly serious thought.

When this contest was first publicised, I said that no mainstream anti-virus company would take part directly because of the possible damage to their reputation as an ethical organization. That seems to have been largely true, but one of the teams represented in the contest turns out to have consisted of researchers from a security firm that does operate on the fringes of the anti-malware community, though they don’t actually have their own AV product. I find it a little sad that they’ve endorsed this competition to the extent of actually participating, without at least acknowledging the legitimate concerns of the anti-malware community as a whole. Well, perhaps they did, but those comments weren’t quoted. From some other comments that were made, though, I suspect that they simply weren’t aware of them. :(

David Harley
Malware Intelligence Team


  • Matt


    Having been a direct participant in the event, I’d like to address your post. First, I’d like to say that I recognize your concerns; however, I do not think you did enough research into the contest or its results. I also find the singling out of our team a bit unfair, given that 3 out of 4 teams that completed the contest were “on the fringes of the anti-malware community” and don’t actually have an AV product.

    There are two fundamentally different camps with regards to the ethics of writing or enhancing malicious code for fun, profit or malice. The debate boils down to the commandment “Thou shalt not create new computer viruses”. As you will see in my last few sentences, this was not done, and the furor over the contest is mainly unfounded. Many other talks at Defcon were more damaging to AV companies, their revenue streams and consumers than Race to Zero.

    In one camp are the “old school” anti-virus researchers who feel that under no circumstance should anyone for any reason attempt to write a virus, modify malicious code or perform any other task that otherwise creates new malicious code. In his classic book on the topic, Peter Szor condemns the act of writing malicious code even if the motive is to improve products. This argument has merit, in the context of AV companies inventing new malicious threats as a competitive advantage.

    On the other side are more progressive elements who believe there are valid reasons for engaging in the authoring of malicious code. Several examples, such as the 2006 Consumer Reports study, two colleges offering virus-writing courses, and the recent Race to Zero, have shown that AV vendors still get pissed off about any violation of the commandment.

    From my perspective, the AV community is the only one that finds the use of “offensive” or “think like the enemy” learning tactics to be taboo. For example, IDS vendors are not up in arms about Metasploit, and the APWG has yet to complain about the new training tool.

    Anyone who looks closely at both the content and outcome of the Race to Zero should realize two things: a) the contest was based on old viruses and posed almost no threat to AV vendors, even after the samples were successfully rendered undetectable, and b) the app-armoring talk by the Mandiant fellow did far more to damage the AV industry than Race to Zero did.

    As stated by Simon (the coordinator), most of the teams chose to use “droppers” rather than “packers”. Droppers do not effectively defeat anti-virus; rather, they “drop” another piece of code which can then be detected by AV. In the contest we created zero new strains of any virus, though we did, arguably, create a new variant of SQL Slammer by modifying the shellcode (an exploit, not a virus). Our technique left all samples detectable by AV products after being dropped on the system.
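The dropper point above can be sketched in a few lines. This is a toy model, not anything used in the contest: the “payload” is just a hypothetical known byte pattern, and the “dropper” merely base64-encodes it, so a static scan of the dropper file misses the pattern, while the payload it writes out is byte-identical to the known sample and therefore still detectable.

```python
# Toy model of why a dropper doesn't defeat detection of its payload:
# the wrapper hides the known bytes from a static scan, but executing
# it reproduces the original, detectable payload. All names are
# hypothetical.
import base64

KNOWN_SIGNATURE = b"OLD_VIRUS_BODY"  # the byte pattern the scanner knows

def make_dropper(payload: bytes) -> bytes:
    """Encode the payload so the dropper file itself no longer
    contains the known byte pattern."""
    return b"DROPPER_STUB:" + base64.b64encode(payload)

def drop(dropper: bytes) -> bytes:
    """Simulate the dropper running: decode and 'write out' the payload."""
    return base64.b64decode(dropper.split(b":", 1)[1])

dropper = make_dropper(KNOWN_SIGNATURE)
assert KNOWN_SIGNATURE not in dropper         # static scan of the dropper misses it
assert KNOWN_SIGNATURE in drop(dropper)       # but the dropped file is detectable
```

This is also why on-access scanning matters: even if the wrapper slips past a static scan, the moment the payload hits disk or memory in its original form, the old signature applies again.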



  • Thanks, Matt. It’s a pleasant change to see the other side of the argument in a civilized manner.

    I actually spent a great deal of time researching the competition initially, but if you’re pointing out that I wasn’t there, I have to plead guilty: I can’t be everywhere! As the blog makes clear, I was commenting here on a specific report by SecurityFocus, though I’ve talked to a number of people who were present at the event. Until the organizers issue their promised paper/presentation, I’m not sure what else you’d like me to do. However, if I gave the impression that I was blaming iDefense for issues that should have been attributed elsewhere, let me apologise unreservedly. You were only “singled out” because you were the only team that was named and quoted in the Lemos report.

    Let me address some of your specific points.

    1) The point is not that you don’t have an anti-malware product (except in so far as a service is a product): there’s nothing intrinsically blameworthy or praiseworthy in that. The point is that you are seen in some respects as part of the anti-malware community, but have acted in a way that mainstream antivirus companies generally wouldn’t. Of course, there’s no law that says you have to. But since you -were- named and quoted, I don’t see it as unreasonable to point out that you didn’t.

    2) I see you’ve read the gospel according to Juergen Schmidt ;-) I don’t agree with many of his points, but they’re worth reading at first hand (–/features/77440). However, there is no commandment that says “Thou shalt not create new viruses, but taking old viruses and modifying them is perfectly OK.”

    A modified virus may or may not be, technically, a variant, but it’s almost certainly going to be regarded as unethical by most people in this industry. If a modification is intended to circumvent anti-virus detection, it’s going to be regarded as “bad”. You’re perfectly entitled to disagree with that viewpoint, but not to tell me that there isn’t a problem. :)

    (3) I don’t remember Peter’s book being nearly so prescriptive, but it’s been a while since I read it right through. Perhaps you could provide a page reference?

    That said, I don’t think the book encapsulates the ethical standpoint of the entire anti-malware industry, any more than mine do. :) There are indeed many researchers who will not create replicative code under any circumstances, and, contrary to what is often suggested, that group includes some of the best minds in the business. There are others who are less dogmatic, and admit the possibility of writing potentially malicious code for constructive purposes. However, if mainstream researchers are creating code that could be used maliciously, they aren’t doing it publicly, and they do it under tightly controlled conditions.

    Your characterization of AV vendors is at best incomplete. There are actually several reasons why the research community doesn’t like the unrestricted and inappropriate creation of replicative or (potentially) malicious code:
    * The strong ethical objection many people hold
    * The safety issue – and I do get the impression that the competition organizers, while acknowledging that issue, did not take particularly stringent precautions, even compared to one of the academic sites teaching virus writing that you mention.
    * The many technical problems that can arise when new malware is created that makes unwarranted assumptions about how security programs work.
    * The socialization issue: every time malware is created unnecessarily, publicly, and/or inappropriately, it opens the door a little wider for others to do the same, possibly with less care and reason.
    * The informational issue: it seems that whenever an event, test, or academic course involves writing malicious code, it also generates misinformation about what researchers think, how the technology works, and the implications of the event for the world at large. You, for instance, have presented a partial and misleading picture of the view from AV. There’s a big difference between being reluctant to write unnecessary malcode and being reluctant to “think like an attacker”: researchers spend a lot of time doing the latter, but writing actual malcode is not a necessary component of that mindset. You’ve also suggested that your approach to repackaging has no dangerous implications for end users. This is manifestly untrue: any attack that subverts the scanner’s expectations of how a known attack will be delivered is potentially dangerous. If you don’t understand why this is the case, you haven’t read Peter Szor’s book carefully enough, and you aren’t aware of the continuing impact of obsolescent malware on present-day systems.

    I would agree, as it happens, that the anti-malware industry should take its share of responsibility for not communicating its concerns properly. I’m hoping that forthcoming documentation from AMTSO will go some way to addressing this problem.

    So how is your assertion that “Our technique left all samples detectable by AV products after being dropped on the system” compatible with the concept behind the contest? From the Race to Zero web site: “The first team or individual to pass their sample past all antivirus engines undetected wins that round.” It seems to me that you’re saying that:
    * Your team didn’t meet the terms of the challenge
    * The methodology for assessing success in meeting the terms of the challenge was ineffective
    * The report by Robert Lemos was inaccurate in more ways than I’d realized.

    Yes, there’s still a disparity between the mindsets of AV traditionalists and the wider security research community. But comparing the creation and modification of malcode to the use of a training awareness tool strikes me as a little desperate. ;-)

    Strangely enough, I’m not particularly worried about the potential for damage of the contest: I’m fairly sure the industry will survive it, and I long ago learned to live with the fact that there are a lot of people out there who would like to cause damage. I think it’s naive not to notice an anti-AV agenda behind the contest, but more so to over-estimate its likely impact. The industry has been misunderstood, misrepresented and disliked for as long as I’ve been watching it, and long before I joined it. The fact that it’s still here suggests to me that, for all its faults, an awful lot of people prefer mainstream AV technology to other forms of malware management, or, better still, see a place for it in a multilayered defence strategy. That preference may partly derive from the fact that the industry has adapted and evolved far more effectively than you give it credit for.

    David Harley
    Malware Intelligence Team

  • Having had time to consider this (and having had extensive discussions in various forums) I’m inclined to think I may have been a little too harsh, here. I can see the attraction of subverting the aims of the contest, and I suppose it’s not incompatible with the Defcon ethos. As someone working for an anti-malware vendor I would still have problems in participating in a contest like this personally, as would most of my peers, but I realize that not everyone feels bound by the same ethical or Code of Conduct constraints that many researchers of my generation are.

    I still feel that the message most people will carry away from this event is that “Anti-virus doesn’t work.” We can debate how much truth there is in this statement – certainly AV is at best a part of the solution – but it seems a pity that people will see this as vindication of a contest which actually failed to make even that point effectively.
