Hybrid Detection: I have seen the future…

…and it’s still hybrid. Or multi-layered, if you prefer. What anti-malware companies (and malware authors, if it comes to that) are constantly doing is revisiting concepts that have worked before so that they fit the current environment better: there’s nothing wrong with an evolutionary approach, but changing the terminology doesn’t make it revolutionary. So what Larry Seltzer is describing in a recent eWeek article isn’t exactly groundbreaking technology; it’s what all anti-malware companies originating in traditional AV are doing, to a greater or lesser extent. "File reputation" is pretty much what we used to call integrity checking, and is close to a limited application of whitelisting that’s been in common use since the 1990s. The main difference is that whereas earlier incarnations of anti-virus tended to bundle an integrity checker as a separate application alongside a known-virus (signature) scanner, it’s now common to integrate whitelisting or a near equivalent into the main application. And in those days, no-one described their own server networks as a cloud. ;-)
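
For anyone who hasn’t met the mechanics, here’s a rough Python sketch of the whitelisting/reputation side of the idea (the hash set, the names and so on are purely illustrative, not any vendor’s actual implementation). The fingerprint is a cryptographic hash, and the question being asked is simply "is this a file we already know and trust?":

    import hashlib

    # Toy "known-good" list: hashes of files we already trust.
    # A real reputation service holds millions of entries server-side.
    KNOWN_GOOD = {
        # SHA-256 of an empty file, purely as a placeholder entry
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
    }

    def sha256_of(path):
        """Return the SHA-256 hex digest of a file's contents."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def reputation_lookup(path):
        """The whitelist question: is this a file we already know?"""
        return sha256_of(path) in KNOWN_GOOD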

What’s more interesting is Larry’s critique of various "classic methods" of malware scanning.

  • Clearly, we aren’t about to argue that it’s feasible to have a signature for "every new variant": that’s a model we moved away from many years ago.
  • I would argue, though, that "true heuristics" is an odd term to use for what we describe as "passive heuristics", where an object is scanned for malicious characteristics statically – that is, by looking at the code without executing it. (We have a fairly comprehensive, vendor-agnostic review of heuristic analysis on our white papers page, by the way.) The term heuristics has a far wider range of applications than that, even within the anti-malware industry, which uses it in quite a specialized sense. Clearly, there is still a place for this analogue to static analysis, as the term is used in forensics, but it isn’t nearly as effective as it used to be, because the bad guys use a variety of obfuscatory techniques to hide malicious code from signature and basic heuristic detection (there’s a deliberately naive sketch of the static approach just after this list). There’s nothing "untrue" about heuristic analysis that examines code while it’s actually running (an analogue to dynamic analysis): the point Larry seems to have missed is that when products like ours run that code, it’s in a protected environment, so (assuming a sound implementation of the product) the code being run shouldn’t present a risk to the system. (By the way, this is the second time in two days I’ve talked about static versus dynamic…)
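
To make the "passive" idea concrete, here’s that deliberately naive Python sketch of static scanning: inspect the bytes of a file for suspicious characteristics without ever executing it. The patterns, weights and threshold are invented for illustration; a real engine is vastly more sophisticated, and this is exactly the sort of matching that obfuscation is designed to defeat:

    # Deliberately naive passive heuristics: inspect bytes, never execute.
    # Patterns and weights are invented purely for illustration.
    SUSPICIOUS_PATTERNS = {
        b"CreateRemoteThread": 3,  # API name often seen in process injection
        b"VirtualAllocEx": 2,
        b"cmd.exe /c": 1,
    }
    THRESHOLD = 4  # arbitrary score at which we flag the file

    def static_heuristic_scan(path):
        """Score a file on suspicious byte patterns; True means 'flagged'."""
        with open(path, "rb") as f:
            data = f.read()
        score = sum(weight for pattern, weight in SUSPICIOUS_PATTERNS.items()
                    if pattern in data)
        return score >= THRESHOLD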

What interests me most, however, is his yearning for "a simple solution like absolute whitelisting." It does seem that we’re always looking for the 100% solution that will render current anti-malware solutions unnecessary, the way that firewalls, IDS, IPS, reputation services, NAC and a dozen other panaceas du jour were once seen as The Answer. But the fact is that whitelisting is itself hybrid (by which I mean that you can’t whitelist an application without using other technologies to confirm that it’s what AMTSO like to call "innocent"). And it works best as one layer of a defensive strategy, at any rate in the version of the internet in which we currently find ourselves.

David Harley CISSP FBCS CITP
Director of Malware Intelligence

Author David Harley, ESET

  • http://anti-virus-rants.blogspot.com kurt wismer

    i don’t think i’d call reputation systems equivalent to integrity checking… hashing is used to uniquely identify the file in order to make sure it’s the same file you purport to know the reputation of (which is also true for any whitelist worth its salt), but you aren’t actually checking the file’s integrity, since you don’t necessarily have the file’s original hash for comparison (such comparison is the foundation of integrity checking)…

  • http://www.smallblue-greenworld.co.uk David Harley

    I take your point, but I don’t altogether agree. Integrity checking as it was implemented in the early 90s usually started with the generation of a new checksum or hash for an object on a protected machine, not against an authoritative original fingerprint. (OK, there’s an assumption that a known-something scan hadn’t found that it contained malicious code.) Whether it’s an on-demand or realtime integrity scan, or even whether or not you integrate it into the main scanner, doesn’t affect that. You can certainly argue that "integrity checker" was therefore a misnomer for that type of software, but the term was too commonly used for me to apologise for using it here. :)
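
    To put that model in a rough Python sketch (purely illustrative, not any actual product’s code): generate a baseline fingerprint on the protected machine first, and only later compare against it.

        import hashlib
        import json

        BASELINE_DB = "baseline.json"  # illustrative filename

        def fingerprint(path):
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        def create_baseline(paths):
            """First run: record each file's current hash as the known state."""
            with open(BASELINE_DB, "w") as db:
                json.dump({p: fingerprint(p) for p in paths}, db)

        def integrity_check(path):
            """Later runs: has the file changed since the baseline?
            A file absent from the baseline simply fails the check."""
            with open(BASELINE_DB) as db:
                baseline = json.load(db)
            return baseline.get(path) == fingerprint(path)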

  • http://anti-virus-rants.blogspot.com kurt wismer

    i guess i don’t consider the creation of the comparative baseline to be the actual act of integrity checking, but rather a prerequisite…

    either way, hash comparison in integrity checking answers the question “is this file the same as it was before?” whereas hash comparison in whitelists or reputation systems is used more for answering the question “is this file one that i know?”… they both use some of the same underlying primitives, but how they use them/what they use them for is different (i.e. change detection vs. indexing a collection)… at least that’s my interpretation, though i suppose those two questions could be rephrased to be a lot more similar…

  • http://www.smallblue-greenworld.co.uk David Harley

    I don’t disagree with you. I just think that both questions take a very narrow view of integrity. Which could get us into the “what is the value of whitelisting” debate, but I’m probably not going to have time to get into that, due to imminent travel. :)

    Ah, shades of alt.comp.virus….
