Comments on: Carbon Dating and Malware Detection
http://www.welivesecurity.com/2012/08/23/carbon-dating-and-malware-detection/
News, Views, and Insight from the ESET Security Community

By: David Harley (Fri, 24 Aug 2012 11:45:15 +0000)
http://www.welivesecurity.com/2012/08/23/carbon-dating-and-malware-detection/#comment-1087

I'm afraid you misunderstand me. What I'm saying is not that we can't detect anything we haven't already seen, but that we can't detect what isn't covered by an existing detection. An indeterminate percentage of unknown malware is detected by existing signatures.

As for reliability, you're preaching to the converted. I've never said that AV is anything like 100% reliable, even in the days when malware was much rarer and detection rates were much higher. The problem is that I have yet to see an alternative that is 100% reliable, or anything like, without hampering business processes.

I do think that AV is a useful option as part of a multilayered defence strategy in the enterprise. (Home users may find it more convenient to use an internet security suite, though they can also mix and match components if they know what they're doing.) AV isn't the only option, and it's a long way from being complete protection, but if you're going to give up on it, you need to suggest a viable – in fact, better – alternative.

In fact, vulnerability details from a vendor are a more reliable source of data than samples picked up in the field, and there’s a channel of communication between MS and the mainstream security industry for that purpose. Of course we can and do base detections on exploitative malware found in the wild, but that’s haphazard and can be resource-intensive.

How would EPP make the samples more representative? (Achieving that is the whole problem with AV testing in a nutshell, and I have yet to see a fully satisfactory methodology that works in a 21st century threatscape.) Aiming to replicate the approach without addressing that is a blind alley.

And I'm not sure that your view of what constitutes heuristics and mine coincide. The days when heuristic scanning was cautious and strictly optional because we were terrified of FPs are pretty much behind us. Cloud-based detection is heavily reliant on generics. Not that FPs aren't still a problem…

By: Brian (Thu, 23 Aug 2012 17:30:45 +0000)
http://www.welivesecurity.com/2012/08/23/carbon-dating-and-malware-detection/#comment-1086

"Well, by definition we can't detect a sample we haven't developed a signature/detection for." <- This is exactly the problem, and the reason that people often complain about the effectiveness of anti-malware products. Most malware that gets onto a machine (based on my experience) is "new". Also, in most cases, if it is detected at all, it will be detected after several days, not when the malware was dropped, as I think many people expect.
 
You sometimes blog about people questioning the usefulness of anti-malware software. I think there is good reason to question it. It doesn't detect exploits reliably, it doesn't detect new samples very well, and many vendors say they detect and protect against a threat like Zeus or Poison Ivy when what they really mean is that they can only detect the samples that they have collected, which is somewhat misleading. On the other side of the argument are AV software advocates who might suggest that because it is sometimes successful, it is worth having (and investing time and money in).
 
"I don’t think you can compare AV to a specialist vulnerability scanner." <- I don't expect an anti-malware solution to detect vulnerabilities but I think it is reasonable to expect it to detect malicious files like pdf, office and java exploits.  I don't think it is necessary to have vulnerability details from the vendor of the vulnerable software – it would be far easier to collect samples of the exploits and develop detection that way (which is how I guess most vendors do it).
 
So the file-based scanning that VT uses may not be entirely representative.  How different would the results be if someone were to conduct the same type of test with a bunch of Windows desktops running full versions of the EPP software?  Based on your response that you can't detect a sample that you don't have a detection signature for, I'm inclined to believe that the results would be pretty similar (or at least not substantially different).  Maybe some solutions might have better heuristic detection but I doubt that would be typical in a default configuration. 
 
Interesting challenge for sure. I think it would be worth investing some time into this as well. Thanks a lot for your thoughtful response.

By: David Harley (Thu, 23 Aug 2012 16:03:31 +0000)
http://www.welivesecurity.com/2012/08/23/carbon-dating-and-malware-detection/#comment-1085

Well, by definition we can’t detect a sample we haven’t developed a signature/detection for. :) We do have generic/heuristic detections that detect some samples we’ve never seen, but we can never detect all the malicious binaries we’ve never seen. Or anything like all of them. That’s the name of the game I’m afraid: if a security vendor tells you that its product detects all known and unknown samples, shake your head pityingly and walk away.

Exploits are a different ballgame. AV – well, some AV – often detects exploits. Especially Microsoft exploits, since MS is pretty good nowadays about sharing info about known vulnerabilities ahead of patches being available. Other vendors, obviously, are more problematical. However, I don’t think you can compare AV to a specialist vulnerability scanner. And if you’re thinking about a certain bungled test, think again. :)

You’re right up to a point, though: when a vendor says ‘we detect threat-X’ they mean they detect all the samples they currently know of. They can’t promise to detect all obfuscated or tweaked samples of the same base code.

The term signature is misleading (and always has been) because it encourages people to think in terms of static strings and/or static analysis. Actually, even those approaches are algorithmic: it’s just that looking for a static string isn’t a very sophisticated algorithm. :) Frankly, I don’t know what the overall proportion of detected to non-detected samples is for a given product, let alone for the whole gamut of AV products. And for the reasons discussed in the blog, it’s even harder to assess whether it really matters for a given example.
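To illustrate the distinction, here is a deliberately trivial sketch – the byte pattern, indicator strings, weights and threshold are all invented for this comment, and bear no resemblance to any real engine:

    # Toy sketch only: every pattern, weight and threshold below is made up.
    SIGNATURE = b"\xde\xad\xbe\xef\x13\x37"  # a hypothetical static byte string

    def static_string_scan(sample: bytes) -> bool:
        """The naive 'signature' model: match one fixed byte string."""
        return SIGNATURE in sample

    # Hypothetical indicators a generic/heuristic detection might weigh up.
    INDICATORS = {
        b"VirtualAlloc": 2,        # memory allocation often used for injection
        b"IsDebuggerPresent": 1,   # a common anti-analysis check
        b"http://": 1,             # an embedded download URL
    }
    THRESHOLD = 3

    def heuristic_scan(sample: bytes) -> bool:
        """A crude 'generic' model: score weighted indicators and flag anything
        over a threshold, so it can fire on binaries never seen before."""
        score = sum(weight for pattern, weight in INDICATORS.items()
                    if pattern in sample)
        return score >= THRESHOLD

    # A sample containing no known signature can still trip the heuristic.
    unseen = b"...IsDebuggerPresent...VirtualAlloc...other code..."
    print(static_string_scan(unseen), heuristic_scan(unseen))  # False True

Both are algorithms; the second merely has a chance of firing on a sample nobody has ever submitted, which is all a generic detection can promise.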

VT is not suitable for research like this, and I don’t think this qualifies as a test. It gives me a headache thinking about how you could conduct a realistic longitudinal research project that would give you accurate data. I suspect that if you could adjust realistically for prevalence, appropriate classification, risk and impact and so on, products would tend to do better, but proving it would be an interesting challenge.
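For instance, a metric adjusted for prevalence might weight each hit or miss by how often the sample is actually encountered in the field. A minimal sketch – the sample names and numbers are invented, and obtaining trustworthy prevalence estimates is precisely the hard part:

    def prevalence_weighted_detection(results, prevalence):
        """Weight each sample's hit/miss by its estimated share of real-world
        encounters, so missing a rare oddity costs less than missing a
        widespread family.

        results:    {sample_id: True if detected, False if missed}
        prevalence: {sample_id: estimated share of in-the-field encounters}
        """
        total = sum(prevalence.values())
        caught = sum(prevalence[s] for s, hit in results.items() if hit)
        return caught / total

    # Invented numbers: the product misses two rare samples but catches the
    # family that accounts for most of what users actually meet.
    results = {"widespread": True, "rare_a": False, "rare_b": False}
    prevalence = {"widespread": 0.90, "rare_a": 0.06, "rare_b": 0.04}
    print(prevalence_weighted_detection(results, prevalence))  # 0.9, vs. a raw 1/3

Even in a toy like this, the result turns entirely on the prevalence estimates, and collecting those accurately is exactly the headache I mean.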

By: Brian (Thu, 23 Aug 2012 13:43:32 +0000)
http://www.welivesecurity.com/2012/08/23/carbon-dating-and-malware-detection/#comment-1084

OK, so maybe VT results aren't an accurate way to measure detection. From my observation, though, it seems that AV vendors are still having a very difficult time detecting "new" samples they have not yet analyzed or developed a signature for. AV vendors also seem to have a very difficult time detecting exploit files.
I think this is troubling partly because of warnings and notices that come from AV vendors that might say something like "We detect threat-X in update yyzz" but what that really means is that they detect the samples that they have collected and for the most part, the malware just needs to be repacked or rebuilt to evade detection.
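To make that concrete, here is a toy illustration (the bytes are made up, and a real packer transforms far more than one byte of padding):

    import hashlib

    # Two hypothetical 'builds' of the same payload: a single padding byte
    # differs, as it easily might after repacking or rebuilding from source.
    build_a = b"MZ" + b"\x00" * 64 + b"payload-logic"
    build_b = b"MZ" + b"\x00" * 63 + b"\x01" + b"payload-logic"

    # A file-hash signature taken from build_a no longer matches build_b...
    print(hashlib.sha256(build_a).hexdigest() ==
          hashlib.sha256(build_b).hexdigest())       # False

    # ...even though the functional core is identical in both files.
    print(b"payload-logic" in build_a, b"payload-logic" in build_b)  # True True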
I'm intrigued by your last paragraph that mentions detection algorithms and that the term signature may be misleading.  I feel that in spite of the capabilities of modern products, they tend to fail more often than they succeed when it comes to new samples.
I understand that VT may not be the best tool to conduct research like this. How do you think ESET and competitors would have performed in a more representative test, though?

By: David Harley (Thu, 23 Aug 2012 11:51:56 +0000)
http://www.welivesecurity.com/2012/08/23/carbon-dating-and-malware-detection/#comment-1083

Thanks, Tommi. Indeed it wasn’t. Moving to a different machine has turned up a few interesting quirks. :-/

By: Tommi (Thu, 23 Aug 2012 11:43:36 +0000)
http://www.welivesecurity.com/2012/08/23/carbon-dating-and-malware-detection/#comment-1082

Hi David,
Very interesting article, but I'd assume the link to owa… is not intended in the first quote "AV results vary based on configuration", is it?
