Kaspersky, Virus Total, and Unacceptable Shortcuts

Larry Seltzer posted an interesting item yesterday. The article, "SW Tests Show Problems With AV Detections", is based on an "Analyst's Diary" entry called "On the way to better testing."

Kaspersky did something rather interesting, though a little suspect. They created 20 perfectly innocent executable files, then created fake detections for ten of them. Then they uploaded (actually, the blog says re-uploaded) the files to Virus Total. They claim that after ten days "up to" 14 other vendors also had detection for the files for which Kaspersky had a fake detection.

(No, ESET wasn't one of them! We don't have a marketing axe to grind here.)

In fact, several vendors only detected one or two of those files, not the whole set. Furthermore, in at least one case all the files were detected as "suspicious" rather than malicious, and only by that company's online scanner: in that context, it's possibly defensible, certainly as a short-term flag.

So is there really a problem here? When Virus Total sends a sample of what may be a missed detection to the company whose scanner missed it, they probably don't expect the company simply to add detection without validating the sample: otherwise, a false positive could cascade through the industry in a very short time. [Update: there's an interesting related blog from Hispasec, who provide the Virus Total service, at http://translate.google.com/translate?u=http%3A%2F%2Fwww.hispasec.com%2Funaaldia%2F4013&sl=es&tl=en&hl=&ie=UTF-8 - I'm afraid it's an automatic translation, but you get the idea... (Thanks, Pedro, for calling my attention to it.)]

But as Larry suggests, there are other factors at play here, rather than a simple mindset of "if Kaspersky detects it, it must be malicious". With all due respect to the company, no one in this business has escaped problems with false positives, as John Leyden pointed out today in an article in the Register.

In the Kaspersky blog, Magnus suggests that this is really a problem with testing. Well, he has a point. Where tests are carried out with huge sample sets and minimal time and financial resources, it's inevitable that some testers will rely on "source reputation and multi-scanning" rather than manual validation. In fact, Ján Vrabec recently cited problems with a test where a much smaller test set was "validated" according to whether four or more scanners flagged each sample as malicious.
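
To see why that kind of shortcut is fragile, here is a minimal sketch (in Python, and purely hypothetical: the "four or more scanners" threshold comes from the test Ján described, but the scanner names and the consensus_validate helper are our own invention, not anything from the test or from Virus Total). Once a handful of vendors have copied an unvalidated detection, a perfectly clean file passes this kind of "validation" with flying colours:

    # Hypothetical sketch of "validation" by multi-scanner consensus.
    # Scanner names and verdicts are invented for illustration only.
    MIN_DETECTIONS = 4  # the "four or more scanners" threshold cited above

    def consensus_validate(verdicts):
        """Accept a sample as 'malicious' if enough scanners flag it; no manual analysis involved."""
        return sum(verdicts.values()) >= MIN_DETECTIONS

    # A clean file: one vendor adds a fake detection, a few others copy it
    # on "source reputation" alone, and one vendor actually analyses the file.
    clean_file_verdicts = {
        "VendorA": True,   # the original (fake) detection
        "VendorB": True,   # copied
        "VendorC": True,   # copied
        "VendorD": True,   # copied
        "VendorE": False,  # real analysis: the file is clean
    }

    print(consensus_validate(clean_file_verdicts))  # True: a clean file "validates" as malware

That, in miniature, is the cascade described above.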

However, AV companies (and the major test organizations) don't, in general, rely on these approaches. Obviously, a significant proportion of analysis for a virus lab has to be manual, though where tens of thousands of samples are received daily, there has to be as much automation as possible. So it may be that there are many factors at work here, such as:

  • Timing issues – a sample may be wrongly detected at one point because of "reputation sourcing" and detection removed after manual analysis. A Virus Total analysis report is a snapshot of one moment in time: it shouldn't be seen as a blot on a company's permanent record.
  • The same problems frequently pointed out about the inappropriate use of Virus Total in other contexts also apply here. VT uses a battery of command line scanners, and different products will behave differently in different contexts. Taken together with the timing issue mentioned above, this means the numbers cited here don't seem to mean much.
  • As Larry suggests, it's not unknown for an aggressive heuristic to generate a false positive. And as a spokesman for another product suggested, products with a wide range of functionality may react differently according to what backend, gateway or desktop settings are enabled.

In fact, the problem Kaspersky has flagged may have longer-lasting effects than the company has acknowledged. Larry Seltzer's article seems to say that subsequently, a "hello world" program generated with the same compiler and compiler settings was also flagged by at least two programs. Could this be a compiler-specific heuristic implemented as a result of Kaspersky's artificial false positives? If so, Kaspersky also bear some of the blame.

The sad thing is that genuine problems may take second place here to the easy story of "who copies from whom." By going straight to the press and presenting journalists with the innocent executables, encouraging them to use Virus Total (inappropriately, in our view) to check their conclusions, Kaspersky has made inevitable the very risk they flagged. Reproducible experimentation is a worthy goal, as long as the experimental methodology is sound, but does that really apply here?

But the real problem here is that Magnus is perpetuating a fallacy that already worries many people in the industry. He suggests that the problem here is with static testing, and that the move to dynamic testing is going to fix things. However, that's neither the problem nor the cure in this case. The problem is non-validation, and the cure is validation. Right now, lots of tests claim to be dynamic, because that's what AMTSO advocates. Quite rightly, in principle: good dynamic testing has the potential to be a more accurate test of a product's efficacy than static testing. But good static testing remains a better approach than bad dynamic testing, and few testers are doing dynamic testing well at the moment. A significant part of that problem is, unfortunately, still validation.

The Research Team
ESET LLC

Author David Harley, ESET

  • Gyllen

    FAIL.
     
    ESET rulz for not falling for that.

  • Rico Rico

    Chamulleros….

    • Randy Abrams

      I’m not sure if Rico is calling us liars or Kaspersky, but he is welcome to his opinion :)

  • http://www.rtm.com.co Juan Carlos Calderon

    Big surprise; if you copy, you don't learn… :) Congrats, ESET…

  • http://www.trucoswindows.net MinDokan

    PWNED!!!!!!!!1111111111111111111111111111 eleven!
    Rico is calling either ESET or Kaspersky "chamuyeros", but I don't know why. Who is a chamuyero? (Chamuyero = liar)

  • ZM

    What version of NOD32 is used in Virustotal?

  • Hidden Peak

    >In fact, several vendors only detected one or two of those files
    In fact, several vendors detected all 10 fake samples, guys. And it's not funny.

    • http://www.smallblue-greenworld.co.uk David Harley

      Yes, they did. And it’s not funny. But let’s not lose sight of the real issue. Actually, maybe the best summary was made by Vesselin Bontchev, commenting on the Register’s article:

      “If we implement detection of some sample based solely on the fact that XYZ’s scanner detects it, we’re being accused of not doing proper analysis and copying other company’s detection. If we don’t detect the sample because our analysis has shown it is obviously not malicious, it gets into the testers’ test sets and our detection rate in the tests is lowered. When we protest, we’re being told that ‘but half a gazillion other products already detect it’. Welcome to the world of anti-virus research, where your only choices are bad ones and worse ones.”

  • Hidden Peak

    Bontchev's point of view is understandable. Tests => PR => user attention => sales => profit (which is the main reason all security firms exist).
    But let's not forget the primary goal of any AV product – protection of the user's computer. And that has nothing to do with magazine tests. Any clean object detected as malware = false positive. Messages about such pseudo-malware scare ordinary users (those who are not experts or even advanced) as much as the detection of a real virus.
    Let's look at the other side of this story. Now we know the approach of some AV vendors. Got a sample => look at Virus Total => positive result from 'big vendors'? => add to databases without your own analysis. Nice, isn't it? In the case of a real virus it's nothing but theft of intellectual property: the 'big vendor' put its effort, time and money into detecting that malware, and some others just used the result in the fastest and easiest way. But in the case of 'fake malware' those vendors looked like clowns. And isn't that a reason for their users to ask themselves: "Do I use an antivirus or a pseudo-antivirus?"

  • Optimus Prime

    FAIL
     
