Jan Vrabec, my colleague in the Bratislava office, has some thoughts to get off his chest about AV vendors and the misleading claims they base on internal test results, so I'm letting him borrow my soapbox. All yours, Jan…
Lately, we have witnessed a new trend pushed by the marketing departments of several antivirus vendors: in-house product testing. Of course, most vendors use in-house testing as a tool for monitoring and improving the capabilities of their own products. However, it’s also being used increasingly as a vehicle for showcasing a company’s own AV products in the best possible light. Such tests are associated with the usage and promotion of marketing buzzwords, such as "Maximal Protection", "Best Protection", and other similar claims. What’s more, the self-proclaimed qualities of the products in question are often driven "home" to the user via various graphics or even live video. But let’s look closer at this phenomenon, and consider the ways in which the information that the vendors don’t provide about this practice can affect the potential customer.
First of all, you may wonder why some vendors are using in-house testing exclusively. Does this practice stem from worrying about how their product will perform in tests run by independent testing organizations? Perhaps this is a calculated strategy based on the premise that independent testing would not confirm their over-hyped claims. Another possible reason would be concern about the validity of the methodology applied, or even a bias in their testing process. Or perhaps they are worried that as newcomers they simply cannot measure up to the long (successful) track record achieved by other vendors.
Nevertheless, and whatever their reasons for staying out of the external testing arena may be, in-house testing seems to have gained a foothold as the easy way to produce instant buzz marketing. The suspicion of "home turf" bias is introduced whenever the "tester" is also "the tested." With this practice becoming rampant, it’s not surprising to find that their own product is positioned at the top, or at least highly placed among the "big AV names." If we compare the way in which in-house testing is practiced to competent independent testing, several major issues become apparent.
One discomfiting issue is the way in which a failure to disclose methodology has become almost synonymous with in-house testing, at any rate when such testing is used for advertising purposes. This practice makes it almost impossible for the reader to extract any kind of meaningful, objective conclusions from a comparison of products from different vendors.
When confronted by obvious anti-malware marketing claims, it’s a good idea to look first for some meaningful validation of those claims. This can be done by looking for information on how the test was carried out, as well as by whom. Moreover, once you have some idea of the methodology (often, methodological information is simply omitted entirely), check that it appears to conform to industry-agreed standards, such as those outlined by AMTSO (the Anti-Malware Testing Standards Organization). Withholding information about the test source, or using methodology that doesn’t correlate to industry standards, are practices that speak volumes about the product that is being promoted, and a potential customer should consider them as red flags.
Seemingly reliable anti-malware testing results often turn out to be invalid because the samples used in the tests were misclassified. For example, a high volume of what seem to be false positives might be the result of misidentifying malicious samples as clean (or, as AMTSO puts it, “innocent”). Conversely, innocent files misclassified as malicious because of inadequate validation (or no validation at all) can dramatically skew results. An equally common but less obvious problem occurs when a “grey” sample, such as a “Possibly Unwanted” application, is classified as malicious: a product that doesn’t detect such applications by default may be marked down for non-detection of a sample that it is, in fact, perfectly capable of detecting.
Similar problems are caused by corrupted and/or unviable samples, or objects that are only unequivocally malicious in very specific contexts, especially contexts which are at best unlikely to be seen in real life. Thus, reasonable care must be taken to categorize test samples or test cases properly, and AMTSO is keen to encourage testers to revalidate test samples or test cases that appear to have caused false negative or false positive results. (See principle 5 at: http://www.amtso.org/amtso--download--amtso-fundamental-principles-of-testing.html.)
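The effect of mislabeled ground truth can be made concrete with a toy sketch (a hypothetical ten-sample set invented for illustration, not any real test corpus or methodology): if a tester labels genuinely malicious files as clean, a product that correctly detects them gets charged with false positives it never committed.

```python
def score(detections, tester_labels):
    """Count apparent false positives/negatives against the tester's labels."""
    fp = sum(1 for s in detections if detections[s] and tester_labels[s] == "clean")
    fn = sum(1 for s in detections if not detections[s] and tester_labels[s] == "malicious")
    return fp, fn

# True nature of ten hypothetical samples (known only in hindsight).
truth = {f"s{i}": ("malicious" if i < 6 else "clean") for i in range(10)}

# The tester mislabels two malicious samples (s4, s5) as clean.
tester_labels = dict(truth)
tester_labels["s4"] = tester_labels["s5"] = "clean"

# A product that detects exactly the genuinely malicious samples.
detections = {s: (truth[s] == "malicious") for s in truth}

print(score(detections, truth))          # (0, 0): the product is actually perfect
print(score(detections, tester_labels))  # (2, 0): two phantom "false positives"
```

In other words, the product's apparent error rate here is entirely an artifact of the tester's labels, which is exactly why revalidating samples behind unexpected false positive or false negative results matters.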
Proper validation of malware samples in testing is the first step towards an authoritative comparison of the effectiveness of anti-malware solutions.
Marketing materials making claims based on in-house testing, or grandiose claims about product capability, deserve to be approached with a healthy dose of customer scepticism. When choosing a security product or suite, a customer should check the sources used to support such claims, such as independent test results and reviews. Industry-recognized independent testing organizations to look out for include Virus Bulletin, AV-Comparatives, and AV-TEST.
Yet another important consideration when deciding on your next anti-malware product is the security vendor’s commitment to protecting your personal information. It is important to realize that the vendor you choose will, to some extent, be responsible for protecting all your computer-stored assets, including your virtual identity. In other words, the choice you make will have far-reaching consequences for the protection of your privacy. This is one compelling reason to be as well-informed as possible when deciding on your next anti-malware product or security suite. Uncritical acceptance of marketing messages can expose you to unnecessary risk.
The vendor’s history and track record is, perhaps, just as important as the performance of its product range. In the past few years we have seen a dramatic rise in the number of new entrants into the AV market. This in itself does not pose a problem: in fact, competition helps to drive the industry forward. However, too many newcomers have boldly claimed to offer “the best” antivirus. It’s one thing to make an attention-grabbing marketing claim, but it’s a very different thing to actually stand behind that claim and submit the product for independent testing, going head-to-head with the competition.
When you choose your next antimalware product, you may find it helpful to think about all the development and maintenance overheads of producing an application as complex and sophisticated as an antivirus program. In general, it is good practice to scrutinize a vendor’s history closely, not to mention the track record of its products. The number of awards received from independent testing organizations over time is a good indicator of sound technology and long-term commitment to keeping consumers secure.
I hope that you will find this advice helpful when evaluating anti-malware solutions.
Jan Vrabec, ESET Security Technology Analyst

Posted by David Harley, We Live Security