Of course, most vendors use in-house testing as a tool for monitoring and improving the capabilities of their own products. However, it is also increasingly used as a vehicle for showcasing those same products in the best possible light.
The methodology and categories used in performance testing of anti-malware products, that is, measuring their impact on the computer, remain a contentious area. While there is plenty of information on detection testing, some of it actually useful, there is very little on performance testing. Yet, while the issues are different, sound performance testing is at least as challenging, in its own way, as detection testing. Performance testing based on the assumption that 'one size [or methodology] fits all', or on an incomplete understanding of the technicalities of performance evaluation, can be as misleading as a badly implemented detection test.
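To make that point concrete: even a task as apparently simple as timing file copies on a machine with an on-access scanner installed is full of traps. Here is a minimal sketch of such a harness (in Python; the 'testset' folder of typical user files is our own assumption, not part of any published methodology), illustrating just one of them:

```python
import pathlib
import shutil
import statistics
import tempfile
import time

SRC = pathlib.Path("testset")  # hypothetical folder of typical user files


def timed_copy() -> float:
    """Copy the test set to a fresh directory and return elapsed seconds."""
    with tempfile.TemporaryDirectory() as tmp:
        start = time.perf_counter()
        shutil.copytree(SRC, pathlib.Path(tmp) / "copy")
        return time.perf_counter() - start


# A single run is misleading: disk caches warm up, and many on-access
# scanners remember files they have already judged clean, so later
# passes run faster. Report the distribution, not one number.
runs = [timed_copy() for _ in range(10)]
print(f"median {statistics.median(runs):.2f}s, "
      f"min {min(runs):.2f}s, max {max(runs):.2f}s")
```

Whether the first (cold-cache) run or the median better reflects what a user experiences is itself a methodological choice, which is exactly why a single headline number rarely tells the whole story.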
Some of us are busily preparing for the AMTSO workshop in Helsinki on the 24th and 25th May 2010, just before the CARO workshop on the 26th and 27th May (for which registration closes on 12th May). The EICAR conference in Paris, however, takes place before the Helsinki events, and includes some interesting testing-related material both before and during the main conference.
Larry Seltzer posted an interesting item yesterday. His article, "SW Tests Show Problems With AV Detections", is based on an "Analyst's Diary" entry called "On the way to better testing". Kaspersky did something rather interesting, though a little suspect: they created 20 perfectly innocent executable files, added fake detections for ten of them, and uploaded the whole set to VirusTotal to see whether other vendors would blindly copy those detections.
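Kaspersky haven't published the mechanics of the experiment, but generating the files themselves is trivial. A minimal sketch of one way to do it (the hello.exe stub and file names are our own assumptions): take a single known-clean stub and give each copy a distinct hash by appending a few unique overlay bytes, which most executable formats simply ignore.

```python
import hashlib
import pathlib
import shutil
import uuid

STUB = pathlib.Path("hello.exe")   # hypothetical known-clean stub binary
OUT = pathlib.Path("samples")
OUT.mkdir(exist_ok=True)

for i in range(20):
    dest = OUT / f"innocent_{i:02d}.exe"
    shutil.copyfile(STUB, dest)
    # Append a unique overlay so each copy has a distinct hash while
    # remaining functionally identical and completely harmless.
    with dest.open("ab") as f:
        f.write(uuid.uuid4().bytes)
    print(dest.name, hashlib.sha256(dest.read_bytes()).hexdigest())
```

The unique hashes matter: if the files were byte-identical, a single verdict would cover all twenty, and the experiment couldn't distinguish vendors who scanned the files from those who merely copied someone else's detection.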
We have just come across a Buyer's Guide published in the March 2010 issue of PC Pro Magazine, written by Darien Graham-Smith, PC Pro's Technical Editor. The author aims to advise consumers on which anti-malware product is best for them, and we acknowledge that the article includes some good thoughts and advice, but