The methodology and categories used in testing the performance impact of anti-malware products on a computer remain a contentious area. While there’s plenty of information, some of it actually useful, on detection testing, there is very little on performance testing. Yet, while the issues are different, sound performance testing is at least as challenging, in its own way, as detection testing. Performance testing based on the assumption that ‘one size [or methodology] fits all’, or that reflects an incomplete understanding of the technicalities of performance evaluation, can be as misleading as a badly implemented detection test.
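To make the point concrete: even the simplest performance metric, such as timing a file copy while a product’s on-access scanner is active, is hostage to disk caching and background activity. The sketch below is purely illustrative, not any testing body’s methodology; the paths, the trial count, and the choice of a file copy as the workload are all assumptions. It shows the minimum rigour a single number demands: repeated trials and a spread estimate rather than a one-shot timing.

```python
import statistics
import subprocess
import time

# Hypothetical workload: copying a directory tree, an operation that
# on-access scanners typically intercept. Both paths are placeholders.
SRC = "/tmp/perf_corpus"
DST = "/tmp/perf_copy"
TRIALS = 10  # arbitrary; a real test would justify its sample size

def run_once() -> float:
    """Time one copy of the corpus, in seconds."""
    subprocess.run(["rm", "-rf", DST], check=True)
    start = time.perf_counter()
    subprocess.run(["cp", "-r", SRC, DST], check=True)
    return time.perf_counter() - start

# A single measurement tells you almost nothing: the first run pays
# cold-cache costs that later runs don't, and any background task can
# skew one sample. Report a central value and the spread, at minimum.
samples = [run_once() for _ in range(TRIALS)]
print(f"median {statistics.median(samples):.3f}s, "
      f"stdev {statistics.stdev(samples):.3f}s over {TRIALS} trials")
```

Even this toy harness exposes the ‘one size fits all’ problem: change the corpus, the filesystem, or whether caches are warmed, and the ranking of products can change with it.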
Some of us are busily preparing for the AMTSO workshop in Helsinki on the 24th and 25th May 2010, just before the CARO workshop on the 26th and 27th May (for which registration closes on 12th May). Before the Helsinki events, though, the EICAR conference in Paris offers some interesting testing-related material, both in the pre-conference programme and during the main conference.
After my last blog, I was asked what other EICAR papers would be of interest to people in the testing industry. In fact, quite a few of this year’s papers are focused on anti-malware testing and/or detection, and the abstracts for the industry papers are available here, which may give you a start on…
Yes, I’ve used that pun before, but I can’t resist using it again now that I’m back from the EICAR conference. I actually got back a couple of days ago, but I was sidetracked by some urgent administrivia and dental treatment. I’m having bacon and eggs for breakfast; my first pet’s name was Stuart Little…
So the CARO workshop came and went (and very good it was too); unfortunately, because of the nature of the event, I can’t tell you too much about it. However, at least some of the presentations are expected to be made available soon, and we’ll pass on that information when we have it. After a…