So who’s to blame? First and foremost, the victimizers. But persistent victims share a portion of it too, as does anyone in the security industry who pushes the TOAST principle: the idea that all you have to do is buy Brand X and you never have to take responsibility for your own security. Though, of course, “who’s to blame?” is the wrong question: what matters is “how do we fix it?”
It's been a busy few weeks. Last week I was in Krems, Austria, for the EICAR conference. The week before, I was in Prague for the CARO workshop, where my colleagues Robert Lipovsky, Alexandr Matrosov and Dmitry Volkov gave a great presentation on "Cybercrime in Russia: Trends and issues" (more information on that shortly).
Well, the EICAR conference earlier this month was in Krems, Austria, where I hear they're not averse to the occasional brandy, but I was perfectly sober when I delivered my paper on "Security Software & Rogue Economics: New Technology or New Marketing?" (The full abstract is available at the same URL.) To conform with EICAR's
The March ThreatSense report at http://www.eset.com/us/resources/threat-trends/Global_Threat_Trends_March_2011.pdf includes, apart from the Top Ten threats: a feature article on Japanese-disaster-related scamming by Urban Schrott and myself; news of the Infosec Europe expo in London on the 19th-21st April; the AMTSO and CARO workshops in Prague in May; and the EICAR Conference in Austria that follows.
“Test Files and Product Evaluation: the Case for and against Malware Simulation” is a paper presented at the recent AVAR conference by Eddy Willems, Lysa Myers and myself. We were all at the EICAR conference, and figured it was a good moment to combine our experience of testing, EICAR, AMTSO and the anti-malware industry to cover the developments that had taken place since Sarah’s paper.
While I was at the EICAR conference earlier this week, I also co-presented (along with Pierre-Marc Bureau and Andrew Lee) a paper on “Security, Perception and Worms in the Apple”… so along with the new paper, I’ve made available again the paper on Macs and malware that I presented at Virus Bulletin in 1997.
The methodology and categories used in testing the performance impact of anti-malware products on a computer remain a contentious area. While there’s plenty of information on detection testing, some of it actually useful, there is very little on performance testing. Yet, while the issues are different, sound performance testing is at least as challenging, in its own way, as detection testing. Performance testing based on the assumption that ‘one size [or methodology] fits all’, or on an incomplete understanding of the technicalities of performance evaluation, can be as misleading as a badly implemented detection test.
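To illustrate one of the technicalities alluded to above, here is a minimal, purely hypothetical sketch (not any tester's actual methodology) of why a single-shot timing measurement can mislead compared with repeated runs that also report spread. The workload is a stand-in: hashing a block of data, loosely analogous to a scanner reading files.

```python
import hashlib
import statistics
import time

def naive_benchmark(task):
    """One-shot timing: the kind of measurement that can mislead,
    since a single run hides caching effects and system noise."""
    start = time.perf_counter()
    task()
    return time.perf_counter() - start

def repeated_benchmark(task, runs=20):
    """Repeated timing reporting median and spread, so run-to-run
    variance is visible instead of being mistaken for signal."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        task()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples), statistics.stdev(samples)

# Hypothetical stand-in workload, not a real scanner.
data = b"x" * 1_000_000
task = lambda: hashlib.sha256(data).digest()

once = naive_benchmark(task)
median, spread = repeated_benchmark(task)
```

If the spread is a large fraction of the median, a one-shot figure (or a comparison of two one-shot figures from different products) tells you very little, which is one small example of how an under-thought methodology misleads.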