It's been a busy few weeks. Last week I was in Krems, Austria, for the EICAR conference. The week before, I was in Prague for the CARO workshop, where my colleagues Robert Lipovsky, Alexandr Matrosov and Dmitry Volkov gave a great presentation on "Cybercrime in Russia: Trends and issues" (more information on that shortly).
Well, the EICAR conference earlier this month was in Krems, in Austria, where I hear that they're not averse to the occasional brandy, but I was actually perfectly sober when I delivered my paper on Security Software & Rogue Economics: New Technology or New Marketing? (The full abstract is available at the same URL.) To conform with EICAR's
The March ThreatSense report at http://www.eset.com/us/resources/threat-trends/Global_Threat_Trends_March_2011.pdf includes, apart from the Top Ten threats: a feature article on Japanese-disaster-related scamming by Urban Schrott and myself; news of the Infosec Europe expo in London on the 19th-21st April; the AMTSO and CARO workshops in Prague in May; and the EICAR Conference in Austria that follows the story of
“Test Files and Product Evaluation: the Case for and against Malware Simulation” is a paper presented at the recent AVAR conference by Eddy Willems, Lysa Myers and myself. We were all at the EICAR conference and figured it was a good moment to combine our experience of testing, EICAR, AMTSO and the anti-malware industry to cover the developments that had taken place since Sarah’s paper.
The methodology and categories used in testing the performance impact of anti-malware products on a computer remain a contentious area. While there’s plenty of information, some of it actually useful, on detection testing, there is very little on performance testing. Yet, while the issues are different, sound performance testing is, in its own way, at least as challenging as detection testing. Performance testing based on the assumption that ‘one size [or methodology] fits all’, or on an incomplete understanding of the technicalities of performance evaluation, can be as misleading as a badly implemented detection test.
Some of us are currently busily preparing for the AMTSO workshop in Helsinki on the 24th and 25th May 2010, just before the CARO workshop on 26th and 27th May (for which registration closes on 12th May). Before the Helsinki events, though, the EICAR conference in Paris includes some interesting testing-related material before and during the main conference.
After my last blog, I was asked what other EICAR papers would be of interest to people in the testing industry. In fact, quite a few of this year’s papers were focused on anti-malware testing and/or detection, and the abstracts for the industry papers are available here, and that may give you a start on
Yes, I’ve used that pun before, but I can’t resist using it again now that I’m back from the EICAR conference. I actually got back a couple of days ago, but I was sidetracked by some urgent administrivia and dental treatment. I’m having bacon and eggs for breakfast; my first pet’s name was Stuart Little
So the CARO workshop came and went (and very good it was too): unfortunately, because of the nature of the event, I can’t tell you too much about it. However, at least some of the presentations are expected to be made available soon, and we’ll pass on that information when we have it. After a