It's important to know not only the worst password choices, but also the worst choices for numeric passcodes.
A new conference paper discusses whether AMTSO has the credibility to achieve its aims of raising testing standards on its own.
'Tis the season to get ready for the autumn round of security conferences.
... I haven't recently posted any pointers to our content on SC Magazine's Cybercrime Corner, so now might be a good time to recap what Randy and I have been posting there this month (so far...) ...
It's been a busy few weeks. Last week I was in Krems, Austria for the EICAR conference. The week before, I was in Prague for the CARO workshop (where my colleagues Robert Lipovsky, Alexandr Matrosov and Dmitry Volkov did a great presentation on "Cybercrime in Russia: Trends and issues" – more information on that shortly),
Well, the EICAR conference earlier this month was in Krems, in Austria, where I hear that they're not averse to the occasional brandy, but I was actually perfectly sober when I delivered my paper on Security Software & Rogue Economics: New Technology or New Marketing? (The full abstract is available at the same URL.) To conform with EICAR's
...I would suggest that you take any statement like "Grottyscan AntiVirus is best because it detects 200 million viruses" with a pinch of salt. Actually, a whole salt mine...
The March ThreatSense report at http://www.eset.com/us/resources/threat-trends/Global_Threat_Trends_March_2011.pdf includes, apart from the Top Ten threats: a feature article on Japanese-disaster-related scamming by Urban Schrott and myself; and news of the Infosec Europe expo in London on the 19th-21st April, the AMTSO and CARO workshops in Prague in May, and the EICAR Conference in Austria that follows the story of
Summary of and link to an AVAR paper addressing some of the pitfalls of using malware simulation in product testing.
Here are a few papers and articles that have become available in the last week or two.
The methodology and categories used in testing the performance impact of anti-malware products on the computer remain a contentious area. While there’s plenty of information, some of it actually useful, on detection testing, there is very little on performance testing. Yet, while the issues are different, sound performance testing is at least as challenging, in its own way, as detection testing. Performance testing based on the assumption that ‘one size [or methodology] fits all’, or that reflects an incomplete understanding of the technicalities of performance evaluation, can be as misleading as a badly-implemented detection test.
Some of us are currently busily preparing for the AMTSO workshop in Helsinki on the 24th and 25th May 2010, just before the CARO workshop on 26th and 27th May (for which registration closes on 12th May). Before the Helsinki events, though, the EICAR conference in Paris includes some interesting testing-related material before and during the main conference.
After my last blog, I was asked what other EICAR papers would be of interest to people in the testing industry. In fact, quite a few of this year’s papers focused on anti-malware testing and/or detection, and the abstracts for the industry papers are available here, and that may give you a start on
Yes, I’ve used that pun before, but I can’t resist using it again now that I’m back from the EICAR conference. I actually got back a couple of days ago, but I was sidetracked by some urgent administrivia and dental treatment. I’m having bacon and eggs for breakfast, my first pet’s name was Stuart Little
So the CARO workshop came and went (and very good it was too): unfortunately, because of the nature of the event, I can’t tell you too much about it. However, at least some of the presentations are expected to be made available soon, and we’ll pass on that information when we have it. After a