A new paper aims to profile the victims most likely to fall for a phishing attack. But what is less clear is how you develop a profile while avoiding the pitfalls of stereotyping.
User-profiling is an interesting approach to countering phishing. In fact, the idea that user training might be implemented via tailored software somewhat resembles an approach to anti-malware that Jeff Debrosse and I discussed at Virus Bulletin a few years ago in Malice Through the Looking Glass: Behaviour Analysis for the Next Decade.
When we talk about behaviour analysis in this sector of the industry, we’re usually referring to examination of the way that a program behaves in order to assess how likely it is to be malicious. The idea we put forward was that a supplementary approach would be to analyse the behaviour of the PC user, using that analysis to flag risky behaviour and attempt some sort of remediation. We didn’t consider implementation details – Virus Bulletin doesn’t like you to go over 6,000 words! – but one approach in a corporate product would be to alert not only the user but also the system administrator, who might, for instance, recommend training. In a training tool, risky behaviour might be addressed by switching the subject to a different, more intensive module. I’d think that would be compatible with the future research envisaged by the authors of the paper.
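To make the escalation idea concrete, here is a minimal sketch of how such a monitoring layer might be structured. Everything here (the `UserRiskProfile` class, the `remediate` function, the threshold of three events) is a hypothetical illustration of mine, not anything specified in the paper or in our Virus Bulletin presentation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: flag risky user behaviour, warn the user first,
# and escalate to the administrator once a threshold is reached.

@dataclass
class UserRiskProfile:
    user: str
    risky_events: list = field(default_factory=list)

    def record(self, event: str) -> None:
        """Log one observed risky action, e.g. clicking an unverified link."""
        self.risky_events.append(event)

    @property
    def score(self) -> int:
        # A naive score: one point per event. A real product would weight
        # events by severity and decay them over time.
        return len(self.risky_events)

def remediate(profile: UserRiskProfile, admin_threshold: int = 3) -> str:
    """Escalating remediation: user warning first, then admin notification."""
    if profile.score >= admin_threshold:
        return f"notify admin: recommend training for {profile.user}"
    return f"warn {profile.user}: risky behaviour detected"
```

In a training tool, the same threshold check could instead trigger a switch to a more intensive module rather than an administrator alert.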
In fact, there’s a great deal of academic literature out there on susceptibility to phishing. What is less clear to me is how you develop a profile while avoiding the pitfalls of stereotyping through over-simplification of social representation. The authors of the upcoming paper “Keeping Up With the Joneses: Assessing Phishing Susceptibility in an E-mail Task,” by Kyung Wha Hong of North Carolina State University, to be presented at next month’s 2013 International Human Factors and Ergonomics Society Annual Meeting, seem to have a profile in mind already: while it’s unsurprising that dispositional trust affects susceptibility to phishing, the study also suggests that gender, introversion, and openness to new experiences are factors. However, it’s not always clear which way those factors work, or how representative the population of participants (53 American undergraduates aged between 18 and 27) is of the population as a whole.
Informally, it wouldn’t surprise me if this particular stratification had a bearing on how confident the group members were of their own ability to recognize phishing mails. In the larger population, I suspect that the general level of self-confidence is appreciably lower, and while gender differences may be important, they could have a very different impact according to how susceptibility is measured. I don’t know whether men are generally better than women at recognizing phish. Anecdotally, though, my (looooonnnnnggggg) experience of user support and of talking to students in Europe and the UK – see Teach Your Children Well – ICT Security and the Younger Generation – suggests that females may be less susceptible to phishing attacks because they’re less likely to rely on their own self-perceived expertise and more likely to ask for advice from someone perceived to be more knowledgeable.
Similarly, a recent poll commissioned on behalf of ESET Ireland suggested that women in Ireland are more cautious about security than their male counterparts (age also seems to play a large part). Of course, self-assessment of security awareness and capability can, as this study suggests, be very different from the picture painted by objective testing. That depends, in turn, on how good the testing is.
Several years ago, Andrew Lee and I presented a paper (also for Virus Bulletin), Phish Phodder: Is User Education Helping or Hindering?, in which we looked in some detail at phishing quizzes as an educational tool. Online quizzes of the sort we examined are by no means identical to the task methodology described in this paper, and the lack of methodological detail makes it impractical to tell whether the same issues apply, but it’s not impossible. One of the problems we found with quizzes was that even security professionals were not always able to classify a message correctly, because quizzes normally use screenshots rather than actual emails. Thus, the information we’d use to investigate further if we had access to the email itself wasn’t available. In that situation, a security professional will normally play safe and assume malice.
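As an illustration of the kind of information a screenshot withholds, here is a small sketch using Python’s standard email library to look for header-level warning signs. The specific checks (a Reply-To domain that differs from the From domain, an absent Received chain) are my own illustrative choices, not checks drawn from the quiz paper or from our own.

```python
from email import message_from_string
from email.utils import parseaddr

def header_red_flags(raw_message: str) -> list:
    """Collect simple header-level warning signs that a screenshot hides."""
    msg = message_from_string(raw_message)
    flags = []

    # Compare the From and Reply-To domains: a mismatch is a common
    # (though not conclusive) phishing indicator.
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower()
    if reply_addr:
        reply_domain = reply_addr.rsplit("@", 1)[-1].lower()
        if reply_domain != from_domain:
            flags.append("Reply-To domain differs from From domain")

    # Without Received headers there is no delivery path to investigate.
    if not msg.get("Received"):
        flags.append("no Received headers to trace the delivery path")
    return flags
```

A screenshot of the rendered message shows none of this, which is exactly why a professional asked to classify one will tend to play safe and assume malice.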
The wording of the Raleigh study gives the impression that an actual Gmail account, rather than screenshots, was used for the task that generated the objective data. However, this worries me a little: Gmail’s filtering is rather good. (If you don’t believe me but have access to a Gmail account, take a look at the spam folder.) I’m not sure how the researchers managed to use a Gmail account for the task without tweaking it in ways that might have compromised the results.
It’s a pity I won’t be there to ask questions when the paper is presented. Nonetheless, it’s a very interesting paper that suggests valid lines of future research. This is certainly one of those areas where closer collaboration between academia and the security industry might generate even more interesting data.