While you guys in the US were enjoying the swings and roundabouts of the presidential election, the government here in the UK was playing its usual role as fairground Aunt Sally to the UK media, on this occasion attracting criticism because of the ongoing leakage of sensitive information from government or government-related resources. The UK government briefly closed down its Gateway site, where people register tax information, after the loss of a memory stick containing the user names and passwords of eleven people. Not a big issue compared to the volumes of data apparently lost in other incidents, but embarrassing nonetheless.

So I was fascinated to see that Ziff-Davis have made available a video in which Atos Origin discuss "lessons learned from handling security at the Beijing Olympics". Let's hope they're also learning lessons about security from events in Cannock, Staffordshire, where one of their employees apparently lost the aforementioned memory stick in a car park. ;-)

As it happens, I was invited to comment on the Prime Minister's statement to a UK TV news service, to the effect that the government can't promise that all information will always be safe because it isn't possible to legislate for human error, and I was quoted at length here. That article is a fair representation of what I actually said, except that I was talking about government in general rather than this government in particular. Nevertheless, I thought it might be worth briefly revisiting the issue here. Having spent most of my working life (at any rate, the IT/security phase of it) in the public sector, I have some strongly held views on the topic.

Up to a point, I doubt if many IT professionals would disagree with Gordon Brown. There is no way of eliminating the risk of data loss completely, because systems, however good they are, are implemented, administered and used by human beings. There's a common aphorism to the effect that "to err is human, but to really screw up you need a computer", which is amusing but largely incorrect: computer systems are very good at doing exactly what they're asked to do. What you can't expect is for an automated or semi-automated system or process to compensate for an inadequate specification or implementation, or for inadequate training, education and enforcement of guidelines or policies. These are people problems, not technology issues, and you can't, to coin a phrase, fix social problems with technical solutions. (Which has a distinct bearing, by the way, on a paper on education that Randy and I are presenting at AVAR in December.) So there are always risks. (I don't take seriously the attempts by other political parties to gain political advantage from ongoing problems, because I don't think they'd do any better. I know, cynical of me...)

However, that doesn't mean there isn't a problem. Government doesn't exactly see itself as responsible for managing risk directly (when I worked for the NHS, it was described as being "risk averse"). That doesn't mean government agencies aren't aware of risk, though it sometimes appears that way. Much of the time, though, it's more characteristic of them to focus on the wrong risks than to ignore risk altogether!

When you've performed risk analysis and assessment, there are a number of approaches you can take to risk management, though in real life you'll usually end up with a hybrid rather than any single approach. (SC referred to these as "tips", but they're actually just standard methodology.) There's a small sketch of the cost/benefit arithmetic behind the first option after the list.

  • You can accept risks where mitigation costs are disproportionate to the anticipated benefits.
  • You can take measures to mitigate risks directly, for instance by installing, or requiring the installation of, specific countermeasures. You might specify levels of encryption, transport mechanisms and protocols, restricted use of portable devices, and so on.
  • You can prevent or avoid a risk by taking an approach that bypasses it. That's not very practical with human error, though.
  • You can eliminate it altogether by re-engineering your approach to the problem.
  • Or you can transfer it. This is, as I said to SC, the way government agencies often like to work: putting together a contract that specifies fairly high-level requirements, because such agencies (in the UK at any rate) tend to outsource as much as they can. The problem is that when you outsource a process, you don't necessarily outsource either the risk or the responsibility. As the outsourcing organization, you still need to understand both the technical and the risk management implications of the arrangement.
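As promised, here's the arithmetic behind that first option, accepting a risk when mitigation is disproportionate. The standard measure is annualized loss expectancy (ALE): the estimated cost of a single incident multiplied by how often you expect it per year, compared against the annual cost of the countermeasure. This is a minimal sketch in Python; the Risk class and every figure in it are hypothetical, invented purely for illustration, and a real assessment also has to weigh factors (reputational damage, legal exposure) that resist this sort of quantification.

    # A sketch of risk-treatment arithmetic. All names and figures here are
    # hypothetical, chosen only to illustrate the accept-vs-mitigate comparison.
    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        single_loss_expectancy: float     # estimated cost of one incident, in GBP
        annual_rate_of_occurrence: float  # expected incidents per year
        annual_mitigation_cost: float     # e.g. encrypted media plus staff training

        @property
        def annualized_loss_expectancy(self) -> float:
            # Classic ALE: cost of one incident times expected frequency.
            return self.single_loss_expectancy * self.annual_rate_of_occurrence

        def treatment(self) -> str:
            # Accept the risk when mitigating it costs more than the loss it
            # prevents; otherwise mitigate. (Transfer and avoidance would be
            # further branches in a fuller model.)
            if self.annual_mitigation_cost > self.annualized_loss_expectancy:
                return "accept"
            return "mitigate"

    # Illustrative figures for a lost-memory-stick scenario:
    usb = Risk("unencrypted credentials on portable media",
               single_loss_expectancy=50_000.0,
               annual_rate_of_occurrence=0.5,
               annual_mitigation_cost=5_000.0)
    print(f"{usb.name}: ALE £{usb.annualized_loss_expectancy:,.0f} -> {usb.treatment()}")

The point is the comparison, not the figures: with these invented numbers the expected annual loss (£25,000) dwarfs the mitigation cost, so "accept" would be hard to defend, which is rather the position the government now finds itself in.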

(Henk Diemer and I wrote a fairly lengthy chapter on the subject of outsourcing for an AVIEN book mentioned here before, by the way.)

One more point that's worth re-making: very specific guidelines and policies exist on the handling of certain categories of sensitive data and on storage and transfer procedures, but in many cases they require a high level of security clearance before they can be accessed... (I'll probably be getting a visit from the security services just for admitting to being aware of them :) ) If you, as a filing clerk or other drone, or as a more senior manager, aren't aware of those procedures, or don't have the necessary clearance, how much use are you going to be able to make of them? I wonder how many organizations across the world are so paranoid about their security processes that they hide them even from their employees, for fear of giving away too much information to the bad guys?

David Harley 
Director of Malware Intelligence