[Update: my colleague Josep Albors has also commented on the Barnaby Jacks demonstration - thanks for the namecheck, Josep! - and Spanish speakers might well find his thoughts of interest.]
* “Is that a stethoscope in your pocket or are you just glad to see me?”
After the first twenty years or so of computer security, you get fairly accustomed to a certain amount of déjà vu. And since I spent most of my earlier years in IT on security in the context of medical informatics, the assertion that Computer Viruses Are “Rampant” on Medical Devices in Hospitals has a certain uncomfortable familiarity.
It was actually in the context of the UK’s National Health Service (and the cancer research organization where I worked for 11 years) that I first came across infectibility issues with research tools, diagnostic and clinical tools, and so on, but the issues that David Talbot discusses in MIT’s Technology Review article are pretty familiar. Where a device is sufficiently air-gapped – that is, isolated from dangerous vectors (the Internet, USB drives) – it probably doesn’t matter too much if the supplier makes a point (as is often the case) of discouraging or obstructing the use of security software and the patching of operating systems and other applications.
But that’s not the world we live in today: the interconnection of devices to allow the easy transfer or exchange of data is usually prioritized over security issues. After all, who expects a scintillation counter or a laboratory centrifuge to be virus-infected? No, I’m not talking about Stuxnet and those centrifuges here… On the other hand, there are clearly parallels between equipment found in many clinical/research environments and the systems found in some SCADA environments: in both cases, levels of security practice characteristic of a well-secured corporate environment may simply not be achievable because of the restrictions imposed by specialist equipment and services.
But there are even more esoteric infectible-device scenarios. Consider, for example, the claims by Barnaby Jack that pacemakers and ICDs (implantable cardioverter-defibrillators) could be misused to deliver an intentionally fatal electric shock rather than a therapeutic shock intended to regulate a misfiring heart.
Worrying, interesting, dramatic (not to say sensational in tone, with predictions of a possible ‘worm with the ability to commit mass murder’) but not exactly new. There was a high-profile story last year about a somewhat similar scenario. Jay Radcliffe, who is a Type 1 diabetic, demonstrated at Black Hat how he could control an insulin pump implanted in his own body. The pump uses wireless sensors to monitor blood sugar levels, and he worked out how it would be possible to manipulate the process so that the pump received instructions to inject too much or too little insulin into the patient’s bloodstream. (I blogged briefly about it at the time.) Barnaby Jack has also been quoted with reference to somewhat similar research.
The issues that arose from that research seem to apply here, too. Radcliffe was able to get information about the hardware he hacked because the manufacturers of wireless devices in the US file their designs with the Federal Communications Commission.
The transmissions weren’t encrypted, enabling him to write code that was capable of capturing enough data to affect the functioning of the pump. To do that, he needed to know the serial number of the device, but that was obtainable programmatically as well as by social engineering. Similar vulnerabilities seem to have been thrown up in the course of Jack’s research into pacemakers and similar devices, though he also suggests that such devices may have backdoor programming to enable access and modification without knowledge of the individual device’s unique serial number.
(If true, that would suggest a seriously flawed sense of priorities: the serial number may not be a foolproof/hackproof security precaution, but a backdoor that bypasses it altogether would indicate to me that the convenience of the practitioner or service provider is considered more important than the safety of the patient.)
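To see why a plaintext protocol that authenticates only by serial number offers so little protection, here is a deliberately simplified sketch. This is a hypothetical toy protocol, not the actual format used by any real pump or pacemaker: the point is only that once a single packet has been sniffed off the air, the “secret” serial number is exposed and arbitrary commands can be forged.

```python
# Toy illustration (hypothetical protocol, NOT any real device's) of why an
# unencrypted command packet authenticated only by a device serial number is
# forgeable: sniffing one legitimate packet reveals the serial in plaintext.

def parse_command(packet, device_serial):
    """Toy 'firmware': accept any plaintext packet bearing our serial number."""
    try:
        serial, command = packet.decode().split("|", 1)
    except ValueError:
        return None  # malformed packet
    # The serial number is the ONLY check - no encryption, no signature,
    # no replay protection.
    return command if serial == device_serial else None

# A legitimate packet, observed in plaintext over the wireless link...
sniffed = b"SN123456|DOSE:2"

# ...hands the attacker the serial, so a forged command is trivially accepted.
serial = sniffed.decode().split("|", 1)[0]
forged = ("%s|DOSE:25" % serial).encode()

print(parse_command(forged, "SN123456"))  # the forged command is accepted
```

A signed or encrypted channel would defeat this passive attack, which is why the lack of encryption in the demonstrated devices was the core of Radcliffe’s finding.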
Medtronic, the maker of the insulin pumps from Radcliffe’s demonstration, did say subsequently that its next-generation devices would address the issues raised during that demo and subsequent discussions, but obviously other manufacturers (including the unidentified manufacturer of Barnaby’s test device) may not be acting so responsibly, and in any case it’s unlikely that all older devices would be replaced with more secure models. The expense would be enormous.
That’s the bad news. But while none of the alarmist scenarios Barnaby cites are totally impossible, it seems to me that there are easier ways of committing mass murder than death by pacemaker hacking, and there are certainly easier ways of harvesting patient data than by hacking individual devices for the meagre Patient Identifiable Data (PID) that may be embedded there.
* Before you assume I’m indulging in heartless (pun intended) exploitation of the terminally ill, the photograph was actually taken from a 419 scam email I received some years ago – actually during my last spell working for the NHS. It was supposed to represent a wealthy oil merchant with a nasty case of oesophageal cancer wanting my help in distributing his wealth to charitable organizations.
David Harley CITP FBCS CISSP
ESET Senior Research Fellow
Author David Harley, We Live Security