Our friends at Heimdal Security sometimes put together what they call an 'expert roundup' blog article. It's an idea I like a lot: many security blogs are reluctant to give credit to competitors or include any links that will lead the reader away from their own site. I can see the PR logic behind that, but in my opinion it can actually diminish the credibility of the commentary, and in some cases results in the appropriation of ideas or research that originated elsewhere.

The Heimdal approach, however, shows a spirit of enquiry that goes beyond the quest for competitive advantage, suggesting a desire to get the best possible spread of answers to questions a blog reader might want to ask.

Of course, I would say that, since Heimdal has several times taken pity on a clapped-out old security maven and asked for my input for one of these roundups. The first time was an article put together by Aurelian Nagan on 50+ Internet Security Tips & Tricks from Top Experts. The second was put together by Andra Zaharia and addressed The Most Common Mistakes These 27 Cyber Security Experts Wish You’d Stop Doing. And recently, Andra invited me to respond to six questions concerning various aspects of patching, which I was, of course, pleased to do. But when my answers snowballed into a document that was starting to resemble War and Peace in length, if not in quality, they were clearly more suitable for an article in their own right, though hopefully Andra will find some suitable quotes for the Heimdal piece. As those expert roundup articles tend to attract answers from some very knowledgeable people, I recommend that you check it out. By the time this article appears, it will probably already be available here.

  1. As an expert in cybersecurity, how do you prioritize patching in a multi-layered approach to data safety?

Fortunately, my days in hands-on research and security administration and management are long behind me: as an author and editor, I need comparatively little hardware and only a few reasonably reliable applications. So I don't have to worry much any more about bugbears like keeping a zoo of malware samples or maintaining innumerable versions of various operating systems and applications, and the apps I use are nearly all mainstream apps that are professionally maintained at the vendor's end and allow me enough control at my end to ensure minimum disruption to my work.

However, the nature of my work does mean that I'm kept fairly up-to-date via mailing lists and various types of feed as regards patch/update/vulnerability issues that I might need to know about (as a security writer rather than as an end user). After all, I'm expected to write about vulnerabilities and exploits affecting systems and software of which I don't necessarily have current, direct hands-on experience. As a computer user, though, it helps that my home is largely an Internet of Things-free zone, I don't at present have to rely on networked medical devices, and my not-very-smartphone is smart enough to know I use a laptop for any network services that go beyond telephone calls and texts. So I don't have to worry about issues like smartphone OS updates that may or may not be distributed to the brand of hardware I happen to use.

  2. How would you explain the importance of patching so your grandma can understand it?

Sadly, I'm of an age where it's challenge enough to explain it to my 90-something mother. Fortunately, she's resolutely refusing to get involved with any computing apparatus more recent than a comptometer. (Of course, there are computer-associated issues that affect non-users of modern technology, but that's for another time.) Since my own grandchildren sometimes feel the need to explain to me what the internet is (not always accurately, but they don't have the advantage of pre-dating the internet …), here's an attempt to put the patching issue in terms that even I can understand. ;)

For most people, the most important things about a computer (whether it's a big corporate mainframe, a desktop PC or a laptop, or a mobile gadget such as a tablet or smartphone) are the data they create and/or store on it (documents, photographs, and so on) and the services they access from it (email, social media, internet banking …). However, none of these data and services are accessible without the software programs that translate what you want to do into instructions that the hardware can understand so that it responds appropriately. The word software covers a huge range of functionality: from the networking software that connects you to the rest of the online world, to the browsers that enable you to access a multitude of services on the Web, to the complex data processing carried out by a database or a word-processing program, to the bits and bytes that read the movements of your fingers to allow you to make a phone call or type a message.

Computers are more reliable than we sometimes give them credit for. Programs, however, are written by people, and people make mistakes. The more complex and multi-functional a program is, the more certain it is that it contains programming errors. Some of these errors are noticeable, some are inconvenient but not necessarily critical, some may be dramatic in their effects (but these tend to be fixed very quickly). Many, on the other hand, are trivial and may not even be noticed by most or any users of the software under normal circumstances, so could be said hardly to matter at all.

But some, while they may be as good as invisible as far as the average computer user is concerned, matter very much indeed. Not because of how they affect (or don't affect) the normal running of the system or application, but because, potentially, they expose the user and their equipment to the risk of criminal intrusion. There are many reasons why a criminal may want to gain access to your system and data. They may simply want to cause damage, but it's more likely that their motivation is financial, and there are many ways in which they can make money from illegally accessing systems. For instance:

  • Stealing sensitive data that will allow them to access your bank accounts and similar services.
  • Stealing information that will enable them (or at least help them) to steal your identity: identity theft can be monetized in all sorts of ways.
  • Planting software that will allow them to control your system in ways that allow them to use it as part of a botnet. A botnet is a virtual network of infected or compromised machines. The owners of those machines don't usually realize they have a problem, but the botnet can be used for criminal purposes such as massive mailouts of phishing scams. There's big money in owning or renting such networks.
  • Planting ransomware so that criminals can deny you access to your own files until you pay a ransom in return for the decryption key that will unlock them.

Of course, people who program legitimate applications don't (usually) deliberately introduce flaws into the code that will allow criminals to exploit them. But quite a small coding slip can introduce a 'hole' through which attackers can gain access to areas – especially areas of computer memory – that they can use to issue inappropriate instructions to the computer, instructions that may have nothing to do with the application that contains the hole. Unfortunately, there are lots of people looking specifically for this sort of opening, and they don't all wear white hats.
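For readers who'd like something more concrete: the textbook examples of such holes are memory-corruption bugs in languages like C, but the same kind of small slip turns up in any language. Here's a minimal sketch in Python – the function and scenario are invented for illustration – of a one-line mistake that lets an attacker's data be treated as instructions:

```python
import subprocess

# Hypothetical scenario: a web app lets a logged-in user check
# whether a host is reachable by pinging it.

def ping_host(host: str) -> str:
    # The slip: user input is pasted straight into a shell command.
    # A 'host' of "8.8.8.8; rm -rf ~" makes the shell run the
    # attacker's command as well as the ping.
    result = subprocess.run(f"ping -c 1 {host}", shell=True,
                            capture_output=True, text=True)
    return result.stdout

def ping_host_fixed(host: str) -> str:
    # The fix: no shell involved, and the host is passed as a plain
    # argument, so it can never be interpreted as a further command.
    result = subprocess.run(["ping", "-c", "1", host],
                            capture_output=True, text=True)
    return result.stdout
```

The difference between the broken and fixed versions is one line, which is exactly why this sort of hole slips past programmer and tester alike.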

There have always been geeks who were interested in testing the limits of software (and hardware, come to that). Some were driven by curiosity, some by the desire for the approval of their peers, some by a genuine desire to help companies improve their products – and not only in terms of security, of course, though there are quite a few people in the security industry who originally found their niche by informing security companies about issues with their products. And then there were those whose intentions were less laudable, whether it was sheer mischief and malice, or profit-driven crime.

Nowadays, there's big money in all this: it's not all about hobbyists any more. Not that making money out of finding vulnerabilities is necessarily a bad thing. There are people whose legitimate aim is to make money out of the bounties sometimes offered by major companies to people who find genuine flaws and vulnerabilities, or out of subscriptions to a paid bug-finding service. We've long known that some governments are also paying their own agencies and in some cases freelancers to find flaws and code exploits, and many would regard that as a legitimate activity in pursuit of national security. And there are those whose interests are purely criminal, and usually profit-driven.

You may wonder why it is that companies aren't finding these flaws before others do. Don't they have people checking these things? Well, of course they do. Any reputable software company invests heavily in testing and quality assurance. But writing an operating system like Windows or OS X may require tens of millions of lines of code. Even a complex major application like a modern database or a word processing program contains more lines of code than I want to think about, and there's much more to code review than just reading the source code. You have to try to predict how each bit of code will interact with other routines in the program, and also how it will interact with other programs that might be running on the same machine at the same time. It's not as though you're shooting at a fixed target, either. Changes in the code are intended to improve the program's performance or security, but sometimes an apparently minor change can have unanticipated consequences, especially if the change is in a completely different program, one whose source code you probably don't even have access to.

What is surprising is not that sometimes things go wrong, but that they don't go wrong more often. The positive thing to take away from this, though, is that in general, companies are very aware of these issues, do their best to track any known issues that might affect their products and produce fixes as quickly and safely as they can.

  3. A question on every user’s mind: why is software so vulnerable? And what can software users do about it?

Actually, to some extent we sometimes overestimate how vulnerable (some) software really is, or at least its impact on the real world. Major software/hardware/service providers have learned to over-engineer: those monthly packages of umpteen patches usually include some whose significance is largely theoretical and/or will only affect a few people. Furthermore, there's a tendency among proponents of other operating systems to overestimate the insecurity of a popular operating system and overestimate the security of their own chosen OS. While this is partly attributable to general fanboyism and halo effect – those who favor one OS in general terms tend to play down its weaknesses in specific areas – it's also encouraged by quirks of data interpretation.

For example, the security of an operating system is sometimes seen as directly proportional to the yearly total of patches it requires (the assumption being that the more often it's patched, the more vulnerable it must be). However, it can also be suggested that while all complex operating systems need patching as new bugs and exploits are discovered, frequent patching is actually a measure of due diligence rather than insecurity. In fact, the number of variables involved in patching methodology, effectiveness and deployment makes either viewpoint seem over-simplistic.

Indeed, the precise impact of patchable vulnerabilities is hard to determine. There are certainly high-profile cases of malware that exploits CVE-flagged vulnerabilities – Stuxnet is a particularly prominent example, though its use of multiple 0-days is actually unusual, and is usually seen as a byproduct of its specific targeting of high-value systems. (Value being assessed in terms of political and military importance rather than monetary value.) However, much and maybe most untargeted malware seems to rely primarily on social engineering, though known exploits may well be included almost incidentally, in the hope of finding victims using unpatched systems. Still, that doesn't mean anyone should regard targeted malware as 'not my problem'. Apart from the risk of collateral damage to systems that don't match the target profile, attackers often favor sneaking up on a high-value target using a lower-value target system as a conduit. All malware is targeted: it’s just the size of the (ultimately) targeted population that varies.

Having said all that, though, it's not the exact percentage of malware and other attacks that take advantage of software vulnerabilities that matters: it's the fact that you can reduce your own systems' exposure to attack by taking advantage of updates and patches, and why wouldn't you do that?

  4. What is your main, practical advice for users regarding patching?

Well, there are reasons for being cautious about patching. There is a reason that large organizations sometimes have phased update mechanisms that start with testing on machines that aren't being used for critical business processes: sometimes a patch goes wrong for some people out in the real world, and occasionally the results are catastrophic. Most home users don't have the resources or expertise to implement an effective formal change management process, but they can at least take precautions so that even a disaster that results in permanent damage to or loss of a system – or even more than one system – doesn't mean that they lose all access to the data that was stored on that system.

That doesn't just mean keeping data backed up – vital though that is* – but also ensuring that they'll be able to reinstall all the applications needed to access that data in whatever ways might be needed, if necessary on a completely new system. After all, while catastrophic and permanent data loss due to a misfiring patch is uncommon, taking those same backup precautions is a necessary defense against security problems that are far more common – encryption of data by ransomware, for example. Remember also that one backup is better than none, but more than one (preferably stored in more than one location) is much better than keeping all your eggs in a single basket.

*Aryeh Goretsky's article Options for backing up your computer offers some good advice on how to set about it.
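For the more hands-on reader, here's a minimal sketch of the 'more than one basket' principle. All the paths are placeholders, and a real backup regime would add versioning, verification, and at least one offline or offsite copy:

```python
import shutil
from datetime import datetime
from pathlib import Path

# Placeholder paths: substitute your own data directory and destinations,
# ideally on separate physical devices (and at least one offsite).
DATA_DIR = Path.home() / "Documents"
DESTINATIONS = [Path("/mnt/backup_drive"), Path("/mnt/nas/backups")]

def back_up() -> None:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    for dest in DESTINATIONS:
        target = dest / f"documents-{stamp}"
        # copytree copies the whole directory; the timestamped name means
        # an earlier, clean copy survives if ransomware strikes later.
        shutil.copytree(DATA_DIR, target)
        print(f"Copied {DATA_DIR} -> {target}")

if __name__ == "__main__":
    back_up()
```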

  5. How could users cultivate a healthy habit of keeping their software up to date? Would you recommend any particular tools?

I can't recommend any particular update tool(s): I haven't spent time recently evaluating them. Major software companies offer automatic updates or at least notification of new updates, and it's obviously a good idea to subscribe to any such service. Unfortunately, smaller organizations may present different challenges in that they may not offer automatic updates or even feel it necessary to advise their customers when something goes wrong, especially if such a warning might lead to bad publicity. Certainly it's unusual for a small software house to offer a mailing list carrying ongoing update/security information. You may find that you're likelier to get such information if you're on a list that offers more general product information. That may mean that you get more offers to buy new versions and products: only you can decide whether that's something you're OK with if it means you stay in the security loop. There are some more generic sources of information on known vulnerabilities, and these will sometimes catch information about flaws in quite obscure products.

The Common Vulnerabilities and Exposures (CVE) database is a dictionary of known vulnerabilities maintained at mitre.org. It's a good starting point for tracking such vulnerabilities (though it struggles nowadays to keep up with the sheer volume of submissions), but it doesn't offer a data feed/mailing list. The National Vulnerability Database (NVD) does offer a number of mailing lists, but they're probably of more interest to systems administrators, C-level security executives, and so on. The SANS Institute offers @RISK, a newsletter providing information on attack vectors, vulnerabilities and exploits, and so on, and a couple of other newsletters of more general interest. SecurityFocus has the longstanding and very reputable Bugtraq list, but also has some more specialized lists that may be useful to those who don't want to devote all their working hours to looking for vulnerabilities and updates. No, me neither…
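If you'd rather pull that sort of information programmatically than read it in a mailing list, something along these lines is possible. This sketch assumes the NVD's public REST API (version 2.0) and its keywordSearch parameter – check the current documentation before relying on it:

```python
import requests

# NVD's public CVE API (v2.0); no API key is needed for light use,
# though unauthenticated requests are rate-limited.
NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(product_keyword: str, limit: int = 5) -> None:
    resp = requests.get(NVD_URL, params={
        "keywordSearch": product_keyword,
        "resultsPerPage": limit,
    }, timeout=30)
    resp.raise_for_status()
    for item in resp.json().get("vulnerabilities", []):
        cve = item["cve"]
        # The first description entry is normally the English summary.
        summary = cve["descriptions"][0]["value"]
        print(f'{cve["id"]}: {summary[:100]}...')

recent_cves("openssl")
```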

  6. On a corporate/institutional scale, what could help more companies leverage the benefits of patching as a proactive security measure?

Patch/update management can be a serious drain on an organization's resources, though there are services to which the exercise can be outsourced. (I'm afraid I haven't evaluated any of them myself recently.) The problem is compounded when:

  1. The organization is served by many disparate systems without much standardization of hardware, operating systems, and applications.
  2. There is heavy use of BYOD (Bring Your Own Device) – even more so where the range and functionality of devices is not restricted and supervised (i.e. Choose Your Own Device from a range of approved devices and apps).
  3. Staff are not made aware of their responsibilities for security (including the maintenance and proper use of systems and services) under a formal and well-publicized policy as part of an ongoing educational initiative.

Attempting to mitigate these difficulties is likely to make it easier to apply an appropriate change management process. It helps to have an IT team (whether in-house or outsourced) that includes staff who are aware of the need to track patching issues and are resourced to take appropriate action when an issue arises that needs it, whether that's ensuring that a patch is distributed where it's needed or dealing (proactively where possible) with compatibility and other issues that may arise.
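To make 'tracking patching issues' slightly more concrete, here's a deliberately simplified sketch – every package name and version number below is invented – of the kind of inventory-versus-baseline check that sits at the heart of more capable patch-management tooling:

```python
# All package names and versions below are invented for illustration.
REQUIRED_MINIMUMS = {
    "office-suite": (5, 2, 1),
    "pdf-reader":   (11, 0, 3),
    "browser":      (102, 0, 0),
}

# In real tooling this inventory would be collected from the endpoints
# themselves; here it's hard-coded to keep the sketch self-contained.
installed = {
    "host-01": {"office-suite": (5, 2, 1), "pdf-reader": (10, 9, 0)},
    "host-02": {"office-suite": (5, 1, 0), "browser": (102, 0, 0)},
}

def audit() -> None:
    for host, packages in installed.items():
        for pkg, version in packages.items():
            minimum = REQUIRED_MINIMUMS.get(pkg)
            # Tuples compare element by element, so (5, 1, 0) < (5, 2, 1).
            if minimum and version < minimum:
                print(f"{host}: {pkg} {version} is below required {minimum}")

audit()
```

A real deployment would also flag hosts that are missing a required package outright, and feed the results into whatever change management process the organization runs.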