There are – and will always be – vulnerabilities in software. Just as there is no perfect security, there is no perfect codebase. That raises the question: what is the best way to fix software problems, especially at scale? As is so often the case with security questions, the answer is “That depends.”

Who let the bugs out?

Open-source software allows anyone – for better or worse – to take a look under the hood and, hopefully, fix security or functionality issues. But those same contributors could also introduce backdoors that might go unnoticed, sometimes for years, according to a 2022 study published at the 31st USENIX Security Symposium.

Closed-source software, on the other hand, relies on the secrecy of its source code and the expertise of its own developers – a kind of internal secret sauce, hopefully maintained by experts with solid reputations for security whose craft is at least good enough to retain customers and stay in business. Regardless of whether or not they make their source code available, developers can benefit from documents such as the OWASP Top Ten and the SEI CERT Coding Standards, which promote secure coding practices.

While open-source software has roots dating back to the 1950s, it wasn’t until the early 1980s that software was considered copyrightable in the United States. One result was that many vendors that had previously shipped source code as part of their products ceased doing so. Through the 1980s and into the 2000s, some software companies such as Microsoft saw open-source software as a kind of existential threat to their business, before embracing it in the 2010s.

Today, Big Tech increasingly promotes public-private collaboration on the security of open-source software, to the point that the White House held a summit on securing it in 2022, possibly prompted by the widespread exploitation of vulnerabilities in open-source software. In the course of writing this article, CISA announced the publication of its security roadmap for open-source software, underscoring both its recognition of the importance of open-source software in the technology ecosystem and its commitment to helping secure it.

Closed-source software companies also have the ability to make it someone’s job to update software as issues come up. Open source generally relies more on crowds of volunteers to jump in and fix issues as they arise, a property known as Linus’s Law: “given enough eyeballs, all bugs are shallow.” But since volunteers are hard to corral, they are also hard to press into the daily grind of timely bugfixes – the part of security that isn’t glamorous – and updates may lag. This may be changing, though: bug bounty programs offered by the likes of Google and Huntr are a way to monetize the finding and fixing of vulnerabilities in open-source software.

The reality of modern software is somewhere in between, since closed-source projects often rely heavily on open-source “scaffolding” software to handle the basics before layering their secret sauce on top. It makes sense, for example, not to build an email application from scratch just to send administrative notifications: there are well-tested open-source projects that can easily handle that.

Conversely, some more open-source-oriented companies actively contribute to open-source projects they consider important, and because they have commercial customers, that revenue allows them to employ people whose job is to fix bugs.

But this strange confluence of forces can still allow issues like the Log4j vulnerabilities, which can undermine infrastructure and perhaps even provide a backdoor, regardless of whether the full stack you use as a product is open, closed, or, most likely, something in between.

A secondary effect of open-source software is that it helps jumpstart entire communities around things like communication software that aims to be secure, since developers don’t have to build everything from scratch to get the cryptography right.

That is the approach taken by some of the most popular privacy-protecting software projects in the world, such as Proton and Signal, each with a solid reputation and a history of keeping things private and secure.

Signal’s developers invite anyone to review their code, and since personal messaging is such an important function for society, droves of security researchers focus on doing exactly that, because a vulnerability or cryptographic weakness could have far-reaching consequences.

Proton, based in Switzerland, got its start in highly secure email before expanding into a range of other services around protecting user identity – another hugely important function for society, and one where getting it wrong has serious consequences.

Lest you think that closed source has a better track record, even the most widely used closed-source software in the world can contain vulnerabilities for years, if not decades. Consider CVE-2019-0859. Discovered by Kaspersky Lab, it is a use-after-free vulnerability found in ten years’ worth of Microsoft Windows operating systems, from Windows 7 to Windows 8 to Windows 8.1 to Windows 10 on the desktop side, and Windows Server versions 2008 R2, 2012, 2012 R2, 2016 and 2019.
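
For readers unfamiliar with the bug class, here is a minimal, deliberately simplified C sketch of what a use-after-free looks like. It is a generic illustration only and bears no relation to the actual Windows code behind CVE-2019-0859:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical session object, used only to illustrate the bug class. */
struct session {
    char name[32];
    void (*on_close)(void);
};

static void say_goodbye(void) {
    puts("session closed");
}

int main(void) {
    struct session *s = malloc(sizeof *s);
    if (s == NULL)
        return 1;

    strcpy(s->name, "demo");
    s->on_close = say_goodbye;

    free(s);       /* the memory is released here... */

    /* ...but the dangling pointer is used anyway: a use-after-free.
       If an attacker can get the freed chunk reallocated with data
       they control, the indirect call below can hijack control flow. */
    s->on_close(); /* undefined behavior */

    return 0;
}
```

Real-world exploitation typically hinges on reallocating the freed memory with attacker-controlled contents before the stale pointer is used, which is part of why such bugs can lurk quietly for years before someone works out how to weaponize them.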

The devil is in the detail

The truth of the matter is that neither open-source nor closed-source software is inherently more secure than the other. What matters is the process through which the software is developed and through which fixes for vulnerabilities are implemented. The reliability of those fixes, and the speed at which they can be rolled out, are what organizations should focus on when assessing their security posture – not the type of software license.

In the end, it comes down to how responsive the host organization is to the broader security community. ESET, for example, contributes significantly to the MITRE ATT&CK® framework and provides a range of other security tools, many of them free to use or open source.

In the hybrid world of software, nearly always a mashup of open- and closed-source components, that becomes the litmus test: whether the company or organization is open to suggestions and contributions, and whether it reinvests in the security community. There’s a saying about the company you keep: make sure your software folks are in good company, and the rising security tide will lift all digital ships. And while perfect security will remain elusive, great teams with good reputations can certainly help.