Bugs deliberately placed in software to make code more secure

Can cramming code with bugs make it more secure? Some think so

Unbeknownst to exploit writers, the seemingly mouth-watering bugs would be bogus and non-exploitable

Researchers at New York University have come up with an unconventional defensive technique that could ultimately deter attackers from even trying to write exploits targeting software vulnerabilities.

In a departure from the usual ways of addressing bugs, which normally involve eliminating known vulnerabilities or adding mitigations to make their exploitation less practical, a team of three computer-science researchers proposes a different tack: stuffing code with vulnerabilities that appear exploitable to flaw-finding scanners but are, in reality, anything but.

Their tactic – detailed in a paper called “Chaff Bugs: Deterring Attackers by Making Software Buggier” – pivots around a typical attacker workflow in exploit development: find vulnerabilities, ‘triage’ them to determine exploitability, develop working exploits, and deploy them to their targets.

In this case, however, the flaws are mere decoys, having been placed in the software deliberately, automatically, and in large numbers by the application’s developers. Dubbed “chaff bugs”, such would-be vulnerabilities would actually be non-exploitable and would only be intended to get black hats bogged down in futile efforts to come up with exploits.

“Our prototype, which is already capable of creating several kinds of non-exploitable bug and injecting them in the thousands into large, real-world software, represents a new type of deceptive defense that wastes skilled attackers’ most valuable resource: time,” wrote the researchers.

They took their strategy for a test drive, deploying it against the nginx web server and the libFLAC encoder/decoder library and focusing on two commonly exploited types of flaws: stack-buffer overflows and heap overflows. They found that the software’s functionality isn’t harmed, and demonstrated that the faux bugs appear exploitable to current “triage tools”.

“By carefully constraining the conditions under which these bugs manifest and the effects they have on the program, we can ensure that chaff bugs are non-exploitable and will only, at worst, crash the program,” they stated. “Although in some cases bugs that cause crashes constitute denial of service and should therefore be considered exploitable, there are large classes of software for which crashes on malicious inputs do not affect the overall reliability of the service and will never be seen by honest users.”

Can it work?

On the flip side, the practicability of the technique may well be open to question, and the researchers themselves were quick to highlight some of its potential pitfalls.

“The primary limitation of our current work is that we have not yet attempted to make our bugs indistinguishable from real bugs,” they state. In other words, one worry is that attackers or their flaw-hunting tools could eventually learn to identify the bogus bugs. Even so, the academics believe that the phony bugs can be made indistinguishable from naturally occurring ones and hope to tackle this aspect of the problem in future work.

Additionally, open-source software is out of the question: the researchers state that “we assume that the attacker has access to a compiled (binary) version of the program but not its source code”.

Other limitations include ensuring that the chaff bugs are indeed harmless and remain so as the code later changes. In addition, the paper admits that software developers would probably shy away from working with source code that is riddled with extra bugs. “Hence we see our system as useful primarily as an extra stage in the build process, adding non-exploitable bugs,” wrote the researchers.