The Battle of the Bugs


In the nascent years of the computer industry, some six decades ago, there were only a handful of machines worldwide; they filled huge rooms and consumed enormous amounts of power. Software was a brand-new discipline, and programming those machines took highly specialized training and patience.

Today, less than one average lifetime later, computers many orders of magnitude more capable than those early monsters are everywhere. Even as most people on the planet own or have access to at least one computer, growth continues with the Internet of Things (IoT), or perhaps more appropriately, the Internet of Everything. And the humans who develop software number in the tens of millions rather than a few handfuls.

Being human, these millions of people who write software for billions of computers make mistakes, lots of them. Fred Brooks, in his well-known and classic text on the challenges of software development[1], says this about programmers:

First, one must perform perfectly. The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn’t work. Human beings are not accustomed to being perfect, and few areas of human activity demand it. Adjusting to the requirement for perfection is, I think, the most difficult part of learning to program.

The mistakes show up as bugs in the software, and often the repercussions are insignificant. But sometimes the errors can be catastrophic. For example, the huge power failure in the North American Northeast on August 14th, 2003 was caused by a programming error. And the Heartbleed security flaw of 2014 was estimated to have cost around half a billion dollars to correct.

The Scope of the Problem

With millions of developers cranking out tens of millions of lines of code annually, the number of bugs being produced is growing too. Coralogix[2], the log analytics company, has studied the issue of developer productivity and makes the following claims:

[Here are] 5 amazing facts on exactly how much time is spent on debugging and code fixing in the software industry:

  1. On average, a developer creates 70 bugs per 1000 lines of code (!)
  2. 15 bugs per 1,000 lines of code find their way to the customers
  3. Fixing a bug takes 30 times longer than writing a line of code
  4. 75% of a developer’s time is spent on debugging (1500 hours a year!)
  5. In the US alone, $113B is spent annually on identifying & fixing product defects

Many of these bugs are logic bugs associated with the intended behavior of the software. A large number, though, arise from common coding errors. And some of these coding errors can have serious security consequences, such as the Heartbleed bug noted earlier, which was shown to have been in the code for two years before being detected.
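Heartbleed belonged to exactly this class of common coding error: a length supplied by the remote peer was trusted without being checked against the size of the actual data, so the server copied, and leaked, adjacent memory. The sketch below is a simplified, hypothetical illustration of that pattern, not the actual OpenSSL code; the function names and sizes are invented for the example.

```c
#include <stddef.h>
#include <string.h>

/* UNSAFE sketch of a Heartbleed-style flaw: the caller-supplied
 * claimed_len is trusted instead of the real payload size. */
size_t echo_payload_unsafe(const char *payload, size_t actual_len,
                           size_t claimed_len, char *out) {
    (void)actual_len;             /* BUG: the real size is ignored */
    /* If claimed_len > actual_len this reads past the payload,
     * leaking whatever memory happens to sit next to it. */
    memcpy(out, payload, claimed_len);
    return claimed_len;
}

/* Fixed version: validate the claimed length before copying. */
size_t echo_payload_safe(const char *payload, size_t actual_len,
                         size_t claimed_len, char *out) {
    if (claimed_len > actual_len)
        return 0;                 /* reject the malformed request */
    memcpy(out, payload, claimed_len);
    return claimed_len;
}
```

The fix is one comparison, which is precisely why such bugs are easy to write and hard to spot in review: nothing about the unsafe version looks unusual until an attacker supplies a length larger than the data.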

The US National Institute of Standards and Technology (NIST) maintains an ongoing repository of important security problems called the National Vulnerability Database – NVD[3]. From their Website:

The NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. The NVD includes databases of security checklist references, security-related software flaws, misconfigurations, product names, and impact metrics.

Squashing Bugs

Of course, developers are not intentionally creating these bugs. Developers enjoy creating new features and behavior and find debugging tedious, if not onerous, though it is an important part of the job. There are four general techniques for rooting out bugs and squashing them:

  1. Dynamic Application Security Testing (DAST)
    This is the approach most developers are familiar with. Developers create test cases, consisting of test data, that force their programs down every execution path. The quality of the result is only as good as the thoroughness of the developer's unit tests. Independent testers typically add further dynamic testing to check both the operation of each function and its behavior in the presence of other developers' code.
  2. Interactive Application Security Testing (IAST)
    One way to augment the value of testing a running program is IAST. It uses a separate run-time component, or agent, that executes within the running program's environment. Operating "inside" the program lets it collect operational information about control and data flows, allowing certain classes of security vulnerabilities to be detected.
  3. Runtime Application Self-Protection (RASP)
    RASP is a relatively recent advance: runtime components that execute alongside the software being tested. Like IAST, it runs within the program's runtime environment. Unlike IAST, it is less about testing the program's correct operation and safety and more about performing ongoing sanity checks and defending the program against live attacks.
  4. Static Application Security Testing (SAST)
    Gartner defines SAST[4] as:

Static application security testing (SAST) is a set of technologies designed to analyze application source code, byte code and binaries for coding and design conditions that are indicative of security vulnerabilities. SAST solutions analyze an application from the “inside out” in a non-running state.
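The dynamic-testing idea behind item 1 above can be sketched in a few lines: run the code with inputs chosen to drive every execution path, then check the observed behavior. The `clamp` function and its test values below are hypothetical, picked only so each input exercises a different branch.

```c
/* A function with three execution paths. Thorough dynamic testing
 * means supplying at least one input that drives each of them. */
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;   /* path 1: below range */
    if (value > hi) return hi;   /* path 2: above range */
    return value;                /* path 3: in range    */
}
```

A unit test then asserts on all three paths, for example `clamp(5, 0, 10) == 5`, `clamp(-3, 0, 10) == 0`, and `clamp(42, 0, 10) == 10`. The catch, as noted above, is that the testing is only as good as the developer's choice of inputs: a path never driven is a path never tested.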

SAST has grown in importance since its broad introduction in the late 1990s. Because it analyzes code without needing to run it, SAST can detect problems that are not easily discovered by the various runtime testing methods. It also has the benefit of detecting problems in code as it is being developed, even before the software is ready for dynamic testing, shortening the find/fix cycle.
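As a hypothetical illustration of what "analyzing code in a non-running state" buys you: the defect below is a dereference that happens before the NULL check that was meant to guard it. A dynamic test only catches this if it happens to pass in a NULL pointer, but a static analyzer flags it from the control flow alone, because the check can never protect a dereference that precedes it.

```c
#include <stddef.h>

/* BUGGY: the dereference on the first line runs before the NULL
 * check, so passing list == NULL crashes before the guard executes.
 * A SAST tool reports this ordering without ever running the code. */
int head_value_buggy(const int *list, size_t len) {
    int first = list[0];           /* deref happens here...          */
    if (list == NULL || len == 0)  /* ...but the check comes too late */
        return -1;
    return first;
}

/* FIXED: the guard precedes any dereference. */
int head_value_fixed(const int *list, size_t len) {
    if (list == NULL || len == 0)
        return -1;
    return list[0];
}
```

This is also why SAST pays off early in development: the pattern is detectable the moment the function is written, long before any test harness exists.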

Of the various testing methods, SAST offers a particularly strong return in developer productivity, which may account for the fact that almost twice as much is spent on SAST tools as on DAST tools[5].

The benefits of using SAST in addition to other forms of testing will be addressed in a subsequent blog entry.


[1] The Mythical Man-Month, Frederick Brooks, p. 8
[2] Coralogix Blog: “This is what your developers are doing 75% of the time, and this is the cost you pay”;
https://coralogix.com/log-analytics-blog/this-is-what-your-developers-are-doing-75-of-the-time-and-this-is-the-cost-you-pay/

[3] https://nvd.nist.gov
[4] Gartner Glossary: https://www.gartner.com/en/information-technology/glossary/static-application-security-testing-sast
[5] MarketsAndMarkets – Security Testing Market; Global Forecast to 2021
