Developing a system for assessing how seriously the software development community should take vulnerabilities is a challenge, to put it mildly. Code is written by humans and will always have flaws. The question, then, if we assume that nothing will ever be perfect, is how we best categorize components according to their risk in a way that allows us to continue working productively.

Just the Facts

While there are many different approaches one could take to this problem, each with its own valid justification, the most common method appears to be based on a quantitative model.

On the one hand, using a quantitative approach to judging the severity of a vulnerability can be useful in that it is more objective and measurable, based solely on the factors related to the vulnerability itself.

This methodology looks at what kind of damage could occur should the vulnerability be exploited, considering how widely the component, library, or project is used throughout the software industry, as well as factors such as what kind of access it could give an attacker to wreak havoc should they use it to breach their target. Factors like ease of exploitation can play a big role here in affecting the score. (For more on security, check out Cybersecurity: How New Advances Bring New Threats - And Vice Versa.)

Looked at on a macro level, the quantitative perspective considers how a vulnerability could hurt the herd, focusing less on the damage to the individual companies actually hit by an attack.

The National Vulnerability Database (NVD), perhaps the best-known database of vulnerabilities, takes this approach for both versions 2 and 3 of their Common Vulnerability Scoring System (CVSS). On their page explaining their metrics for evaluating vulnerabilities, they write of their method that:

Its quantitative model ensures repeatable accurate measurement while enabling users to see the underlying vulnerability characteristics that were used to generate the scores. Thus, CVSS is well suited as a standard measurement system for industries, organizations, and governments that need accurate and consistent vulnerability impact scores.

Based on the quantitative factors at play, the NVD is then able to come up with a severity score, both as a number on their scale – 0 through 10, with 10 being the most severe – as well as the categories LOW, MEDIUM and HIGH.
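As a rough illustration, the qualitative ratings fall into fixed bands of the numeric scale. The cut-offs below are the published CVSS v2 ranges used by the NVD (CVSS v3 adds NONE and CRITICAL bands); the function itself is just a sketch:

```python
def cvss_v2_severity(score: float) -> str:
    """Map a CVSS v2 base score (0.0-10.0) to NVD's qualitative rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score <= 3.9:
        return "LOW"
    if score <= 6.9:
        return "MEDIUM"
    return "HIGH"   # 7.0 through 10.0
```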

Accounting for Impact?

However, the NVD appears to steer clear of what we can term a more qualitative measure of a vulnerability, based on how impactful a certain exploit has been in causing damage. To be fair, they incorporate impact insofar as they measure the impact of the vulnerability on the system, looking at the factors of confidentiality, integrity and availability. These are all important elements to examine – like the more easily measurable access vector, access complexity, and authentication – but they fall short of conveying the real-world impact when a vulnerability causes an organization real losses.
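To make this concrete, the six base metrics just named are exactly what feed the CVSS v2 base-score equations. The weights and formulas below come from the published v2 specification; the function name and dictionary layout are my own sketch. The example vector (network access, low complexity, no authentication, complete loss of confidentiality, integrity and availability) is the one NVD recorded for the Struts vulnerability discussed below:

```python
# CVSS v2 metric weights, as defined in the v2 specification.
AV = {"local": 0.395, "adjacent": 0.646, "network": 1.0}   # access vector
AC = {"high": 0.35, "medium": 0.61, "low": 0.71}           # access complexity
AU = {"multiple": 0.45, "single": 0.56, "none": 0.704}     # authentication
CIA = {"none": 0.0, "partial": 0.275, "complete": 0.660}   # C/I/A impact

def cvss_v2_base(av: str, ac: str, au: str, c: str, i: str, a: str) -> float:
    """Compute a CVSS v2 base score from the six base metrics."""
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

# AV:N/AC:L/Au:N/C:C/I:C/A:C -- the vector NVD assigned to CVE-2017-5638
print(cvss_v2_base("network", "low", "none", "complete", "complete", "complete"))
# prints 10.0
```

Note that every input is a property of the vulnerability itself; nothing in the equations accounts for what an exploit actually ends up costing a victim.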

Take, for example, the Equifax breach that exposed the personally identifiable information of some 145 million people, including their driver's license details, social security numbers and other bits that could be used by unscrupulous characters to carry out massive fraud operations.

It was a vulnerability (CVE-2017-5638) in the Apache Struts 2 project, which Equifax used in their web app, that allowed the attackers to walk in the front door and eventually make it out with their arms full of juicy personal info.

While the NVD rightly gave it a severity score of 10 and a rating of HIGH, that decision was based on their quantitative assessment of its potential damage and was not affected by the extensive damage that occurred later when the Equifax breach became public.

This is not an oversight by the NVD, but a part of their stated policy.

The NVD provides CVSS "base scores" which represent the innate characteristics of each vulnerability. We do not currently provide "temporal scores" (metrics that change over time due to events external to the vulnerability) or "environmental scores" (scores customized to reflect the impact of the vulnerability on your organization).

For decision-makers, the quantitative measuring system should matter less, since it looks at the chances that a vulnerability will spread harm across the industry. If you are the CSO of a bank, you should be concerned with the qualitative impact an exploit can have if it is used to make off with your customers’ data, or worse, their money. (Learn about different types of vulnerabilities in The 5 Scariest Threats In Tech.)

Time to Change the System?

So should the vulnerability in Apache Struts 2 that was used in the Equifax case receive a higher ranking in light of how extensive the damage turned out to be, or would making that shift prove far too subjective for a system like the NVD to keep up with?

We grant that gathering the data needed to produce an "environmental score" or "temporal score" as described by the NVD would be exceedingly difficult, opening the team behind the free CVSS up to unending criticism and creating a ton of work for the NVD and others to update their databases as new information comes out.

There is, of course, the question of how such a score would be compiled, as very few organizations are likely to offer up the necessary data on the impact of a breach unless required to by a disclosure law. We have seen from the case of Uber that companies are willing to pay out quickly in hopes of keeping the information surrounding a breach from reaching the press, lest they face a public backlash.

Perhaps what is needed is a new system that could incorporate the good work of the vulnerability databases, adding its own supplemental score when information about real-world impact becomes available.

Why introduce this extra layer of scoring when the previous one appears to have done its job well enough all these years?

Frankly, it comes down to accountability for organizations to take responsibility for their applications. In an ideal world, everyone would check the scores of the components that they use in their products before adding them to their inventory, receive alerts when new vulnerabilities are discovered in projects previously thought to be safe, and implement the necessary patches all on their own.

Perhaps if there was a list that showed how devastating some of these vulnerabilities could be for an organization, then organizations might feel more pressure not to get caught with risky components. At the very least, they might take steps to take a real inventory of which open-source libraries they already have.

In the aftermath of the Equifax fiasco, more than one C-level executive was likely scrambling to make sure that their products did not contain the vulnerable version of Struts. It is unfortunate that it took an incident of this magnitude to push the industry to take open-source security seriously.

Hopefully the lesson that vulnerabilities in the open-source components of your applications can have very real consequences will have an impact on how decision-makers prioritize security, choosing the right tools to keep their products and their customers’ data safe.