


Back in August I found that many of the Common Vulnerability Scoring System (CVSS) scores that the National Vulnerability Database (NVD) assigned to vulnerabilities affecting open source software were incorrect.

Since then I've been sending in corrections on a monthly basis, taking into account the worst possible score across all affected platforms (and not how Red Hat products were affected specifically).

For the five months May to September 2007 I looked at 178 vulnerabilities (across all Red Hat products and services). Only 80 were accurate. Corrections were submitted to NVD and they fixed the incorrect CVSS scores on the remaining 98 vulnerabilities.

So, before the corrections, there were 65 issues rated "High" out of 178. After the corrections there are actually only 17 rated "High".

Fortunately the number of corrections needed each month seems to be decreasing, but we'll continue to send in corrections every month. Even with the corrections, the severity rating for a given vulnerability may well vary with the version each vendor ships, so you need to be careful if you are basing your risk assessments solely on third-party severity ratings.


The National Vulnerability Database (NVD) assigns a severity rating to every vulnerability: "High", "Medium", or "Low". The rating is determined by ranges of CVSS (Common Vulnerability Scoring System) v2 scores. I've not been a big fan of CVSS: I don't think it works particularly well when applied to software that is shipped by multiple vendors, or to open source projects and libraries whose authors can't know all the possible use cases of their software.
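For concreteness, here's a minimal sketch in Python of that score-to-rating mapping, assuming the ranges NVD documents for CVSS v2 (Low up to 3.9, Medium from 4.0 to 6.9, High from 7.0 up):

    # Map a CVSS v2 base score to an NVD severity band, assuming
    # NVD's published v2 ranges.
    def nvd_severity(base_score):
        if not 0.0 <= base_score <= 10.0:
            raise ValueError("CVSS v2 base scores run from 0.0 to 10.0")
        if base_score >= 7.0:
            return "High"
        if base_score >= 4.0:
            return "Medium"
        return "Low"

    print(nvd_severity(6.8))  # "Medium" -- one decimal point below the High band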

Even though I'm not a fan, NVD publishes a CVSS score for every issue, security companies use those scores in the vulnerability feeds they provide to customers, and people use them for metrics. So it's important that these scores are accurate.

I decided to take a look at how accurate the CVSS scores were, so for every vulnerability we fixed in any Red Hat product in June 2007 I examined the CVSS score given by NVD. For each one I worked out whether the CVSS base metrics were correct, and where they were not I submitted a correction back to NVD. The analysis was based on each vulnerability's worst-case threat across all platforms (I didn't adjust the CVSS scores for how the issues affected Red Hat products specifically).
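Checking the base metrics amounts to recomputing the score they should produce. Here's a sketch of the CVSS v2 base equation in Python; the metric weights and the rounding rule are taken from the v2 specification, and the sample vector is just an illustration:

    # Recompute a CVSS v2 base score from a base vector such as
    # "AV:N/AC:M/Au:N/C:P/I:P/A:P"; weights are from the v2 specification.
    AV  = {"L": 0.395, "A": 0.646, "N": 1.0}    # Access Vector
    AC  = {"H": 0.35,  "M": 0.61,  "L": 0.71}   # Access Complexity
    AU  = {"M": 0.45,  "S": 0.56,  "N": 0.704}  # Authentication
    CIA = {"N": 0.0,   "P": 0.275, "C": 0.660}  # C/I/A impact weights

    def base_score(vector):
        m = dict(part.split(":") for part in vector.split("/"))
        impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
        exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
        f_impact = 0.0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

    # Remote, medium complexity, no authentication, partial C/I/A impact:
    print(base_score("AV:N/AC:M/Au:N/C:P/I:P/A:P"))  # 6.8

If the recomputed score disagrees with the published one, or the metrics don't match how the flaw actually behaves, that's a candidate for a correction.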

There were 39 vulnerabilities in total, and unfortunately only 8 of the scores were accurate. I submitted corrections to NVD and they fixed the CVSS scores on the remaining 31 vulnerabilities.

20 vulnerabilities ended up moving down in ranking, 6 vulnerabilities moved up, and 5 stayed the same (although the CVSS score changed).

Before the corrections there were 14 issues rated "High" out of 39; after the corrections there are just 3 rated "High".

Those corrections are now live in the NVD, and I really appreciate how quick the folks behind NVD were at checking and making the changes. I've submitted corrections to them for a couple more months too, and I'll write about those when they're complete. Unfortunately it does take a lot of time to investigate each issue and prepare the corrections, so that will limit how far back into 2007 we can correct.


The National Vulnerability Database provides a public severity rating for every CVE-named vulnerability, "Low", "Medium", or "High", which they generate automatically from the CVSS score their analysts calculate for each issue. I've been interested for some time to see how well those map to the severity ratings that Red Hat gives to issues. We use the same ratings and methodology as Microsoft and others, from "Critical" for flaws that can be exploited remotely and automatically, through "Important" and "Moderate", down to "Low".

Given a thundery Sunday afternoon, I took the last 12 months of all possible vulnerabilities affecting Red Hat Enterprise Linux 4 (from 126 advisories across all components) from my metrics page and compared them to NVD using their published XML data files. The result broke down like this:

    Red Hat:  13% Critical   24% Important   39% Moderate   24% Low
    NVD:      30% High       20% Medium      50% Low

So that looked okay on the surface; but the breakdown above might imply that all the issues Red Hat rated as Critical were mapped by NVD to High. That's not actually the case, and when you look at the full breakdown you get this result (in numbers of vulnerabilities):

                   Critical   Important   Moderate   Low    (Red Hat rating)
    NVD: High         23         24          35        8
    NVD: Medium        9         18          22       12
    NVD: Low           7         32          62       51

That shows nearly half of the issues that NVD rated as High actually affected Red Hat with only Moderate or Low severity. Given our policy is to fix the things that are Critical and Important the fastest (and we have a pretty impressive record for fixing critical issues), it's no wonder that recent vulnerability studies that use the NVD mapping when analysing Red Hat vulnerabilities contain some significant data errors.
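The cross-tabulation itself is straightforward once the two sets of ratings have been extracted. Here's a sketch, assuming the NVD ratings have already been pulled out of the XML feed and the Red Hat ratings out of the advisory data; the CVE entries shown are hypothetical placeholders:

    from collections import Counter

    # Count (NVD rating, Red Hat rating) pairs for CVEs present in both
    # data sets. The sample entries below are hypothetical placeholders.
    nvd_rating    = {"CVE-2007-0001": "High", "CVE-2007-0002": "Low"}
    redhat_rating = {"CVE-2007-0001": "Moderate", "CVE-2007-0002": "Low"}

    table = Counter((nvd_rating[cve], redhat_rating[cve])
                    for cve in nvd_rating if cve in redhat_rating)

    for nvd_sev in ("High", "Medium", "Low"):
        row = [table[(nvd_sev, rh)]
               for rh in ("Critical", "Important", "Moderate", "Low")]
        print("NVD: %-6s %s" % (nvd_sev, row))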

I wasn't actually surprised that there are so many differences: my hypothesis is that many of the errors are due to the nature of how vulnerabilities affect open source software. Take for example the Apache HTTP server. Lots of companies ship Apache in their products, but they all ship different versions with different defaults on different operating systems for different architectures, compiled with different compilers using different compiler options. Many Apache vulnerabilities over the years have affected different platforms in significantly different ways. We've seen, for example, an Apache vulnerability that led to arbitrary code execution on older FreeBSD, caused a denial of service on Windows, but was unexploitable on Linux. Yet it has a single CVE identifier.

So if you're using a version of the Apache web server you got with your Red Hat Enterprise Linux distribution then you need to rely on Red Hat to tell you how the issue affects the version they gave you -- in the same way you rely on them to give you an update to correct the issue.

I did also spot a few instances where the CVSS score for a given vulnerability was not correctly coded. CVSS version 2 was released last week, and once NVD is based on the new version I'll redo this analysis and spend more time submitting corrections for any obvious mistakes.

But in summary: for multi-vendor software the severity rating for a given vulnerability may very well be different for each vendor's version. This is a level of detail that vulnerability databases such as NVD don't currently capture, so you need to be careful if you are relying on the accuracy of third-party severity ratings.

