Profile

Elliot Yoon
Attended Scripps Ranch High School
Lives in San Diego, CA
3,029 views

Stream

Elliot Yoon

Shared publicly
Vulnerability counting is a horrible metric (and the security industry needs to stop pretending it isn't)

A few times a year I see a report lamenting all the vulnerabilities found in some class of major software, somehow trying to equate raw numbers in public reports with effective security. I get the appeal of this approach. There's an inherent perception of rigor and objectivity when you slap a bunch of numbers on a page, and it just feels like it should mean something. But the truth is it's mostly just noise when you're looking at a single product, and outright harmful when used to compare products.

First, you should understand that if you're not seeing vulnerability reports against a piece of software, it almost always means the software is either trivial from a security perspective or (more likely) that no one is looking at its security. That may not seem like a great reality, but the fact is that any sufficiently complex piece of software is going to have security bugs. And given the tradeoffs typically made in favor of runtime performance and developer productivity, we have a good sense of how and where those security bugs are going to show up. So a lack of signal more often than not indicates a lack of effective investigation.

The next thing to appreciate is that there's wide variability in what a vulnerability report actually means and in what a software maker chooses to report. This is easier to see if you look at a large, publicly developed, open source software project like Chrome or Firefox. When you do, you'll find the lion's share of the "vulnerabilities" are internally found bugs that typically fall into the rather broad class of possible memory corruption.
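
To make that concrete, here's a toy C sketch of the kind of bug fuzzers and tools like AddressSanitizer surface constantly in large C/C++ codebases. This is my own illustration, not an actual Chrome or Firefox bug: the tooling flags it as a heap-use-after-free, i.e. possible memory corruption, long before anyone knows whether it's genuinely exploitable.

#include <stdlib.h>
#include <string.h>

struct session {
    char name[32];
};

int main(void) {
    struct session *s = malloc(sizeof *s);
    if (!s)
        return 1;
    strcpy(s->name, "live");
    free(s);                         /* the object is released here...   */
    /* ...but a stale pointer is still used afterward. Running this under
     * AddressSanitizer reports a heap-use-after-free. Whether an attacker
     * could actually groom the heap so this write lands on memory they
     * control is the expensive question that rarely gets answered before
     * the fix ships. */
    strcpy(s->name, "stale write");  /* use-after-free */
    return 0;
}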

Of course, for Chrome and Firefox the vast majority of these bugs haven't been verified as being genuinely dangerous. Some may be bad, but many are going to be very difficult (or impossible) to reliably exploit. And while it's certainly possible to work out a full exploitability analysis on every bug found, it is immensely more expedient to just fix the problem and push a timely update to users. (On Chrome we actually used to assign CVEs to all of these bugs, but it was a tremendous hassle and a net negative for our users, because the information was so prone to misinterpretation.)

The interesting corollary is that closed source software also has similarly large numbers of internal finds like these. However, the associated bug trackers aren't public, and closed source software makers rarely see a reason to disclose details of internal reports. So, if a potential security bug is found internally or through a partner, you will most likely never know. A fix will get shipped at some point, but you're almost certainly not going to see a timeline or a breakdown of vulnerabilities for anything beyond verified reports from external parties.

That of course brings us to how external reports are verified. If you've ever reported a vulnerability to e.g. Microsoft, you're probably familiar with the dance that typically ensues with MSRC (or with the many other vendors who operate similarly). You report the vulnerability and get pushback for delivering anything short of a full, working PoC (proof of concept). And even when you deliver a PoC, there's often still back and forth over how reliable an exploit would be in the wild, or how severe the impact really is. So it ends up as a bit of a negotiation, because the software maker is: 1) concerned with filtering the noise out of an often huge volume of junk reports; and 2) focused on the details it believes will best inform its consumers' patch deployment strategies. The downside is that this strategy may keep real vulnerabilities from being reported at all, because it places such a large burden on the reporter.

On the opposite end of the spectrum you have the reporting process for e.g. Chrome. The exploitability and impact assessments are very cursory, and the tendency is to just assume the worst potential impact, push a fix, and pay out a bounty. (The process is actually so permissive that I've seen Chrome list bugs as vulnerabilities and pay out bounties in cases where I was certain the bugs were not exploitable.) The upside is that a legitimate vulnerability is extremely unlikely to slip through the cracks. The downside is that the vulnerability details are less curated, so this process puts the onus on the end consumer to apply patches or accept updates more aggressively than they might otherwise.

Even accepting the wide variance in what gets counted as a vulnerability, it gets worse when you realize that there's essentially no consistency in how vulnerabilities are scored by anyone. Attempts have been made at standardization with things like CVSS (the Common Vulnerability Scoring System), but in the end the scoring systems are very subjective and open to wide interpretation. So, most big software makers have gravitated toward simpler scales that generally don't align with other software in the same class. And even if we could get the rankings to align, the whole point is really to help the consumer answer how quickly they need to patch. That answer is almost always ASAP, so why bother with complicated rankings?
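
If you're curious just how mechanical (and how subjective) the scoring is, here's a small C sketch of the CVSS v3.1 base-score arithmetic, using the metric weights from the published specification. The hard-coded vector (AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H) is just an example I picked; the formula itself is trivial, and every input to it is a human judgment call.

#include <math.h>
#include <stdio.h>

/* Round up to one decimal place. (The official spec uses a fixed-point
 * variant of this to sidestep floating-point error; plain ceil() is close
 * enough for illustration.) */
static double roundup(double x) { return ceil(x * 10.0) / 10.0; }

int main(void) {
    /* Exploitability weights for AV:N, AC:L, PR:N, UI:N (Scope Unchanged). */
    double av = 0.85, ac = 0.77, pr = 0.85, ui = 0.85;
    /* C:H, I:H, A:H each weigh 0.56 -- and deciding whether impact is
     * "High", "Low", or "None" is exactly where the subjectivity lives. */
    double c = 0.56, i = 0.56, a = 0.56;

    double iss = 1.0 - (1.0 - c) * (1.0 - i) * (1.0 - a);
    double impact = 6.42 * iss;                 /* Scope Unchanged form */
    double exploitability = 8.22 * av * ac * pr * ui;

    double score = (impact <= 0.0)
                       ? 0.0
                       : roundup(fmin(impact + exploitability, 10.0));

    printf("CVSS v3.1 base score: %.1f\n", score);  /* prints 9.8 */
    return 0;
}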

tl;dr: Vendor-provided vulnerability information covers a very broad spectrum: the quantity of vulnerabilities listed and the details included vary wildly, are heavily influenced by the vendor's own approach, and can change repeatedly over the lifetime of a given product. Simply put, it's just not possible to use that information to make qualitative statements about the security of a single product, much less to compare different products or different vendors. In the end, it's going to be an apples-to-bowling-balls comparison at best and an apples-to-interstellar-warcraft comparison at worst.

In closing... Please, in the name of everyone’s sanity, just don’t play the vulnerability counting game. It doesn't do anyone any good.

#chrome #security

People
In his circles
51 people
Places
Map of the places this user has lived
Currently
San Diego, CA
Previously
Honolulu, HI - Chesterfield, VA - La Crescenta, CA - Valencia, CA
Story
Tagline
Technology
Introduction

Korean. Likes technology. Android and Windows/Linux user. Programmer. 

Plays a lot of Pokemon, to the point of almost being competitive. Also watches anime and reads manga.

Education
  • Scripps Ranch High School
    2015
Basic Information
Gender
Male