"Given enough eyeballs, all bugs are shallow"

(This post was originally a comment on a post by +Alex Ruiz and is reposted here by request)

In the wake of Apple's SSL "goto fail" bug followed quickly by a defect in GnuTLS, we are now dealing with #heartbleed , arguably the worst security vulnerability in the history of the Internet. In each case, the defect was in "open source" code for years before being discovered.

As news spreads and understanding grows of just how bad this vulnerability is, many are trying to reconcile the cognitive dissonance with old, largely unchallenged adages like "open source is more secure" and "given enough eyeballs, all bugs are shallow," and are questioning whether these statements are really true.

So, is open source more secure, simply because it is open?

We are at a point in which we can make some observations, but I would propose that we are far from being able to draw any reasonably supportable conclusions about it. To be clear, I am not arguing that closed is more secure or open is more secure. I am pointing out details and observations that would be explored in an actual rigorous pursuit of answering such a question. If I am arguing anything, I am arguing that we don't have enough data to answer that question yet.

The blue team today is everywhere, so I'll red team this (note that I can argue against many of the following points as well; see http://en.wikipedia.org/wiki/Red_team):

Yes, open source allows anyone to do a security audit. Including the NSA, including organized crime syndicates, including disorganized criminals, including hostile governments. If these organizations are more zealous in their participation, the net effect is less safety. When a vulnerability is introduced into code, it exists as a potential exploit. It becomes realized when someone identifies it. Which someone will depend on motivation, access, and skill.

Yes, open source allows anyone to do a security audit, but can they (do they even have the skill?), and do they? GnuTLS and OpenSSL give a strong hint of an overall diffusion of responsibility problem in the model. Diffusion of responsibility (the bystander effect) is a well-studied phenomenon, and it appears to be a potential root cause here. The problem is that we have had two security defects in a row that sat in the open for multiple years at a time. This should raise some questions about the model, and it should be cause for reflection.

Yes, closed source is more difficult and time consuming to audit. However, this is true not just for white hats, but for black hats as well. For example, the various groups jailbreaking the iPhone relied (black hat to Apple, white hat to others) on the fact that the kernel source was open. When Apple started holding it back, that slowed them considerably. Likewise, they started holding back disclosures to use for future jailbreaks. Human motivation and development approach are inextricably intertwined in answering this question.

The more secure methodology, then, is determined by whether white hats are more zealous and aggressive than black hats. Unfortunately, "not my job" is showing up a lot in this topic as well, which would suggest that black hat groups are more motivated and focused, because they are taking the initiative. The point here is that the code "being open" makes it neither more nor less secure. It is the human behavior around it that results in increased or decreased security. In a population of motivated black hats and unmotivated white hats, it's reasonable to hypothesize that openness backfires. For example, if it came out that the NSA had been quietly exploiting such a vulnerability for years, that would support such a hypothesis.

"Given enough eyes, all bugs are shallow" doesn't even stand up to logical analysis, much less experiment. Infinite "eyes" with no knowledge or understanding of the 'C' language will never find many defects, much less obscure security vulnerabilities. So we can throw that out and say, "given enough highly skilled eyes that are given enough time and interested in doing the analysis, all bugs are shallow" (this is further debatable, as we can look at a correlation between skill level and defect recognition. That is, for any given level of skill, defects beyond that skill level will never be recognized.)

In closed approaches, highly skilled and specialized eyeballs are more often hired (which means they are incentivized to do the boring and tedious job) and accountability is clear (the owner of the 'closed' system is accountable, compared to a 'no owners' model). Who should have caught the OpenSSL defect? Who is, at the end of the day, accountable for OpenSSL? The 47 independent contributors to it? Red Hat? Google? The Linux Foundation? Everyone? No one? The advantage of closed software in this regard is that accountability is clear, and the owner generally has the financing to hire those highly skilled security experts, who do not generally work for free. (An extension of this would be the proprietary company that releases open source but takes accountability and ownership of it. Another extension: the company that profits from open source; it did not write the code, but it benefits from it.)

"Open source is more secure" may be true, but it's likely to be followed with lots of qualifiers like: if said source has a clear steward or 'owner' and if that steward/owner can attract and retain skilled security talent interested in vetting that code and if they outnumber and outperform the skilled security talent the bad guys are hiring.

In the end, code is more secure when the right (white hat) people with the right (security analysis) skills perform audits before the wrong (black hat) people with the right skills do. There are many variables and nth order effects in all the arrangements of that.
7 comments
 
Another example, I think, is documentation for open source projects. Sometimes it is horrible, and the only reason I can think of is that for a lot of people it's just not fun to do, so it either doesn't happen or doesn't happen well.
 
Of course we lack data about what is more secure, and it's quite possible that we'll never get this data in proper form.

But one thing is sure: being a "bystander" with closed source is certain; being a "bystander" with open source software is a choice.

Cases of "being accountable" for software are quite rare exceptions. I don't know who is accountable for Windows, for sure it's not Microsoft, they quite clearly told me this in license agreement :).

"Security audit" problem - or is it possible, that big commercial organizations are "bystanders"; or more specifically how it is possible, that big companies like Yahoo are taking some software and using it without checking? Is it possible, that it's like this because it's cheaper than put some people and attention to one of the most important areas of their business?
But does it mean when they would buy closed source software they would do a full security scanning?  I would be almost sure it of course would be the case :)...

...but unfortunately we have another "big & rich organization security problems" case, and this one is about "closed," "commercial," and supposedly "accountable" software, and about dead people:

http://www.sddt.com/Commentary/article.cfm?SourceCode=20131104tbc&Commentary_ID=140&_t=Software+bugs+found+to+be+cause+of+Toyota+acceleration+death#.U0Wiz_l_s08

http://www.safetyresearch.net/2013/11/07/toyota-unintended-acceleration-and-the-big-bowl-of-spaghetti-code/

In the end, Toyota paid:
http://articles.chicagotribune.com/2014-03-19/classified/chi-us-toyota-settlemen-20140319_1_u-s-justice-department-acceleration-responsive-and-customer-focused-organization
... probably only because they didn't give customers a proper license agreement for their cars :). It wasn't "the mechanical engineering" that failed; third-party, commercial, closed source software did. And BTW, I still have no idea how to measure someone's death in money.

The Apple jailbreak example: yes, the lack of public jailbreaks for the closed source kernel is surely evidence that "real black hats" didn't break it :).

Summary (or tl;dr): software by definition can't be "accountable" itself, but the process around it can be more or less accountable; Open Source is not a "holy grail," nor even the only good way; it's just a way of making things that offers a real possibility of greater accountability; but, like every kind of software, it can't guarantee anything.
 
In short: Security by Obscurity is known not to work.

Yes, if you pay a group of people to audit and test your software, they will do a better job than if you ask random people to do it for free, and while this effect is related to Open Source, it does not define it.

You don't get Open Source, it seems. If you need an incredibly secure infrastructure because your business relies on it, you are required to audit it. You would do the same with a closed source solution. All these commercial servers out there relying on OpenSSL should have contributed to the product they are using by doing their part. Open Source is not about "using what illiterate neck-beards do for free because nobody would pay them for their lacking C skills" but about "building upon what others built and passing it on."
Maxx D
 
+Wojciech Mardyła Knowledge is a journey; we'll never have enough data. My criticism is of the idea that code being "open" automatically makes it more secure or, in some claims, simply "secure" (which we know is impossible). Michael Shermer (author of Why People Believe Weird Things) said, "...we need to teach that science is not a database of unconnected factoids but a set of methods designed to describe and interpret phenomena, past or present, aimed at building a testable body of knowledge open to rejection or confirmation."

Security is mostly boring and tedious with small elements of satisfaction along the way. What results in an overall "secure" system (as traded against almost every other quality) in practice is the combination of many factors, and the fact that the code is open or closed only has a little weight in the overall calculus. On the other hand, a line by line review by a team of highly qualified security analysts has a very significant weight. As I said in the post, I can also put a skewer through a claim that "closed source makes software more secure/safer/more reliable" as well. There's another rabbit hole called "monoculture" as well that we haven't even touched on. ;)

When I say "accountable" let me clarify that I mean "moral accountability" as opposed to "legal liability" Yes, everyone (GPL, BSD, Apache, proprietary) legally disclaims liability but, think of it this way: If heatbleed were found in WinSSL, there would be very little confusion as to where to point the finger. Lawsuits notwithstanding, Microsoft would be held accountable in the public eye and would suffer loss of marketshare for it. How do I know? It happened a number of times in the early 2000s and led to an ~80% marketshare of Linux powering, basically, the entire world wide web. It is very difficult to recover from bad press.

Security is hard won and requires getting a whole lot of things right. To people outside the tech chamber, they see and hear "open source makes everything magically secure" and that just isn't true. When it comes to security, we should all be very humble. Claiming security superiority is just waving a red flag at a raging bull.

Above coding and business practices, beyond any specific qualities, I am concerned with scientific human progress in computing technology: how we respond to claims of what we should do and why. No claim should be off limits if it doesn't make sense. This just happens to be the topic of the day, but consider:

"open source is just a bunch of hobbiests"
"macs are only good for graphics"
"goto is evil"
"java is slow"
"firewalls protect you from viruses"
"antivirus scanners work"

Science is good for computing and we should endeavor to have more of it or, failing that, more skeptical analysis of what we do.
 
+Maxx Daymon
You know I bet at some point a kind of AI can be used to test for security problems in software and find problems not possible for a single person or group of people to think of or find.
Maxx D
 
+Philipp Raich "Security by obscurity" is a term referring to Kerckhoff's Principle (or Shanon's Maxim) and, while considered well established (true), there are nuances and it can be misunderstood or over-applied.

Specifically, his design principles were around cryptographic system design though they translate to security systems in general.

An English translation of this principle is, "A cryptosystem should be secure even if everything about the system, except the key, is public knowledge."

He came up with these principles in 1883, and many cryptographic systems have been designed following his principle since, but many were never disclosed and most were built in secret. How does that make sense?

Steven M. Bellovin is a Professor of Computer Science at Columbia University. He was previously a fellow at AT&T Labs, and he co-invented the Encrypted Key Exchange (EKE) password-authenticated key agreement methods with Michael Merritt. He said this:

"The subject of security through obscurity comes up frequently. I think a lot of the debate happens because people misunderstand the issue."

"It helps, I think, to go back to Kerckhoffs's second principle, translated as "The system must not require secrecy and can be stolen by the enemy without causing trouble," per http://petitcolas.net/fabien/kerckhoffs/). Kerckhoffs said neither "publish everything" nor "keep everything secret"; rather, he said that the system should still be secure even if the enemy has a copy."

http://catless.ncl.ac.uk/Risks/25.71.html#subj19

In other words, closed source is no more a violation of "security by obscurity" than open source is an "information disclosure" vulnerability. Kerckhoffs's principle is about not relying on secrecy; it's about how you design your system. If your system's security posture is not changed because someone sees the design, it complies with the principle.

Separately, most security systems aren't "secure" and we know they aren't. When we design them, we are told to trade off security against performance, usability, accessibility, and other 'bilities.

For example, home security systems aren't security systems. They are home subterfuge systems. Any attacker with a full understanding of a home security system can easily get around it, but they have to consider the risks. Maybe the information is incomplete, maybe they will make a mistake. Maybe this home next door without a system at all will just be easier.

They want to change their probabilities, and that's all I was getting at with closed source. Closedness is a hurdle for users and attackers alike; openness is a benefit for users and attackers alike. You can't make a system closed for attackers and open for users; they are the same group.
Maxx D
 
+Tony Bonavera There is already progress along those lines in terms of watching code execute and analyzing the machine code output after transformation by compilers and optimizers. Source code doesn't run on computers, machine code does. Some vulnerabilities are derived from source, but others can come from translation/compilation.

Static source analyzers are already fairly powerful for certain classes of vulnerability. It's definitely an arms race.