A personal story from a very white hat: What if Weev had just read one record and done “responsible disclosure”?
-------------------
One thing I’m hearing, as Andrew Auernheimer was yesterday sentenced to 3.5 years in prison for scraping and publishing private data incorrectly exposed via a public API, is that if he had only downloaded a single record and done “responsible disclosure”, everything would have been OK. That might be true, but it might not be. I’m as squeaky-clean a white hat as they come, but here’s a story for some context:
A little over a year ago, when the BEAST attack on TLS was released, I did some research on why TLS 1.1 (immune to the attack) wasn’t more widely deployed, nearly ten years after it had become a standard. It turns out that some sites are “intolerant”: if a client reports that it supports TLS 1.1, the server will fail the connection rather than negotiate down to a version it does support. I figured that, surely, the number and importance of such sites must be small relative to the concrete threats now facing TLS 1.0, and asked Twitter for references to some specific sites showing this intolerance.
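The kind of check involved is simple enough to sketch: attempt one handshake per TLS version and see whether the server completes it or fails the connection outright. This is a minimal illustration, not the tool I used; the hostname is a placeholder you would supply, and note that modern OpenSSL builds refuse TLS 1.0/1.1 on the client side, so failures on the old versions may reflect local policy rather than server intolerance.

```python
import socket
import ssl

# Versions to probe, oldest first. On current systems the pre-1.2
# entries will usually fail locally (client-side policy), which is
# itself a sign of how far deployment has moved since then.
TLS_VERSIONS = [
    ("TLSv1.0", ssl.TLSVersion.TLSv1),
    ("TLSv1.1", ssl.TLSVersion.TLSv1_1),
    ("TLSv1.2", ssl.TLSVersion.TLSv1_2),
]

def probe_tls_versions(hostname, port=443, timeout=5):
    """Attempt one handshake per TLS version; a version-intolerant
    server fails the connection instead of negotiating a version it
    does support."""
    results = {}
    for label, version in TLS_VERSIONS:
        try:
            ctx = ssl.create_default_context()
            # Pin both ends of the allowed range so the ClientHello
            # offers exactly this version.
            ctx.minimum_version = version
            ctx.maximum_version = version
            with socket.create_connection((hostname, port),
                                          timeout=timeout) as raw:
                with ctx.wrap_socket(raw, server_hostname=hostname) as tls:
                    results[label] = tls.version()  # protocol negotiated
        except (ssl.SSLError, OSError, ValueError):
            results[label] = None  # handshake refused or version unsupported
    return results
```

A tolerant-but-old server would show a successful TLS 1.0 handshake and clean failures above it; an intolerant one fails as soon as the client advertises a version the server doesn’t recognize.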
One contact privately gave me the name of a major site for a large US government agency. I reached out to their security team in an email, informing them of the issue and the context (BEAST, and helping the entire Internet move to a more secure set of protocols), offering to help them diagnose and repair it, and even to work with their vendor so other affected customers could be fixed as well.
The response? Based solely on my email, the head of that agency reported me to the Department of Homeland Security and started an investigation where I was accused of performing “unauthorized testing” on and possibly attacking their servers. The “testing” I’d done? I typed the name of the site into my browser once and saw the home page load. Then I turned on TLS 1.1 support, typed it again and watched it not load.
Now, it happens that some important people stood up for me, and their reputations and mine helped convince DHS that I wasn’t a threat. I dropped the issue entirely, and as far as I know the site is still TLS 1.1 intolerant. But as I read about cases like Aaron Swartz and Andrew Auernheimer, I shudder to think how easily the consequences of my intended act of goodwill could’ve spiraled out of control. If I were an independent researcher, worked for a little-known security firm, or had any minor wrongdoing in my past, my career and life could easily have been ruined by this harmless act of positive outreach.
Do I support what Weev did? No. Do I think he even remotely did the right thing? No. Did he make his own situation much worse than it needed to be, up to and including his attitude and actions at his sentencing hearing? Yes. But all of these things can be true, and the prosecutorial culture of paranoia, zero tolerance, fear and political “message sending” around all things “cyber” can still be frighteningly out of control. I think it is. The prosecution wanted to “send a message”, and the message was received. There are ways, big and small, in which this white hat researcher has already stopped trying to help the security of the Internet. I know I will never report even an incidentally discovered vulnerability in an online service again; the risks are just too great.
The primary beneficiaries of this culture of fear are powerful organizations who think silence and threats are a better solution to Internet insecurity than engineering, and the real criminals who routinely and successfully disregard the threat of prosecution and exploit the systems that silence allows to remain vulnerable. The biggest losers are consumers, who will be deprived of any objective information on the relative security posture and history of online services when making important choices like where to bank, shop, get email service, or work with their health care information online.
Meanwhile, I applaud companies that have affirmatively granted permission for security research on their online services, on the condition that such research not harm users and be responsibly disclosed. Last year, Dan Kaminsky compiled a small list of such companies here:
http://dankaminsky.com/2012/02/26/review/ , and I am happy to see that there are today many more doing the same through various kinds of bug bounty programs. It is my personal opinion that it would be good to see this kind of policy enshrined in law, just as many states already have “Good Samaritan” laws eliminating liability for people who, with good intent, give first aid to injured strangers. We’ve decided we don’t want a culture in the physical world where people walk past an accident scene because they are afraid of the consequences of offering help; we ought to decide the same online.