If the NSA is indeed tapping communications from Google's inter-datacenter links then they are almost certainly using the open source protobuf release (i.e. my code) to help interpret the data (since it's almost all in protobuf format).  Fuck you, NSA.

EDIT:  Please keep in mind that I have not worked for Google for some time now, so my comments should not be mistaken for bearing any kind of authority or insider knowledge.
 
I have to admit I was shocked by one thing: I'm amazed Google is transmitting unencrypted data between datacenters.
 
By declaring war on the United States, back in 2004, I pretty much made sure they would be spying on me.
 
+Jake Weisz - We're (I think) talking about Google-owned fiber between Google-owned endpoints, not shared with anyone, and definitely not the public internet.  Physically tapping fiber without being detected is pretty difficult and a well-funded state-sponsored entity is probably the only one that could do it.  Google perhaps underestimated how far the NSA would go here, but it sounds like they're working hard to encrypt this traffic now that they know.
 
+Pereira Braga - Has that been turned on yet?  The article you quote says only that they're working on it.
 
+Pereira Braga From articles, I got the notion that it's a work in progress. Which means a good deal of my Google-stored data is still probably unencrypted.

+Kenton Varda Regardless of how far-fetched an attack might sound given the technology of the time, you should never assume data isn't worth securing properly. If it exits the building, it should be encrypted. I'm amazed that Google would've made such a mistake.
 
+Kenton Varda Google owns some of its backbone fibers, but also leases fibers and waves. 
 
+Jake Weisz Define "exits the building".  (Though I agree with you that encryption should be much more widely deployed, figuring out the appropriate boundary probably isn't quite so simple as it sounds.)
 
+Kenton Varda I'll stick with the PR statement rather than my own words :): “We’re troubled by allegations of the government intercepting traffic between our data centers, and we are not aware of this activity. However, we have long been concerned about the possibility of this kind of snooping, which is why we continue to extend encryption across more and more Google services and links.” - from http://blogs.wsj.com/digits/2013/10/30/report-nsa-intercepts-google-and-yahoo-server-data/

+Jake Weisz I don't think you really appreciate the amount of traffic Google's services generate, or the resources it would take to encrypt that much data. :)
 
I'm actually surprised that they did this surreptitiously. At this point, it seems to have been established that the NSA has the legal authority to compel Google and Yahoo to turn over any data it wants, and they don't seem to have hesitated to use that authority.

As an interesting note, the fact that this was done surreptitiously, without any kind of court order, means that I think Google would be within their authority to remove the taps, if they can find them.
 
+Bryan Mills I have Datacenter A here, and Datacenter B somewhere else. I presume Google doesn't own contiguous land between each and every one of their data centers around the world. If it's someone's personal data, and you are protecting the privacy of that data, it should be encrypted before it leaves your physical perimeter.

+Pereira Braga If Google can't secure it, Google shouldn't be doing it. If I knew this before, I never would have trusted Google with my data, and I'm willing to bet most of Google's corporate customers wouldn't have either.
 
+Jake Weisz Let me put it this way... If Google can't secure it, I doubt any other big company in the world can. ;)
 
How about we just don't have the NSA?
 
There is no such thing as privacy on the internet.  Never has been.  Never will be.  It's time that illusion was dispelled.
 
+Jake Weisz - That's really easy for you to say, but in practice you're talking about a very large expense in terms of both CPU and developer time to make it work.  Everything in Google's network stack is highly customized and running near its theoretical performance limit.  We're not talking about a mom and pop web startup that communicates between Ruby servers using JSON over HTTP and can just flip a bit to enable SSL without even noticing any performance impact on top of all their existing wasted cycles.  We're talking about highly-tuned C++ systems infrastructure on top of modified kernels on top of networking hardware that simply doesn't exist elsewhere.  We're talking about things that can't use TCP because it's too slow.  Encrypting that is going to take tens, perhaps hundreds of millions of dollars of investment and is just not something you do if the threat is only theoretical and seemingly unlikely.

(Disclaimer:  I don't actually know very much about Google's networking infrastructure.  I just know enough to know that it's really a whole lot more complicated than you'd expect, and sweeping statements like "All data exiting the building should be encrypted" are simply not realistic.)
 
+Jake Weisz That heuristic seems necessary (in retrospect) but it's not clear to me that it is sufficient.

Up until recently, a similar statement with "fibers terminated at datacenter A" would not have sounded unreasonable to me.  Will "physical buildings" still sound like a reasonable boundary in another couple years' time?
 
+Jake Weisz - Saying that nobody should store data in the cloud is saying that people should go out of their way to avoid convenience in order to uphold your ideals that they may or may not share.  The real world isn't as simple as that.
 
+Bryan Mills - Indeed, if the NSA can tap private fiber lines then it seems like they could probably break into a building and tap the internal network...  You really have to encrypt the storage, and you have to encrypt each document with a different key so that it's impossible for one user to obtain the key to another user's document unless they are intended to have that permission.  This is a really hard problem.
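(To illustrate what "a different key per document" means, here's a minimal envelope-encryption sketch, assuming an AES-GCM data key per document that is wrapped by a master key held by a separate key service. The names, the in-process master key, and the boolean ACL check are hypothetical placeholders, not Google's actual design.)

```python
# Minimal sketch of per-document ("envelope") encryption: every document gets
# its own random data key, and that key is itself encrypted ("wrapped") with a
# master key that the storage layer never hands out.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

MASTER_KEY = AESGCM.generate_key(bit_length=256)  # in reality: held by a key-management service

def encrypt_document(plaintext: bytes) -> dict:
    doc_key = AESGCM.generate_key(bit_length=256)   # unique key per document
    nonce = os.urandom(12)
    ciphertext = AESGCM(doc_key).encrypt(nonce, plaintext, None)
    # Wrap the document key so only the key service can recover it.
    wrap_nonce = os.urandom(12)
    wrapped_key = AESGCM(MASTER_KEY).encrypt(wrap_nonce, doc_key, None)
    return {"ciphertext": ciphertext, "nonce": nonce,
            "wrapped_key": wrapped_key, "wrap_nonce": wrap_nonce}

def decrypt_document(record: dict, user_may_read: bool) -> bytes:
    if not user_may_read:  # the ACL check gates access to the key, not just the bytes
        raise PermissionError("no access to this document's key")
    doc_key = AESGCM(MASTER_KEY).decrypt(record["wrap_nonce"], record["wrapped_key"], None)
    return AESGCM(doc_key).decrypt(record["nonce"], record["ciphertext"], None)
```

The hard part is everything around a sketch like this: key rotation, sharing between users, and keeping the master key itself from being stolen.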
 
+Bryan Mills Physical access is total access, so best practice is to secure anything going anywhere you don't physically have a security guy patrolling.

+Kenton Varda Most corporations would agree it's worth sacrificing convenience to ensure their data isn't Google-able.

I think good security would be users encrypting it, so that even Google couldn't read the data. But of course, then Google can't read your data. And God forbid that. Google believes that nobody should read your data but Google.
 
If Google would like the world to believe giving up physical control of your data and entrusting it to a cloud provider is a good idea, I'm sure Google would be happy to store copies of all source code pertaining to Google Search on Amazon and Microsoft's cloud services.
 
+Jake Weisz - Google actually does sometimes use Amazon and Microsoft services, when they don't have an adequate in-house solution.  Obviously Google has no practical need to store source code on Amazon's servers but I'm not so sure they'd refuse to if it made practical sense. Facebook, for its part, is a heavy user of Google Docs, despite the fact that those docs presumably contain secrets that Facebook doesn't want Google to know about.  (EDIT:  This may not be entirely true. http://www.quora.com/Do-Facebook-employees-use-Google-Apps-for-work)

All that said, I again say that you are projecting your personal cost-benefit analysis onto other people.  Many users don't feel that this is a serious concern for them.  Google is not saying that cloud storage is right for everyone; it's just offering an option that makes sense for many users.  You obviously shouldn't use it, if you  don't trust Google.

(For my part, having worked at Google and seen the lengths they go to internally to protect user data, I feel my data is safer on Google's servers than my own desktop.  The NSA could certainly hack my machine much more easily than they can tap Google's fiber.)
 
I can see how you would be really upset that your creation is being misused. Hopefully it won't sour you on future contributions to open-source.
 
+Kenton Varda I'd like to hear of Google storing their most sensitive trade secret code on someone else's cloud services merely as evidence that Google was willing to put its money where its mouth is on cloud security.

From what I've seen, Google is two-faced about almost everything they claim to support.

- Google demands things be open source... but only when it's code they don't control. They're happy to be closed source on any of their own products.

- Google trusts the cloud as far as Google controls the cloud. From my understanding, Google has an internal version of Google+ and other tools which are separate or isolated from the versions the public use. Which means that while a message I send on public Google cloud may be accidentally redirected by Google to a complete stranger, we won't ever stumble across a sensitive Google document due to a Google Drive sharing permissions error.

- Net neutrality: Google's only a fan while Google isn't an ISP with a stake in the business. Now they are more than happy to set limits on what Internet service can and can't be used for.
 
Tampering with fiber optic cable is nothing surprising. But that an entity of the U.S. military is going to great lengths to run a dragnet inside one of the most important (U.S.) corporations is stunning.
What if there is a global tipping point in trusting anything software-related that is somehow connected to the US of A? Then even a great company with awesome products and people faces some hard-to-overcome obstacles.
So I'm really looking forward to seeing how this unfolds. And I hope the folks at Google are 'in it to win it'.
 
Sad fact of releasing code as open source: anyone can use it.
 
Also, the cost/benefit of hacking Google is much more attractive than hacking your desktop. They might be able to hack you more easily, but they're much more likely to hack Google. Everyone putting their eggs in one basket is a bad idea.
 
Aside from the hand-drawn diagram that explains one of the things that GFE does and shows that inter-DC traffic was mostly unencrypted, none of the primary documents have been released for this story. It seems plausible (Google at least appears to find it plausible given their recent actions) that the GCHQ was tapping at least one of the links into the UK (there is certainly no evidence given that they've infiltrated "data centers worldwide" as the article title breathlessly claims), but it's hard to verify without the documents, and there is still some chance that this whole thing has been as misinterpreted as the original PRISM documents.

Then again, maybe I'm overly jaded by the poor technical reporting on these leaks so far.
 
+Marc-Antoine Ruel - Jake regularly comments on my posts and I don't think he's a troll.

+Jann Van Hamersveld - Of course not.  I quit Google to work full-time on open source on my own dime; I'm hardly about to let the NSA stop me.  :)

+Jake Weisz - You are really stretching here.

Google doesn't "demand" anything be open source.  They share their own code when it makes sense to do so from a business perspective, which is I think all you can expect from a business.

Google's internal versions of its own apps are in fact just its enterprise product offerings.  A document on "internal" Google Drive can in fact be shared with external users.  (I once owned the code that implemented this access control.)

Your third point appears to be in reference to disallowing servers on residential Google Fiber.  This is a terms-of-service restriction, not a technical restriction (nothing is technically preventing such hosting).  It's a pretty obvious restriction to apply when you're handing out cheap gigabit internet -- it would be impossible for Google to offer quality service in a residential price range if businesses started hosting their public servers on it.  (That said, I'd personally prefer that Google apply traffic shaping instead, but that would also make people angry.)

In general, you are making a lot of demands that revolve around your personal ideals of how the world should work, and insisting that everyone else should have the same ideals as you.  You need to get over it.  No, of course Google isn't going to host their search ranking code on Microsoft's servers, but they also aren't claiming that Microsoft should host its core trade secrets on Google's servers.  Google is just trying to offer a service that has a set of trade-offs that make sense for many people (obviously, not you).  By your argument, anyone trying to offer such services is inherently evil, which is ridiculous and an insult to everyone who actually benefits from what Google provides.
 
+Devesh Parekh I think the root of the issue is the premise that they sync all servers using Spanner and GFS, so data stores are not necessarily local to the U.S.
 
+Kenton Varda In terms of "demanding open", the current issue with H.264 is a great example. It's the standard everyone else uses, but since it isn't open, and not under Google's control, Google intends to fragment web compatibility (again).

Okay, I'll accept two.

Yes, it is. Sure, it's hard to meet that price point while delivering the service promised: unlimited gigabit up and down. This is why unlimited data is a scam. It's overselling. Net neutrality is offering access regardless of what you send over it.

Google isn't asking Microsoft, no. But they seem to be asking everyone else to host their trade secrets on Google's servers. Servers which apparently don't encrypt traffic between datacenters.

I don't have a problem with them offering the service. I have a problem with how they obscure and downplay the critical flaws of that service, and how people have bought into it hook, line, and sinker. Google doesn't portray their services as an option for specific cases, they're trying to sell the concept that everyone should give Google all their data as a default assumption. Meanwhile, Google's data remains something they're absolutely terrified of giving anyone else.
 
As far as I can tell, the information we're supposed to rely on regarding security at Google is "just trust us". That's pretty much the response I get when I question their new more-closed product lines.

There's a point where I stop being willing to put blind trust in a corporation. Because underneath all the friendliness and the bright colors, and the strong support for open source, that's all it is. Just another corporation. And trusting corporations has not exactly served us well in the past.
 
Not sure if anyone wants to comment on this, but my guess is that sync is a huge issue for Google.

Conceptually, if I were them, I would break the globe into convex hulls and index the data using a coordinate system within those hulls as sub-indexes, while also adjusting relative time and sync rate based on activity, so that the index frames in each sector reflect the volume of data in those zones. The very unsettling thought is that, hypothetically, if data is indexed by GPS location, then a selector could be targeted at a specific zone based on its local index and relative timeframe. And if sync spans datacenters, that same data gets processed at every Google data center.
 
+Jann Van Hamersveld That's one of the reasons I don't fully trust this story and want to see the primary documents. Due to the redundancy requirements for Gmail and other Google services, the data has to be copied to other datacenters. If the NSA and its cousins were really tapping all those lines, they would have no need to send requests to Google and Yahoo! to put certain accounts under surveillance. Also, the numbers reported in the article seem suspiciously low, and some of the quotes from the slides just don't make any sense (e.g., what does it mean to "defeat" data that you don't want).

Again, I believe it is plausible that they've tapped at least one link into the UK, but I would really like to see the evidence before I believe any of the article's analysis.
 
+Kenton Varda "Facebook, for its part, is a heavy user of Google Docs", even if you know this for a fact, it is disturbing that you would say this in public.
 
+Jake Weisz You've misunderstood Google's (and Mozilla's and Opera's) objection to H.264. They don't want to build the web on formats that require licensing fees. Imagine if you had to pay to distribute compressed images on the web. Independent websites would have to resort to having their images hosted by companies with the pockets to pay licensing fees just to show images at all, which is silly. Similarly, a video format that requires licensing fees (and additional fees for real-time encoded video) is something that companies that promote the web platform should oppose for inclusion as a required format.
 
+Kenton Varda really needs to add a cleverly concealed back-door into his protobuf implementation... :}

Btw, am I the only one who bothers to notice that this implies SSL actually gives the NSA some trouble?
 
+Devesh Parekh If everyone else is using it, and Cisco is paying for it anyhow... it makes sense to accept the standard that everyone else is using, when you're trying to make an interoperable web platform.

+Nathaniel Hourt Well, we already knew that. Lavabit was ordered to turn over SSL keys.
 
+Anand Kumria - Err.  Well, I guess I don't know it for a fact.  I don't actually work for Google anymore (not sure if that was clear), nor do I work for Facebook, I've only heard second-hand.  But companies using their competitors' products is hardly news.  Google uses lots of Apple laptops and of course uses Windows all over the place.
 
True, but I was under the impression SSL was based on ECDSA, which they backdoored. Maybe I am misinformed.
 
+Kenton Varda I think there's a big difference between utilizing someone's hardware and software, and storing your data on their servers.
 
+Jake Weisz - Practically every startup in silicon valley uses Google Apps, from what I've seen...
 
+Kenton Varda Given the issues even the NSA has with abuse of private information, I have a hard time believing that no abuses happen, in a corporation of Google's size. Regardless of assurances to the contrary.
 
As +Kenton Varda  pointed out, tapping privately owned Fiber and remaining undetected is a very uncommon & gutsy tactic.  Speaking about information security in general, I think any security engineer who would pose fiber-tapping of privately owned & maintained lines as a top-tier risk would be fired.  The thing to keep in mind is that sticky note abstracts away large numbers of network devices, hundreds of protocols, service configurations, etc.  It is too simple to reflect reality in any meaningful way.
 
+Jake Weisz - Google is big but the number of people who have access to user data is small, and their access is logged and reviewed.

But yes, there are occasionally abuses:  http://gawker.com/5637234/gcreep-google-engineer-stalked-teens-spied-on-chats

Over time more and more protections have been put in place to prevent this.  I personally helped implement safeguards against rogue employees, so I know it is taken as a serious threat, even though there haven't been many actual examples.
 
+Kenton Varda That is... in fact, creepier than I'd expected when I suggested the problem.
 
Even the NSA can't protect itself from leaks; security on the Internet is hard. Google is doing better than most big companies. FWIW, millions of personal computers are hacked every year. You can only do so much before you can't afford it.
 
Who cares? God has been watching us all along.
 
+Jake Weisz "Everybody else" is not using it. Until yesterday, 3 of the 5 major browser vendors, representing the majority of web browser share, opposed the inclusion of H.264 as a required codec. This changed yesterday with Cisco's announcement, but the result is a web that requires plugins for users of Mozilla's browsers on some platforms, which is less than ideal. Mozilla is admitting defeat on the video tag front, and they're going to try again in two years with a new royalty-free codec, but I have no reason to believe they'll be any more successful with that fight than they were with VP8.

Google has some fault in this situation, but their fault is in the other direction -- not fighting hard enough against making H.264 required on the web.
 
+Devesh Parekh And as of yesterday, Apple, Microsoft, and Mozilla have all agreed on H.264, and Google said shortly after they were going to stick to VP8 anyhow.
 
+Jake Weisz Help me understand why you think Mozilla's admission of defeat is a good thing. Why don't you blame Apple and Microsoft for not supporting a royalty-free codec instead?
 
+Devesh Parekh Because we need one baseline format that works on any web browser. You can all add VP8 on whatever you want after that. But if three of the largest developers are now in agreement, it's time for Google to accept it and move on, rather than being the Internet Explorer 6 of HTML5. The browser everyone has to code exceptions for.
 
+Jake Weisz Why wouldn't you say the same when three of the largest developers were in agreement on VP8?
 
+Devesh Parekh Can I assume this fifth developer you keep talking about is Opera? I'm not willing to give Opera the same high status as the others, given its infinitesimally small market share.
 
+Jake Weisz When the video tag discussion started, Opera had the highest mobile market share of all of the companies mentioned.

Even if you disregard Opera, tell me why you think the current situation is better than if Microsoft and Apple had accepted VP8 and why you wouldn't blame them even if it were just Mozilla vs. Microsoft and Apple (one vs. two) on that issue due to the basic principle alone.
 
+Devesh Parekh I don't think it's "better" than VP8. But, disregarding Opera (because they have less than 2% desktop share, and at this time, are not much better off in mobile either), we had two versus two. Now we have three versus one.

I would've rather had HD DVD over Blu-ray, but the reality is, Blu-ray won, and now you can get a Blu-ray player in a Toshiba, because Toshiba knew when to accept defeat.
 
+Jake Weisz  You still haven't explained why you can disregard Opera when the video tag issue started. At the start of the issue, they had the highest mobile market share, which would have made the situation 3 vs. 2. Following your "majority wins" logic, why not blame the 2 when that was the case?

Also, as I alluded to in my Mozilla vs. Microsoft and Apple hypothetical, why do you think this "majority wins" logic is good for the web? When Microsoft had a 90% share, were you happy with the outcome of letting them control the development of the web platform for a few years? Would you have supported an upstart that tried to make things better, or would you have blamed them for going against the majority?
 
+Devesh Parekh I disregard Opera because it's 2013, and they're irrelevant right now. You seem obsessed with historical footnotes. What we're at, is trying to get one standard, right now. At the current time, Google is the problem.
 
+Jake Weisz Let me get one thing straight before I finally stop hijacking this thread. Were Apple and Microsoft the problem before yesterday, and do you blame them for holding up the video tag for two-and-a-half years while they were the minority? I don't understand why Google has become the troll overnight on the video tag issue for you.
 
+Devesh Parekh You had a standoff, two big browsers versus two other big browsers. And yes, I would've preferred VP8. But now there's a free way for this to be settled, so we can finally move on, and Google's holding up the train.
 
+Jake Weisz I still don't understand. Two-and-a-half years ago, it was three big browser vendors vs. two big browser vendors. There was a free way to settle it then without plugins, and that free way still exists.
 
You really love Opera, don't you?

There's a big difference between multiple companies on both sides, and really just one company holding up progress. If Google capitulated, we'd have like 98% market share agreement on video format compatibility.
 
Is there a reason Google can't support both? Is there a reason to reject supporting a format that is being licensed for free?
 
Kenton, I doubt they're "using your code" in the usual sense.  If I needed to interpret a stream of serialized protos for which I lacked descriptors, I might use the library as a reference, but I would definitely write new code.
 
AFAICT, the Cisco announcement only applies to WebRTC (not <video>?), and although their code is open source, you must download their binary blob from their server to be in compliance with the MPEG-LA license.

Seems like a pretty shitty solution to me.
 
+Ambrose Feinstein - I'm assuming they've taken the rather-easy step of reverse-engineering the schemas they're interested in and at that point it makes sense to use the full protobuf library for all the same reasons anyone else would.

The protobuf library also includes low-level APIs for decoding protobufs without descriptors.  They're not too hard to rewrite from scratch, sure, but I'm not sure why someone would prefer that.

Unless they are afraid I slipped a back door in there...
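(For anyone wondering what "decoding protobufs without descriptors" looks like, here's a from-scratch sketch of the wire format: each field is a varint key packing the field number and wire type, followed by a varint, fixed-width, or length-delimited payload. This recovers the structure only; field meanings still have to be reverse-engineered. Purely illustrative, not the library's actual API.)

```python
# Minimal schema-less protobuf wire-format decoder (illustrative only).
def read_varint(buf, pos):
    result, shift = 0, 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def decode_unknown(buf):
    pos, fields = 0, []
    while pos < len(buf):
        key, pos = read_varint(buf, pos)
        field_num, wire_type = key >> 3, key & 7
        if wire_type == 0:                       # varint
            value, pos = read_varint(buf, pos)
        elif wire_type == 1:                     # 64-bit fixed
            value, pos = buf[pos:pos + 8], pos + 8
        elif wire_type == 2:                     # length-delimited (string/bytes/submessage)
            length, pos = read_varint(buf, pos)
            value, pos = buf[pos:pos + length], pos + length
        elif wire_type == 5:                     # 32-bit fixed
            value, pos = buf[pos:pos + 4], pos + 4
        else:
            raise ValueError("unsupported wire type %d" % wire_type)
        fields.append((field_num, wire_type, value))
    return fields

# Field 1 set to varint 150 encodes as 08 96 01:
print(decode_unknown(bytes([0x08, 0x96, 0x01])))  # -> [(1, 0, 150)]
```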
 
+Jake Weisz Your last two questions seem to be directed incorrectly:

"Is there a reason Google can't support both?" Is there a reason Microsoft and Apple couldn't support both when the leading mobile browser vendor and the two other leading desktop browser vendors supported VP8 2.5 years ago and didn't have a royalty-free way to support H.264?

"Is there a reason to reject supporting a format that is being licensed for free?" This question applies to Microsoft and Apple as well and has applied for the last two-and-a-half years. You're blaming Google for one day of intransigence while ignoring 2.5 years of ongoing obstruction from these other vendors. As far as a reason to not support it, some platforms (like iOS) don't support the downloadable plugin model that Cisco has provided, and worse, iOS doesn't have APIs to allow third parties to use their licensed real time H.264 implementation, so a hypothetical iOS browser from a vendor like Mozilla that does not pay H.264 royalties would be unable to implement the spec.
 
+Devesh Parekh Fine. The other companies should've supported VP8. But they didn't. So Google can be the more mature company, and support the standard everyone else is using, and end the stalemate.

Your comments seem to be focused on getting me to blame Microsoft and Apple... but you fail to see that I don't care. I just want to know that if I embed a video in a page, every browser can play it.
 
+Jake Weisz My comments aren't focused on getting you to blame Microsoft and Apple but around why you continue to blame Google for "[intending] to fragment web compatibility" when it takes two to fragment. You could achieve your desired result just as easily if VP8 were to "win."
 
+Devesh Parekh Sure. But which do you think is more likely to "win" at this point? I mean, surely the Google rep could have at least said they'll consider their options or have nothing to comment about at this time. It seemed like the statement was going out of its way to state that Google intends to remain an outlier.
 
I'd also feel compelled to point out that H.264 is used in... pretty much everything, whereas VP8 is limited to... well, Google Chrome.

One of the biggest challenges today with video formats is the utter lack of options in video conversion tools that don't cost money. I'd have to argue there's more value in H.264 by the merit that it's already relatively standard for recording and playback of video.
 
It would be pretty silly for Google to throw in the towel right before the IETF votes on the issue.  Of course they're going to continue championing their own tech at least until they lose the vote.
 
Jumping in here momentarily to discuss feasibility of encrypting traffic...

Quad-core Intel Core i5 processors with the AES instruction set can encrypt 3.5 gigabytes/second using AES 256 (after handshaking, it's all stream cipher anyway). Building chips whose only purpose is AES? I think that 3.5 gigs/second could be beaten, especially with the quality of hardware guys at Google.

I don't believe it is out of the question that cross-DC encryption could actually be less expensive than we'd expect, aside from the R&D necessary to develop and manufacture the edge routers to perform DHE+AES w/cert verification transparently.

I suspect the more likely cause of the delay (instead of cost, time to develop, etc.) is actually that they need to upgrade the links without taking down Google itself. This is not something that you can say over a phone, "Ted are you ready? On three we swap number five. One, two..."
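(If anyone wants to sanity-check that software number on their own machine, here's a quick, unscientific benchmark using AES-256-CTR via Python's 'cryptography' package, which calls into OpenSSL and uses AES-NI when available. Treat the result as an order-of-magnitude figure only; the 3.5 GB/s claim obviously depends on the specific CPU.)

```python
# Rough AES-256-CTR throughput check (OpenSSL via the 'cryptography' package).
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
nonce = os.urandom(16)
chunk = os.urandom(64 * 1024 * 1024)            # 64 MiB of input
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

rounds = 16                                      # ~1 GiB total
start = time.perf_counter()
for _ in range(rounds):
    encryptor.update(chunk)
elapsed = time.perf_counter() - start

total_gib = rounds * len(chunk) / (1024 ** 3)
print(f"{total_gib / elapsed:.2f} GiB/s AES-256-CTR on this machine")
```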
 
+Josiah Carlson - I'm sure Google is capable of building such hardware, but not overnight.  It would take years to design, manufacture, and deploy.

3.5 GB/s is how much a CPU can encrypt if that's all it is doing, but in practice you also need to get the data into and out of the machine, which is not trivial.  Even if you replace the encryption step with memcpy() you are still adding a fair amount of latency to get the data off the network, into the encryption process, and then back out to the network again.  Maybe you could get the overhead pretty low if you built the code directly into the Linux network stack (to avoid scheduling) but that's again going to take some effort to develop, and you still have some networking overhead.

And of course inserting these encryption machines into the network edges may itself be non-trivial.

The more practical approach is probably to get the application servers to do their own encryption, whenever they happen to be talking across datacenters.  But that's herding cats -- you need to get all the app maintainers to push new builds and run tests to make sure nothing breaks.

Any change that affects all of production is a huge effort, no matter how simple it may appear from the outside.
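(To make the "application servers do their own encryption" idea concrete, here's a toy sketch: wrap the connection in TLS only when the peer is in another datacenter. The datacenter lookup, hostnames, and CA path are made-up placeholders, and Google's actual RPC stack looks nothing like this.)

```python
# Toy sketch: opportunistically wrap cross-datacenter connections in TLS.
import socket
import ssl

LOCAL_DATACENTER = "dc-a"

def datacenter_of(host: str) -> str:
    # Hypothetical; in reality this would come from service discovery.
    return "dc-b" if host.endswith(".dc-b.example.internal") else "dc-a"

def connect(host: str, port: int) -> socket.socket:
    sock = socket.create_connection((host, port))
    if datacenter_of(host) == LOCAL_DATACENTER:
        return sock  # same-DC traffic left in the clear (the pre-2013 status quo)
    ctx = ssl.create_default_context(cafile="/etc/example/internal-ca.pem")
    return ctx.wrap_socket(sock, server_hostname=host)
```

Multiply even a small change like that across every service and language in production, and "herding cats" is about right.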
 
+Kenton Varda If they are already deploying it (according to the linked article from September), then they've already been at it.

But here's the thing: you don't put desktop machines on the network to do encryption, you build RSA cert verification, DHE key exchange, and AES crypto in a stand-alone chip, and you put that chip in your network hardware. Then you're not pulling it off the network, you are building into your switches and routers, which are already processing and routing traffic based on ARP MAC tables. All of your data is already chopped into frames in the router/switch, passed into buffers for routing/switching, then distributed to the proper ports.

You need a nice pre-designed and manufactured AES coprocessor with support for CTR mode (the AES mode used by WPA2 does CTR for encryption and CBC-MAC for integrity, and both TI and Atmel have a chip for this), then even if you can't go fast enough with one, you can go parallel with as many as you need.

Or if you are doing it yourself, you build some test hardware with FPGAs for dozens of iterations a day, then use your results from that to fab an asic.

Don't get me wrong, I'm not saying that it is trivial. But the cost delta (after design and tooling) between a router/switch that does encryption vs. one that doesn't isn't an order of magnitude; it's closer to $5/gigabit or better (the Atmel chip is about 65 cents each in volumes over 4000).
 
+Josiah Carlson - You brought up the i5, so I commented on a software solution.  I don't know much about what off-the-shelf hardware exists in this space.

To be clear, most of my off-the-cuff cost estimate was mostly for R&D and installation, not the cost of procuring physical hardware which is probably reasonably cheap.  Even if there is an OTS hardware solution, fitting it into Google's custom network stack is probably non-trivial.  (But perhaps that's exactly the point you were making.)
 
+Kenton Varda The problem with this statement is that the threat has been both practical and likely for more than a decade. Tapping fibers isn't rocket science (thousands of miles run through the unguarded middle of nowhere, in many cases even crossing US government locations such as military bases), isn't new (public security conference presentations covered it a decade ago), and has been done for as little as a few million dollars' worth of potential payoff.

Access to Google's backend networks transmitting the private information of tens of millions of users can be worth a lot more than a few million dollars so the cost-benefit analysis prompts us (and has prompted us for years) to expect such an attack.

It is precisely for this reason that large banks which own (or lease dark) fibers run (bump-in-the-fiber) hardware encryptors between their data centers. (The ECB already did this 13 years ago, though they seemingly used US-sourced equipment to protect against US eavesdropping, which is questionable, but we'll save that for another day.)

Surely this isn't inexpensive or easy. But neither are biometric access controls for Google's data centers, or measures against insider attacks. When you have a dataset like Google's, this is par for the course.

Lastly, commercially available bump-in-the-fiber encryptors won't add appreciable latency for inter-DC links and easily operate at wire speed. No match for Google's amazing engineering prowess.

Pretending that this attack was unlikely is neither realistic, nor helpful.
 
+Kenton Varda I guess you missed in my first post where I said, "Building chips whose only purpose is AES? I think that 3.5 gigs/second could be beaten, especially with the quality of hardware guys at Google."

Also, +Jeroen van Gelderen pointed out that hardware encryption for fiber already exists, so I'm going to stick with my opinion that it's totally possible.
 
+Josiah Carlson In fairness to Google, we do have to keep in mind that given the value of the dataset, and the unhealthy level of interest the government has in it, Google may not be able to rely on many commercial equipment manufacturers. Especially if those manufacturers rely on large government contracts for their subsistence.

That said, building hardware encryptors in-house is definitely well within the realm of possibility for Google's engineers. Most of the IP blocks can be sourced.

Since these devices are transparent below layer 2 you just have to feed 'em fiber, space and power without affecting any of the rest of the network infrastructure and software. You key two units and run the fiber through them on both ends.

Google's resilient, redundant infrastructure probably won't even notice the downed fiber while you are deploying the unit. The interruption will simply be treated like back-hoe fade or a faulty line card: the infrastructure will route around the faulty link until it's back online.

The challenging part is secure key management. But Google already employs people who've forgotten more about that kind of thing than you and I will ever know.