 
15 years ago (on Feb 1st, 1999) I first set foot in a Google datacenter. Well, not really -- in the Google cage in the Exodus datacenter in Santa Clara.  Larry had led me there for a tour (I wasn't an employee yet) and it was my first time in any datacenter.  And you couldn't really "set foot" in the first Google cage because it was tiny (7'x4', 2.5 sqm) and filled with about 30 PCs on shelves.  a1 through a24 were the main servers to build and serve the index and c1 through c4 were the crawl machines.

By that time we already had a second cage, immediately adjacent, that was about 3x larger and contained our first four racks, each containing 21 machines named d1-42 and f1-42 (don't ask me what happened to the b and e racks, I don't know).  I don't recall who manufactured d and f but they were trays with a single large motherboard and a Pentium II CPU.  (Later, the g rack would be the first corkboard rack.)

Some interesting details from the order:

- Yep, a megabit cost $1200/month and we had to buy two, an amount we didn't actually reach until the summer of 1999.  (At the time, 1 Mbps was roughly equivalent to a million queries per day; there's a quick sketch of what that implies per query after this list.)

- You'll see a second line for bandwidth; that was a special deal for crawl bandwidth.  Larry had convinced the salesperson to give it to us for "cheap" because it was all incoming traffic, which didn't cost them any extra bandwidth since Exodus traffic was primarily outbound.

- Note the handwritten "3 20 Amps in DC" change to the standard order form.  At the time, DC space was sold per square foot, and we always tried to get as much power with it as possible because that's what actually mattered.

- This particular building was one of the first colocation facilities in Silicon Valley.  Our direct neighbor was eBay, a bit further away was a giant cage housing DEC / Altavista, and our next expansion cage was directly adjacent to Inktomi.  The building has long since been shut down. 
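
A quick sketch (my own arithmetic, not from the order form) of what that "1 Mbps is roughly a million queries per day" rule of thumb implies per query:

# Rough sketch: what "1 Mbps ~ 1 million queries/day" implies per query.
mbps = 1
queries_per_day = 1_000_000

bytes_per_day = mbps * 1e6 / 8 * 86_400          # ~10.8 GB/day if the link ran flat out
bytes_per_query = bytes_per_day / queries_per_day
print(f"~{bytes_per_query / 1024:.1f} KB of traffic per query")   # ~10.5 KB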
 
And all of that computing power now exists in about half a dozen Nexus 7's.
 
Altavista in the same colo as Google. Who would have thought it. Long gone are the days of square footage for dc space too.
 
Good deal.  Complimentary reasonable number of re-boots every month. :)
 
I wonder how much Youtube would cost, if Google were still paying $1400 a megabit..
 
The first time I saw this cage in person was July of '99, when I was taking my CCIE in SNTC and visiting a friend whose company occupied a nearby cage (I also happened to work for Exodus at the time).  At the time it was so tiny, and one of the ugliest and most unkempt cages I'd ever seen, which was somewhat funny given we both used Google more than AltaVista.  A few years later they were renting space in 35,000 sq ft increments rather than 28, and googly-ness had made its way into the cabling.
 
Only 15 years, how did we cope before Google. 
 
+John Looney 

Let's round and call it $3 billion per month.

Roughly 130 MB for one hour of video (assuming 360p), i.e. about 0.127 GB.

6 billion hours of video viewed per month
(Source: https://www.youtube.com/yt/press/statistics.html)

6 billion hours * 0.127 GB/hour = 762,000,000 GB per month

You can transfer roughly 320 GB in a month on a 1 Mbps circuit.

762,000,000 GB per month / 320 GB per 1 Mbps circuit = 2,381,250 Mbps of circuit capacity

2,381,250 Mbps * $1,200 per Mbps per month = $2,857,500,000 per month.

This calculation is possibly wrong and provided for entertainment purposes only. :)
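
For anyone who wants to rerun it, a minimal sketch of the same back-of-envelope estimate, using the figures above (nothing here beyond the numbers already quoted):

# Back-of-envelope reproduction of the estimate above (illustrative only).
mb_per_hour = 130            # ~360p video
hours_per_month = 6e9        # YouTube hours viewed per month (press-page figure)
price_per_mbps_month = 1200  # 1999 price per Mbps per month, USD

gb_per_month = hours_per_month * mb_per_hour / 1024    # ~762,000,000 GB
gb_per_mbps_circuit = 1e6 / 8 * 30 * 24 * 3600 / 1e9   # ~324 GB moved by a 1 Mbps line in a month

mbps_needed = gb_per_month / gb_per_mbps_circuit        # ~2.35 million Mbps
monthly_cost = mbps_needed * price_per_mbps_month       # ~$2.8 billion
print(f"~{mbps_needed:,.0f} Mbps, ~${monthly_cost:,.0f} per month")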
 
Google's growth in 15 years is nothing short of amazing.
 
Is it possible that I've seen one of these first racks at the Computer History Museum in Silicon Valley? How do these two bits of computing history match up?

And: Thanks for sharing!
 
shit shit shit, I knew never to click that link ever ;D
 
 
A few quick fill-ins.  The d and f machines were assembled by Kingstar, if my memory serves.  We skipped "b" because c stood for crawl.  I then decided to skip "e" because I figured it sounded too much like "d" and would be confusing, though of course we later adopted all the other similar-sounding letters anyway.

A quick footnote on the "a" machines: we improvised our own external cases for the main storage drives, including our own improvised ribbon cable to connect 7 drives at a time per machine (we were very cheap!).  Of course there is a reason people don't normally use ribbon cables externally, and ours got clipped on the edge while we ferried these contraptions into the cage.  So late that night, desperate to get the machines up and running, Larry did a little miracle surgery on the cable with a twist tie.  Incredibly, it worked!
 
"Because components frequently failed, the system required effective fault-tolerant software."  Now, that sounds familiar.
 
2 Mbit/s for $2,400/month to run the whole of Google in 1999. And now Google is 30% of the whole web? Is Google now at 2 Pbit/s for $2.4 billion per month? Two years from now, Google may be at 2 Zbit/s for $24 billion per month? I like your growth.
 
More corkboard images here:
http://www.flickr.com/photos/nationalmuseumofamericanhistory/sets/72157635280626381
 
Back in 2000, the main ads database was on a single machine, f41. The ads group was five engineers back then, so we took turns carrying a pager. I had to abort a dinner date one night (it was Outback Steakhouse in Campbell) to come back to the Googleplex because f41 was wedged.
 
How often did you actually have to go above that 15 Mbps?

+Matt Cutts do you remember who that dinner date was with?
 
In late 1999, +Craig Silverstein and I made a mad dash to the datacenter in his aged Porsche because we were afraid hackers had broken into our servers and were shutting them down.  Turned out it was failsafe software that we had installed.  That was the only time I visited the facility that +Urs Hölzle mentions.  I recall being astonished by the vast waste of space in the Altavista cage.
 
I understand none of this, but I find it extremely fascinating. 
 
I love the noncommittal "includes a reasonable number of reboots per month."
 
That's awesome! And interestingly, I think that's about the same week I started my first tech job
 
I'm also fascinated by this discussion. Thanks everyone!
 
That would have been fun. I started my first network job around that time and was fairly spoiled with enterprise type stuff. Never got to fix home made ribbon cables with twist ties!
 
thanks for posting, this was awesome. 
 
absolutely, I love stories of how companies had humble beginnings, thanks Urs!
 
If only buying new compute capacity were so easy these days =)
 
+Urs Hölzle (or others), do you think Google could have gotten off the ground then if net neutrality was under siege as it is now?
 
Those are bandwidth prices not too uncommon in some parts of Africa today!
 
What is money but promissory notes implemented with our certificate of live birth?
 
I'm probably not allowed to share an invoice from 2007! But your numbers changed slightly over a few years!!
 
+Bill Hartzer it was supposed to be with my wife. She wasn't delighted to hear "Hey, instead of a nice night out, we need to leave right now to reboot a computer." (This was before we had smartphone apps that could easily SSH from a cell phone.)
 
+Damon UK I still see some providers charging by the square foot such as Telx and Terremark.
 
Imagine what the future will be like...
 
Thank you Urs for posting this. Congratulations on what you've done in 15 years, from the beginning in the little cage. 
 
Hah, I worked at Exodus during that time. I still remember the nights I got paged frantically into SC3 to reboot some Google cluster or to "wiggle those cables and we'll ping it from here."

It was an amazing time. Walking down SC3 and SC5 was like a Who's Who of debauchery, high tech, and innovation. Google was next to Raytheon, who were next to Altavista, and behind that was buy.com, pets.com, and Inktomi. Lycos was on the other side of the building, and the guys and gals working at Sun used their corridor more than once for a nerf gun fight.

Good times :)
 
It appears the salesperson got the better of Google in the traffic negotiations. Traffic was paid for irrespective of direction, so when you buy 15 Mbit/s per month it doesn't matter whether it is up or down. Google, however, paid for both up and down and so effectively paid for 17 Mbit/s, thinking it had saved on costs for incoming traffic :-)
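
A minimal sketch of the comparison being made here, using the thread's 15 + 2 Mbit/s figures; the crawl rate below is purely hypothetical, since the post only says it was "cheap":

# Sketch of the commenter's reading of the deal (rates are per Mbps per month, USD).
STANDARD_RATE = 1200
CRAWL_RATE = 600            # hypothetical discounted rate for the inbound crawl line

outbound_commit_mbps = 15   # serving traffic
inbound_crawl_mbps = 2      # crawl traffic

# What Google paid: a standard commit plus a separately billed crawl line.
paid = outbound_commit_mbps * STANDARD_RATE + inbound_crawl_mbps * CRAWL_RATE

# If billing were truly direction-agnostic on one commit, the inbound crawl
# traffic would ride under the same 15 Mbps commitment at no extra charge.
direction_agnostic = max(outbound_commit_mbps, inbound_crawl_mbps) * STANDARD_RATE

print(paid, direction_agnostic)  # 19200 vs 18000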
 
Guess this is a bad time to bring up the naming bugaboo

Who decided to use something other than a pure base 26 naming convention after we had more than 26 racks?
 
+Bernd Jendrissek because 1U is 1.75", so 42 of them is 73.5", and once you add a bottom and top it barely fits through a standard door or onto an airline shipment.

And of course because it's the ultimate answer: the answer to life, the universe, and everything.
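
A quick check of that arithmetic, as a sketch (the panel allowance and door height below are my assumptions, not from the thread):

# Why 42U: 42 rack units of 1.75" each, plus an allowance for top and bottom,
# still clears a standard door.
RACK_UNIT_IN = 1.75
units = 42
panel_allowance_in = 3.0     # assumed top + bottom of the enclosure

rack_height_in = units * RACK_UNIT_IN + panel_allowance_in   # 73.5 + 3 = 76.5
standard_door_in = 80.0                                      # common US door height
print(rack_height_in, rack_height_in <= standard_door_in)    # 76.5 True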
 
I remember touring the Google cage when they had built up the double-sided server architecture. There was a monolithic rack with two server motherboards on each side, front and back. Anyone remember this? Or was I just a casualty of the 1990s? Dan Dale, Exodus Communications 1997-2001
 
I was a systems engineer with Exodus from 1997 to 2001. I found it amazing when Google went to a monolithic double-sided motherboard architecture. To me it looked like a tall 42RU or 45RU rack with 1RU motherboards front and back. Am I remembering this correctly? I used to give tours for prospective customers but could never fully explain the Google architecture. I think by the time we knew who Google was, they were occupying more than two cages and AltaVista was but a distant memory. And now it is all steampunk data centers in locations that were formerly aluminium smelters close to hydroelectric dams. My how times have changed ;)
 
Ah... maybe I am just remembering the Corkboard servers.
 
And probably everyone working on the servers at the time had long hair :D
 
+Dan Dale I remember this.  I was the CSR that managed Google from the Ops side.  And their rack configuration gave off so much heat that we had to bring in secondary cooling just to keep their neighbors' servers from melting.
 
When I was with Grand Central, we were in one of the Exodus SC centers with you.  Some of our engineers used to laugh at your custom cases and racking methods.  In retrospect, it probably had a better TCO/ROI than the mountain of Sun gear we were using... :)  Nice post, thanks for the trip down memory lane...
 
Currenex was right next to your cage. What were the machines that had the open wires and alligator clips attached to them? Saw the cork boards too. Our Sun servers looked so much tidier, though we had fewer CPUs per sq ft.

 
Before Google in 1999, Exodus had many other firms that all would recognize...
Exodus had a 30-day money-back-type trial period. ringnow.com was a startup that was in three datacenters in Santa Clara County at once (AboveNet, Exodus, Faultline)! ...Joe Harr, Heather Wilson, and many others. Thanks for the memories!!!! The unfenced recycling dumpsters in the back used to be goldmines of "extra" server stuff (installation CDs, manuals, old hardware).
 
FUNET was vast and limitless!
and free for students and staff :)
 
Interesting how a colo was called a Virtual DC. 
 
Having worked in Google's data centers has been the greatest work experience so far. It's amazing to see this is how it started.
 
Very cool. Lawson data centre (SC2) one month before I joined Exodus. 
 
we all miss Joe Harr and Heather Wilson ...