James Pearn
Attended University of St Andrews
Lives in Munich, Germany
7,799 followers | 808,549 views


James Pearn

Shared publicly  - 
Nice photospheres!

James Pearn

Shared publicly  - 
Carbon nanotube circuits built for first time
Full-wafer digital logic structures

Some days I hear news of technological progress which is so dizzying it almost makes me fall over. And today is another of those days.

Researchers at Stanford have built full-wafer digital logic structures using carbon nanotubes. Below is an electron microscope image of their work. The article they published yesterday is here:

Only last week I was reading about carbon nanotubes on Wikipedia and wondering how close we are to building complete microprocessors out of these. I concluded that although single transistors had been built, the fabrication of circuits was still many years away. And yet, here we are.

Incredible. This is the future unfolding before our eyes.
It really hits you between the eyes when you're wondering how many years until a thing arrives and then you read about it being done the following week :-) It's just one of many things that make the times we live in so damn interesting.

James Pearn

Shared publicly  - 
Interactive 3D model of the human brain

The Brain Surface and Tractography Viewer was developed by Dan Ginsburg and +Rudolph Pienaar. The different colours represent the different directions of the neural tracts. Inferior/superior are blue, lateral/medial are red, and anterior/posterior are green.

Take a look: - (requires a WebGL-enabled browser)

The imagery is not as detailed as the static pictures posted by +Owen Phillips earlier this week. But it's nice to be able to interact with it, to rotate and explore the inner connectivity of the brain. Note, this requires a WebGL enabled browser such as Chrome, Firefox, or Safari.

James Pearn

Shared publicly  - 
How many servers does Google have?

My estimate: 1,791,040 as of January 2012
And projection: 2,376,640 in early 2013

This estimate was made by adding up the total available floor space at all of Google's data centers, combined with knowledge on how the data centers are constructed. I've also checked the numbers against Google's known energy consumption, and various other snippets of detail revealed by Google themselves.

Satellite imagery:

Google doesn't publicly say how many servers they have. They keep the figure secret for competitive reasons. If Microsoft over-estimates and invests in more servers then they'll waste money - and this would be good for Google. Conversely, if Microsoft builds fewer servers then they won't match Google's processing power, and again, this would be good for Google. Nevertheless, from the limited amount of information that is available I've attempted to make a rough estimate.

First of all, here's some background on how Google's data centers are built and organised. Understanding this is crucial to making a good estimate.

Number and location of data centers

Google build and operate their own data centers. This wasn't always the case. In the early years they rented colocation space at third-party centers. Since the mid-2000s, however, they have been building their own. Google currently (as of January 2012) has eight operational data centers. There are six in the US and two in Europe. Two more are being built in Asia and one more in Europe. A twelfth is planned in Taiwan but construction hasn't yet received the go-ahead.

Initially the data center locations were kept secret. Google even purchased the land under a false company name. That approach didn't quite work however. Information always leaked out via the local communities. So now Google openly publishes the info:

Here are all 12 of Google's self-built data centers, listed by year they became operational:

2003 - Douglas County, Georgia, USA (container center 2005)
2006 - The Dalles, Oregon, USA
2008 - Lenoir, North Carolina, USA
2008 - Moncks Corner, South Carolina, USA
2008 - St. Ghislain, Belgium
2009 - Council Bluffs, Iowa, USA
2010 - Hamina, Finland
2011 - Mayes County, Oklahoma, USA

2012 - Profile Park, Dublin, Ireland (operational late 2012)
2013 - Jurong West, Singapore (operational early 2013)
2013 - Kowloon, Hong Kong (operational early 2013)
201? - Changhua Coastal Industrial Park, Taiwan (unconfirmed)

These are so-called “mega data centers” that contain hundreds of thousands of servers. It's possible that Google continues to rent smaller pockets of third-party colocation space, or has servers hidden away at Google offices around the world. There's online evidence, for example, that Google was still seeking colocation space as recently as 2008. Three of the mega data centers came online later that year, however, and that should have brought the total capacity up to requirements. It's reasonable to assume that Google now maintains all its servers exclusively at its own purpose-built centers - for reasons of security and operational efficiency.

Physical construction of data centers

Although the locations are public knowledge, the data center insides are still fairly secret. The public are not allowed in, there are no tours, and even Google employees have restricted access. Google have, however, revealed the general design principles.

The centers are based around mobile shipping containers. They use standard 40' intermodal containers which are ~12m long and ~2.5m wide. Each container holds 1,160 servers. The containers are lined up in rows inside a warehouse, and are stacked two high.

See the video Google released in 2009: Google container data center tour

Are all of Google's data centers now based on this container design? We don't know for sure, but assume that they are. It would be sensible to have a standardised system.

As for the servers themselves - they use cheap, low-performance, open-case machines. The machines only contain the minimal hardware required to do their job, namely: CPU, DRAM, disk, network adapter, and on-board battery-powered UPS. Exact up-to-date specifications are not known, but in 2009 an average server was thought to be a dual-core dual-processor (i.e. 4 cores) with 16 GB RAM and 2 TB disk.

The containers are rigged to an external power supply and cooling system. Much of the space inside a warehouse is taken up with the cooling pipes and pumps. The cooling towers are generally external structures adjacent to the warehouse.

Counting servers based on data center floor space

This is by no means a precise method, but it gives us an indication. It works as follows.

First we determine the surface area occupied by each of Google's data center buildings. Sometimes this information is published. For example the data center at The Dalles is reported to be 66,000 m². The problem with this figure, however, is we don't know if it includes only the warehouse building itself or the whole plot of land including supporting buildings, car parks, and flower beds.

So, to be sure of getting the exact size of only the buildings, I took satellite images from Google Maps and used those to make measurements. Due to out-of-date imagery some of the data centers are not shown on Google Maps, but those that are missing can be found on Bing Maps instead.

Having retrieved the satellite imagery of the buildings I then superimposed rows of shipping containers drawn to scale. Care was taken to ensure the containers occupied approximately the same proportion of total warehouse surface area as seen in the video linked above. That is, well under 50% of the floor space, probably closer to 20%. An example of this superimposed imagery is attached to this post; it shows one of the warehouses in Douglas County, Georgia, USA.

All floor plan images:

Having counted how many container footprints fit inside each warehouse, I then doubled those figures. This is because I assume all containers are stacked two high. Quite a large assumption, but hopefully a fair one.

It turns out that in general the centers house around 200,000 servers each. Douglas County is much larger at about twice that figure. Meanwhile Lenoir, Hamina, and Mayes County are smaller. Mayes County is due to be doubled in size during 2012. The sizes of the future data centers in Singapore and Hong Kong have not been measured. Instead I assume that they'll also host around 200,000 servers each.

This results in the following totals:

417,600 servers - Douglas County, Georgia, USA
204,160 servers - The Dalles, Oregon, USA
241,280 servers - Council Bluffs, Iowa, USA
139,200 servers - Lenoir, North Carolina, USA
250,560 servers - Moncks Corner, South Carolina, USA
296,960 servers - St. Ghislain, Belgium
116,000 servers - Hamina, Finland
125,280 servers - Mayes County, Oklahoma, USA

Sub-total: 1,791,040

Future data centers that'll be operational by early 2013:

46,400 servers - Profile Park, Dublin, Ireland
200,000 servers - Jurong West, Singapore (projected estimate)
200,000 servers - Kowloon, Hong Kong (projected estimate)
139,200 additional servers - Mayes County, Oklahoma, USA

Grand total: 2,376,640
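As a sketch, the arithmetic above can be reproduced in a few lines of Python. The per-warehouse footprint counts below are back-derived from my totals (servers ÷ 1,160 ÷ 2), so they're assumptions illustrating the method, not published figures:

```python
# Floor-space estimate: containers counted per warehouse floor,
# stacked two high, 1,160 servers per container.
SERVERS_PER_CONTAINER = 1160
STACK_HEIGHT = 2

# Container footprints per site (my assumption, derived from the totals)
footprints = {
    "Douglas County, GA": 180,
    "The Dalles, OR": 88,
    "Council Bluffs, IA": 104,
    "Lenoir, NC": 60,
    "Moncks Corner, SC": 108,
    "St. Ghislain, Belgium": 128,
    "Hamina, Finland": 50,
    "Mayes County, OK": 54,
}

servers = {site: n * STACK_HEIGHT * SERVERS_PER_CONTAINER
           for site, n in footprints.items()}
subtotal = sum(servers.values())
print(subtotal)  # 1791040

# Capacity expected to be online by early 2013
future = 46_400 + 200_000 + 200_000 + 139_200
print(subtotal + future)  # 2376640
```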

Technical details revealed by Google

A slide show published in 2009 by Google Fellow +Jeff Dean reveals lots of interesting numbers. In particular it mentions "Spanner", which is the storage and computation system used to span all of Google's data centers. This system is designed to support 1 to 10 million globally distributed servers.

Given that this information was published over two years ago, it's likely the number of servers is already well into that 1-to-10 million range. And this would match with the floor space estimation.

Slide show:

Counting servers based on energy consumption

Last year +Jonathan Koomey published a study of data center electricity use from 2005 to 2010. He calculated that the total worldwide use in 2010 was 198.8 billion kWh. In May of 2011 he was told by +David Jacobowitz (program manager on the Green Energy team at Google) that Google's total data center electricity use was less than 1% of that worldwide figure.

From those numbers, Koomey calculated that Google was operating ~900,000 servers in 2010. He does say, however, that this is only "educated guesswork". He factored in an estimate that Google's servers are 30% more energy efficient than conventional ones. It's possible that this is an underestimate; Google does pride itself on energy efficiency.

If we take Koomey's 2010 figure of 900,000 servers, and then add the Hamina center (opened late 2010) and the Mayes County center (opened 2011) that brings us to over a million servers. The number would be ~1,200,000 if we were to assume all data centers are the same size.
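A minimal sketch of that scaling, assuming the 900,000 figure covered the six centers operational before Hamina and Mayes County, and that all centers host roughly similar numbers of servers:

```python
# Koomey's 2010 estimate, scaled up for the centers opened since.
# Assumes 900,000 servers were spread across six centers then,
# and eight similar-sized centers are running now.
koomey_2010 = 900_000
scaled = koomey_2010 / 6 * 8   # six centers then, eight now
print(int(scaled))             # 1200000
```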

Koomey's study:


The figure of 1,791,040 servers is an estimate. It's probably wrong. But hopefully not too wrong. I'm pretty confident it's correct within an order of magnitude. I can't imagine Google has fewer than 180,000 servers or more than 18 million. This gives an idea of the scale of the Google platform.


YouTube videos:
- Google container data center tour
- Google Data Center Efficiency Best Practices. Part 1 - Intro & Measuring PUE
- Continual improvements to Google data centers: ISO and OHSAS certifications
- Google data center security
- Google patent for container-based data centers
- Standard container sizes
- +Jeff Dean's slideshow about Google platform design
- "In the Plex" book by +Steven Levy
- +Jonathan Koomey's data center electricity use

Articles by +Rich Miller of Data Center Knowledge:

Original copy of this post:

Attached image below is one of Google's data warehouses in Douglas County, Georgia. Photo is from Google Maps, with an overlay showing the server container locations.
Google makes their own servers too on site. 

James Pearn

Shared publicly  - 
Henry Markram, creator of the Human Brain Project, is giving a presentation today in Warsaw, Poland. The aim of the HBP is to build a molecular-level simulation of the human brain within a supercomputer. Markram believes this will be possible by the year 2023. He wants to use the simulation to unravel the exact nature of consciousness within his lifetime. I hope he succeeds, because I want to know the secret within my lifetime too. Website:
+James Pearn My point is that for the output to be comprehensible -- let alone interesting or useful -- to humans, it has to be informed by enough of the huge volume of input a human gets from environment and culture. Otherwise you're just creating Cthulhu.

James Pearn

Shared publicly  - 
A vision of the future

I came across this image on the new Singularity Institute website and I really like it. Will the Earth ever look like this, I wonder? Imagine all of humanity condensed into a few mega high-rise cities while the rest of the planet is returned to nature.

I would expect the skyscrapers to be much taller though. The cities would be connected via a global network of 8,000 km/h vacuum tube trains. Food production would be fully automated and energy would be generated via nuclear fusion.

Image created by: +Conrad Allan 
My guess is they tore down the taller buildings, once most of the people had been uploaded to data centers underground or in orbit.

James Pearn

Shared publicly  - 
The Transcension Hypothesis
When post-humans leave the visible universe

We've not yet managed to reverse engineer the brain and build artificially intelligent replicants, but it's only a matter of time. And once we do, an intelligence explosion will follow. Imagine building an artificial brain that is just as sentient as a human but operates at a million times the speed and memory capacity.

These replicants will be our post-human descendants. With superintelligence they'll figure out a deeper understanding of the laws of physics. They'll then build technology to manipulate spacetime and disappear into whatever dimensions lie beyond. That's transcension.

Great new video from Jason Silva:
The Transcension Hypothesis - What comes after the singularity?
+Daniel Estrada Well, first off, I think you have a lot of chutzpah to first engage in an ad hominem attack, and then later tell me to calm down.

"You are spending a lot of words saying very little [...]"
Several things come to mind:
1. This smells like an ad hominem attack.
2. It's certainly insulting on a personal level.  This is not ripping on a chosen profession, but a person.
3. I was attempting to address the points you raised in response to me.  Therefore, I felt I was saying exactly what I needed to say.

"I don't care about your prejudice against philosophers, and I won't bother to argue against it."
But then you proceed to do that more-or-less in your next paragraph.  Speaking of which...  The first two "philosophers" you mention are not philosophers to me at all.  Chomsky and Skinner... to me they are a linguist and a behavioral psychologist, respectively.  It just so happens I recognize their contributions to cognitive science.  I just don't recognize them as philosophers, and I really don't care that anyone else does.

When you say "you don't know your history very well," you presume that I place any kind of value on history.  Other people care about that stuff.  Not me.  When people say things like that to me, it is automatically interpreted as snobbish and potentially classist.  It also strikes me as highly irrelevant.  If I'm talking about politics or etymology, then knowing history is expected.  But I frankly don't care much about the history of science or philosophy as much as I care about the ideas and the discoveries.

In a similar vein, let me point out something in your own G+ profile:
"It is somewhat unfashionable to talk about thinkers that inform your work, as opposed to issues. But philosophers in particular tend to map out the problem space by reference to each position's strongest defenders."
And this is one of the reasons I really don't accept what philosophy has to offer: this insistence on tying concepts to people.  It seems regressive to do things this way, because it's not a very efficient means of cataloging knowledge.  (Also bad because my brain does not work this way.)  When you add in the regrettable human obsession with nationalism, you get different groups calling the same thing by two different people's names.

So, now moving on to your next post...

(1) Science is reductionist in its very nature.  I know a brain is made up of neurons.  I know that the brain is what does thinking.  I am amused by your attempt to corner me in a philosophical conundrum by asserting that I can't endorse reductionism and emergent behaviors at the same time.  You claim that if consciousness can be reduced, it can be traced to an individual.  Then you strongly imply that this is in fact my viewpoint.

First of all, I think your attack stems from a misapprehension of how I understand "Reductionism."  I'm going to borrow this definition from Wikipedia, simply to avoid making this far too long:
"Reductionism can mean either (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents."
Since I am not a philosopher and don't find much use for philosophy, I think you know by now that I mean sense (a) as stated above.

Based on this, it would seem obvious that in my world view, there is no intrinsic conflict between my adherence to a brand of scientific reductionism and my conviction that consciousness is an emergent phenomenon.

What do I mean by an emergent phenomenon/behavior?

I define an emergent phenomenon or behavior as an activity or behavior of the collection (e.g., an organism, an ant colony, a bee hive) that is only exhibited in aggregate and not exhibited by individual components (e.g., a cell, an ant, or a worker bee).  Furthermore, it is implicit that the lowest level components of such a system are very simple and work together.  This implies communication of some sort, to coordinate behavior.

I would be willing to concede that a corporation might be an emergent entity, but it doesn't exhibit certain emergent behaviors that I am looking for.  Corporations exhibit some intelligent behavior, but in other respects don't measure up to human-level intelligence, let alone exceed it.

However, I disagree that the most interesting level of analysis is of the organized whole.  Let me shift focus to the human brain.  While it's true that a single neuron doesn't do much (and therefore consciousness can't be "reduced" to that level, to use your understanding of the word), it's also true that the brain can be broken down into subsystems which are composed of neurons and which communicate with each other.  To me, this is very reminiscent of Minsky's Society of Mind.  If we're to have any hope of building synthetic minds, it would certainly help to understand how these subsystems operate and how they are organized, how they communicate together.

(2) You have no problem with representational theories of mind?  That's not quite the ringing endorsement I expected.  It seemed to me that you were firmly in the representational theory of mind camp.

I am also unaware that I was using any kind of slanderous language, so I am sincerely confused by your accusation.

"The issue at hand here is about collective intelligence, and a corporation is clearly a collectively intelligent agent, acting according to its own internal dynamics."
Actually, the issue at hand for me was swinging by to watch Mr. Silva's video and read what Mr. Pearn had to say, and to read the follow-on discussion.  You were the one who injected collective intelligence into the discussion, and in my initial response to this thread, I didn't address it.

That was by intention.  Your conjecture is interesting, as far as it goes.  Maybe it's correct (and maybe I even agree with most of your assertions about corporations), but I don't find it a terribly interesting or useful observation.  After all, corporations aren't the kind of intelligent entities that I find interesting, and I find them only marginally useful in my day-to-day life in the sense that I am forced to deal with them.  Afterward, I did briefly address your argument about corporations, but only superficially, because I really only cared about one small piece of what you had to say.

And that's it.  When you write, "But you haven't even engaged my actual argument. Whether or not corporations are intelligent is besides the point," you've missed the mark by a proverbial mile.  Because I never cared about your whole argument regarding corporations as being just like the entities that Singularists worry about.  I am not here for you or your posts.  I was here for something else, and you happened to be part of the discussion.  The argument seemed mostly self-evident (though I had some reservations about some of the ancillary bits), and therefore kind of banal to me.

Is there a way to really emphasize even more how much I really don't care about your thesis about corporations?  I mean, I'm not saying you don't have a point, but it doesn't change the fact that (a) corporations exist, (b) they do bad things, (c) they are more powerful than any one person, and (d) there's precious little that I can do to change that.

Just for emphasis...
"The fact that you completely missed this argument is somewhat disappointing, because I think the question is interesting."
I didn't miss your argument.  I do not think it is interesting.  When you want to talk about minds and how one might go about building a mind, then give me a call.  A corporation does not, in my view, have a mind.

By the way, you used "beyond reproach" incorrectly.  You sabotaged yourself by saying the exact opposite of what you meant.  Of course, you were wrong anyway since I wasn't "avoiding the question" in the sense that you meant.  My prejudices are what they are, and they exist for a reason which you have some inkling of now.  I would not have gone off on you at all if your first comment hadn't touched a raw nerve.

James Pearn

Shared publicly  - 
What is Google's total computational capacity?

I estimate: 40 petaflops

This is 4x as powerful as the world's most powerful supercomputer.

For competitive reasons Google doesn't reveal this information themselves. We can, however, estimate their total number of servers together with the capacity per server. These figures can then be compared to other high-performance computer systems and used to extrapolate total capacity.

Number of servers

In a previous post from January 2012 I calculated that Google's total number of servers is around 1,800,000. This includes all eight of their self-built data centers currently in operation worldwide. Other respected industry watchers are saying Google has 900,000 servers. But that figure is based on only a single data point (energy usage) that is both unreliable and over a year old. Google have opened whole new data centers since then. So I still think 1,800,000 is a reasonable up-to-date estimate.

Average power per server

In 2009 it was revealed that the average Google server is a commodity-class, dual-processor, dual-core, x86 PC system. That is, each server has four processor cores. See the paper where this is described: (PDF, page 7). Note that this paper was published three years ago. It's quite possible that the servers are replaced over a three-year cycle. So the average now, in 2012, might be a dual-processor, quad-core system (eight cores per server, or even more). But let's be conservative and assume the 2009 info is still valid.

This means Google is running ~7,200,000 processor cores.

Google has said they go for power in numbers. That is, they use lots of cheap processors rather than a smaller number of costlier, more powerful ones. Let's assume then that the average processor is one that first came to market five years ago, i.e. in 2007. This might be the Intel Core2 Duo E4400, running at 2 GHz. This processor is capable of around 6 gigaflops per core. Multiply that by our estimated number of cores and Google's total comes out at 43 petaflops.

The capacity of a system is not, however, a simple multiplication of core count and flops-per-core. Rarely can a system reach its theoretical maximum. So for that reason it's helpful to look at other large-scale systems where the total capacity is known.

TOP500 supercomputers

According to the list, the world's most powerful supercomputer is currently the K computer in Japan. It has 705,024 processor cores and a maximum speed of 10 petaflops. This gives it an average speed-per-core of 14.9 gigaflops.

The K computer uses Sparc VIIIfx processors which are rated at 16 gigaflops per core. This tells us that the supercomputer is achieving 93% of the theoretical capacity of all its processors combined. If Google's servers achieve a similar percentage that would mean their total capacity is 40 petaflops, or four times that of the K computer.
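The whole chain of reasoning, from server count to sustained petaflops, can be sketched as follows. All inputs are the estimates discussed above, not confirmed figures:

```python
# Back-of-envelope capacity estimate (all inputs are rough estimates)
servers = 1_800_000
cores_per_server = 4        # dual-processor, dual-core (2009 spec)
gflops_per_core = 6         # ~Intel Core2-class core at 2 GHz
linpack_efficiency = 0.93   # fraction of peak achieved by the K computer

cores = servers * cores_per_server           # 7,200,000 cores
peak_pflops = cores * gflops_per_core / 1e6  # theoretical peak, ~43 PF
sustained_pflops = peak_pflops * linpack_efficiency
print(round(sustained_pflops, 1))            # ~40 petaflops
```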

Note that even if Google were able and inclined to run the Linpack benchmark across their whole platform they still wouldn't qualify for inclusion in the TOP500 list. Supercomputers only qualify if they're housed entirely under a single roof.

Amazon EC2 Cluster

An Amazon EC2 Cluster instance is currently number 42 on the TOP500 list. Like Google, it is also built using commodity hardware. The exact details are not known, but their web pages mention Xeon and Opteron x86 processors. In a benchmark test the cluster was able to achieve 240 teraflops using 17,024 cores. This averages to 14 gigaflops per core. If Google's servers are around the same performance, that would give them a total of just over 50 petaflops.

Grid computing

BOINC is a grid computing system originally developed for the SETI@home project. Volunteers around the world download client software which utilizes their PC's spare CPU cycles for scientific research. As of February 2012 the system has ~450,000 active computers (hosts) and averages 5.7 petaflops of throughput.

If we assume that the average BOINC host has the same power as the average Google server, and if we also assume that the average BOINC host is utilized the same amount of time as a Google server, then we can simply multiply the figures. Google has four times the number of servers as BOINC has hosts, so that would mean Google's total processing power is 22.8 petaflops.

Folding@home is another distributed computing project similar to BOINC. It is designed to perform simulations of protein folding and other molecular dynamics. As of February 2012 the project had around 414,000 active processors for a total of 8.4 petaflops. If we assume that Google's average processor performs similar to the average Folding@home processor, this would bring Google's total processing power to 36 petaflops.
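Both scalings can be sketched in a couple of lines. The host and throughput figures are the February 2012 numbers quoted above, and the equal-performance assumption is, as noted, a big one:

```python
# Scale grid-computing throughput up to Google's estimated machine count,
# assuming Google's average machine performs like the average volunteer host.
google_servers = 1_800_000

boinc_hosts, boinc_pflops = 450_000, 5.7
boinc_scaled = google_servers / boinc_hosts * boinc_pflops
print(boinc_scaled)              # 22.8 petaflops

fah_hosts, fah_pflops = 414_000, 8.4
fah_scaled = google_servers / fah_hosts * fah_pflops
print(round(fah_scaled, 1))      # ~36.5 petaflops
```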

Future growth

If Google's computational capacity grows according to Moore's Law then it will double every 18 months. This means Google will become an exascale machine (capable of 1 exaflops) by 2019.
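A quick check of that projection, assuming growth starts from the ~40 petaflops estimated above in early 2012:

```python
import math

# How long until 40 petaflops doubles up to 1 exaflops (1000 petaflops),
# doubling every 18 months?
doublings = math.log2(1000 / 40)   # ~4.6 doublings needed
years = doublings * 1.5
print(2012 + years)                # ~2019
```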

Google said themselves in 2009 that their system is designed for 1 to 10 million servers. If they have ~2 million currently, that means there's room for five-fold growth, which would mean up to ~200 petaflops.

To reach 1 exaflops Google might need to evolve their architecture. Maybe they'll start using GPUs, or processors with hundreds of cores. I've no idea, but I would guess someone inside Google is already thinking about it.


- FLOPS on Wikipedia
- K computer on Wikipedia
- Amazon EC2 Cluster
- BOINC grid computing project
- Folding@home grid computing project
- the "globe" graphic used below

#googlecomputecapacity #googleservercount #petascale #exascale #exacycle #singularity
I understand that processing power is only one aspect of intelligence, and there is a big difference between the raw performance of the brain and its other aspects, like its special design and the natural software that controls it.
But the human brain's computing power is a milestone for artificial intelligence. It is only a sign of the singularity for me.

James Pearn

Shared publicly  - 
Deep down, your brain is a chaotic seething soup of particles. On a higher level it is a jungle of neurons, and on a yet higher level it is a network of abstractions that we call symbols. The most central and complex symbol is the one you call "I". An "I" is a strange loop where the brain's symbolic and physical levels feed back into each other and flip causality upside down so that symbols seem to have gained the paradoxical ability to push particles around, rather than the reverse.

Some pre-Christmas downtime, reading in Starbucks on Leopoldstrasse.
I liked his other book Gödel, Escher, Bach but that was in another lifetime. And is it right to say "I" liked it anyway?
  • University of St Andrews
    Biochemistry (BSc Hons), 1992 - 1996
Basic Information
Thirty-something web developer, online community manager, and CTO. Originally from the UK, now living in Munich, Germany.

Interested in artificial intelligence, neuroscience, Google, exponential advances in technology, cross-country skiing, and river surfing.

Also known as Editor Bob, founder of Toytown Germany.
Map of the places this user has lived
Munich, Germany
Edinburgh, UK - St Andrews, UK - Chester, UK
Contact Information
+49 177 7838547
Unertlstr. 24, 80803 Munich Germany
James Pearn's +1's are the things they like, agree with, or want to recommend.

PINPOST - an app that can make virtual contacts tangible, but doesn't have to. Use the PINPOST app to find people in your immediate

bitcoinj 0.11.2 - Google Groups

Google Groups. bitcoinj 0.11.2. Andreas Schildbach, 22.04.2014 09:08. Published in the group: bitcoinj-announce. -----BEGIN PGP SIGNED

Project Meshnet

Our objective is to create a versatile, decentralized network built on secure protocols for routing traffic over private mesh or public internetworks independent of a central supporting infrastructure.

Bitcoinium Prime ★No-Ads★

Track your favorite crypto-currencies with ease directly from your Android smartphone! ☆ Following exchanges are supported: • VirtEx: https:

Mycelium Testnet Wallet

This is the wallet variant for the Testnet for the Mycelium Bitcoin Wallet

ZenMate - Your Free Privacy & Freedom companion!

Enjoy unrestricted access to any website from anywhere + total privacy protection & data encryption. FREE addon for Google Chrome. Join over

Taco Libre - Munich

First authentic taqueria in Munich - opened July 2010 at Munich's Hauptbahnhof. Offering Tacos - Burritos - Quesadillas - Fresh Salads - Cof

Blockchain - Bitcoin Wallet

*** Make sure you are scanning the QR Code in Account Settings not on the welcome page ***Blockchain is a bitcoin wallet which combines the

Mycelium Bitcoin Wallet

With the Mycelium Bitcoin Wallet you can send and receive Bitcoins using your mobile phone. The unparalleled cold storage functionality allo

Free Ross Ulbricht

“It is traditional for the government to exploit high profile cases and sensational charges to make bad law.”Ross’ attorney, Joshua L. Drate

Lenovo to acquire Motorola Mobility

We've just signed an agreement to sell Motorola to Lenovo for $2.91 billion. As this is an important move for Android users everywhere, I wa


Credit cards weren't made for the internet. Bitpay, every country, no chargebacks.

comdirect mobile App

With the free comdirect mobile app you can access your accounts and portfolios anytime, anywhere. Banking and brokerage at your finge


DriveNow – car sharing in Berlin, Hamburg, Munich, Düsseldorf and San Francisco


The free "Bayern-Fahrplan" app for Android, from the Bayerische Eisenbahngesellschaft mbH (BEG), is a helpful travel companion on public


Bitcoin Wallet for Android

Internet population by country and their favorite websites

A graphic of the top websites by country · Wikipedia list of countries by internet users in 2012. China 568192000 42.3% United States

Spunky startup company which is taking the European news publishing industry by storm.
Public - a year ago
reviewed a year ago
Great food, although the portions are slightly ungenerous I'd say. Smallish place, one room with space for ~10 tables. Cozy decor, leopard skin upholstery on the chairs. Will come back.
Food: ExcellentDecor: Very goodService: Good
Public - a year ago
reviewed a year ago
Traditional Ethiopian where you eat with your fingers using the injera bread. Go easy on the honey wine, it slips down so easily you can be drunk as a skunk within 10 minutes.
Food: Very goodDecor: GoodService: Very good
Public - a year ago
reviewed a year ago
KVR specialises in steak. You choose your cut of meat from the vitrine and they grill it on the charcoal BBQ in front of you. Steak snobs say it's not the best, but I think it's just fine. I live on the same street and go there every couple of months. They have a nice terrace that catches the evening sun in the summer.
Food: Very goodDecor: GoodService: Good
Public - a year ago
reviewed a year ago
Great selection of Asian groceries, including a walk-in refrigerated area for fresh spices. Friendly service has always been quick to help me find what I was looking for.
Food: Very goodDecor: GoodService: Very good
Public - a year ago
reviewed a year ago
This is my local Japanese restaurant. I go there regularly and love it. I'd recommend the 10-course menu for two. It comes with various starters, miso soup, sushi, sashimi, tempura, and green tea tiramisu.
Food: Very goodDecor: GoodService: Very good
Public - a year ago
reviewed a year ago
Address is wrong. This cafe is now located at: Klopstockstraße 10, 80804 München
Food: Poor to fairDecor: Poor to fairService: Poor to fair
Public - a year ago
reviewed a year ago