James Pearn
Attended University of St Andrews
Lives in Munich, Germany
8,159 followers|2,410,305 views


James Pearn

Shared publicly  - 
A vision of the future

I came across this image on the new Singularity Institute website and I really like it. Will the Earth ever look like this, I wonder? Imagine all of humanity condensed into a few mega high-rise cities while the rest of the planet is returned to nature.

I would expect the skyscrapers to be much taller though. The cities would be connected via a global network of 8,000 km/h vacuum-tube trains. Food production would be fully automated and energy would be generated via nuclear fusion.

Image created by: +Conrad Allan 
My guess is they tore down the taller buildings, once most of the people had been uploaded to data centers underground or in orbit.

James Pearn

Shared publicly  - 
The Transcension Hypothesis
When post-humans leave the visible universe

We've not yet managed to reverse-engineer the brain and build artificially intelligent replicants, but it's only a matter of time. And once we do, an intelligence explosion will follow. Imagine building an artificial brain that is just as sentient as a human but operates at a million times the speed, with a million times the memory capacity.

These replicants will be our post-human descendants. With superintelligence they'll figure out a deeper understanding of the laws of physics. They'll then build technology to manipulate spacetime and disappear into whatever dimensions lie beyond. That's transcension.

Great new video from Jason Silva:
The Transcension Hypothesis - What comes after the singularity?
+Daniel Estrada Well, first off, I think you have a lot of chutzpah to first engage in an ad hominem attack, and then later tell me to calm down.

"You are spending a lot of words saying very little [...]"
Several things come to mind:
1. This smells like an ad hominem attack.
2. It's certainly insulting on a personal level.  This is not ripping on a chosen profession, but a person.
3. I was attempting to address the points you raised in response to me.  Therefore, I felt I was saying exactly what I needed to say.

"I don't care about your prejudice against philosophers, and I won't bother to argue against it."
But then you proceed to do that more-or-less in your next paragraph.  Speaking of which...  The first two "philosophers" you mention are not philosophers to me at all.  Chomsky and Skinner... to me they are a linguist and a behavioral psychologist, respectively.  It just so happens I recognize their contributions to cognitive science.  I just don't recognize them as philosophers, and I really don't care that anyone else does.

When you say "you don't know your history very well," you presume that I place any kind of value on history.  Other people care about that stuff.  Not me.  When people say things like that to me, it is automatically interpreted as snobbish and potentially classist.  It also strikes me as highly irrelevant.  If I'm talking about politics or etymology, then knowing history is expected.  But I frankly don't care much about the history of science or philosophy as much as I care about the ideas and the discoveries.

In a similar vein, let me point out something in your own G+ profile:
"It is somewhat unfashionable to talk about thinkers that inform your work, as opposed to issues. But philosophers in particular tend to map out the problem space by reference to each position's strongest defenders."
And this is one of the reasons I really don't accept what philosophy has to offer: this insistence on tying concepts to people.  It seems regressive to do things this way, because it's not a very efficient means of cataloging knowledge.  (Also bad because my brain does not work this way.)  When you add in the regrettable human obsession with nationalism, you get different groups calling the same thing by two different people's names.

So, now moving on to your next post...

(1) Science is reductionist in its very nature.  I know a brain is made up of neurons.  I know that the brain is what does thinking.  I am amused by your attempt to corner me in a philosophical conundrum by asserting that I can't endorse reductionism and emergent behaviors at the same time.  You claim that if consciousness can be reduced, it can be traced to an individual.  Then you strongly imply that this is in fact my viewpoint.

First of all, I think your attack stems from a misapprehension of how I understand "Reductionism."  I'm going to borrow this definition from Wikipedia, simply to avoid making this far too long:
"Reductionism can mean either (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents."
Since I am not a philosopher and don't find much use for philosophy, I think you know by now that I mean sense (a) as stated above.

Based on this, it would seem obvious that in my world view, there is no intrinsic conflict between my adherence to a brand of scientific reductionism and my conviction that consciousness is an emergent phenomenon.

What do I mean by an emergent phenomenon/behavior?

I define an emergent phenomenon or behavior as an activity or behavior of the collection (e.g., an organism, an ant colony, a bee hive) that is only exhibited in aggregate and not exhibited by individual components (e.g., a cell, an ant, or a worker bee).  Furthermore, it is implicit that the lowest level components of such a system are very simple and work together.  This implies communication of some sort, to coordinate behavior.

I would be willing to concede that a corporation might be an emergent entity, but it doesn't exhibit certain emergent behaviors that I am looking for.  Corporations exhibit some intelligent behavior, but in other respects don't measure up to human-level intelligence, let alone exceed it.

However, I disagree that the most interesting level of analysis is of the organized whole.  Let me shift focus to the human brain.  While it's true that a single neuron doesn't do much (and therefore consciousness can't be "reduced" to that level, to use your understanding of the word), it's also true that the brain can be broken down into subsystems which are composed of neurons and which communicate with each other.  To me, this is very reminiscent of Minsky's Society of Mind.  If we're to have any hope of building synthetic minds, it would certainly help to understand how these subsystems operate and how they are organized, how they communicate together.

(2) You have no problem with representational theories of mind?  That's not quite the ringing endorsement I expected.  It seemed to me that you were firmly in the representational theory of mind camp.

I am also unaware that I was using any kind of slanderous language, so I am sincerely confused by your accusation.

"The issue at hand here is about collective intelligence, and a corporation is clearly a collectively intelligent agent, acting according to its own internal dynamics."
Actually, the issue at hand for me was swinging by to watch Mr. Silva's video and read what Mr. Pearn had to say, and to read the follow-on discussion.  You were the one who injected collective intelligence into the discussion, and in my initial response to this thread, I didn't address it.

That was by intention.  Your conjecture is interesting, as far as it goes.  Maybe it's correct (and maybe I even agree with most of your assertions about corporations), but I don't find it a terribly interesting or useful observation.  After all, corporations aren't the kind of intelligent entities that I find interesting, and I find them only marginally useful in my day-to-day life in the sense that I am forced to deal with them.  Afterward, I did briefly address your argument about corporations, but only superficially, because I really only cared about one small piece of what you had to say.

And that's it.  When you write, "But you haven't even engaged my actual argument. Whether or not corporations are intelligent is besides the point," you've missed the mark by a proverbial mile.  Because I never cared about your whole argument regarding corporations as being just like the entities that Singularists worry about.  I am not here for you or your posts.  I was here for something else, and you happened to be part of the discussion.  The argument seemed mostly self-evident (though I had some reservations about some of the ancillary bits), and therefore kind of banal to me.

Is there a way to really emphasize even more how much I really don't care about your thesis about corporations?  I mean, I'm not saying you don't have a point, but it doesn't change the fact that (a) corporations exist, (b) they do bad things, (c) they are more powerful than any one person, and (d) there's precious little that I can do to change that.

Just for emphasis...
"The fact that you completely missed this argument is somewhat disappointing, because I think the question is interesting."
I didn't miss your argument.  I do not think it is interesting.  When you want to talk about minds and how one might go about building a mind, then give me a call.  A corporation does not, in my view, have a mind.

By the way, you used "beyond reproach" incorrectly.  You sabotaged yourself by saying the exact opposite of what you meant.  Of course, you were wrong anyway since I wasn't "avoiding the question" in the sense that you meant.  My prejudices are what they are, and they exist for a reason which you have some inkling of now.  I would not have gone off on you at all if your first comment hadn't touched a raw nerve.

James Pearn

Shared publicly  - 
What is Google's total computational capacity?

I estimate: 40 petaflops

This is 4x as powerful as the world's most powerful supercomputer.

For competitive reasons Google doesn't reveal this information themselves. We can, however, estimate their total number of servers together with the capacity per server. These figures can then be compared to other high-performance computer systems and used to extrapolate total capacity.

Number of servers

In a previous post from January 2012 I calculated that Google's total number of servers is around 1,800,000. This includes all eight of their self-built data centers currently in operation worldwide. Other respected industry watchers say Google has 900,000 servers. But that figure is based on only a single data point (energy usage) that is both unreliable and over a year old. Google have opened whole new data centers since then. So I still think 1,800,000 is a reasonable, up-to-date estimate.

Average power per server

In 2009 it was revealed that the average Google server is a commodity-class, dual-processor, dual-core x86 PC system. That is, each server has four processor cores. See the paper where this is described (PDF, page 7). Note that this paper was published three years ago. It's quite possible that the servers are replaced on a three-year cycle, so the average now, in 2012, might be a dual-processor, quad-core system (eight cores per server, or even more). But let's be conservative and assume the 2009 info is still valid.

This means Google is running ~7,200,000 processor cores.

Google has said they go for power in numbers. That is, they use lots of cheap processors rather than a smaller number of costlier, more powerful ones. Let's assume then that the average processor is one that first came to market five years ago, i.e. in 2007. This might be the Intel Core 2 Duo E4400 running at 2 GHz. This processor is capable of around 6 gigaflops per core. Multiply that by our estimated number of cores and Google's total comes out at 43 petaflops.
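The arithmetic above can be sketched in a few lines of Python. All inputs are this post's estimates, not confirmed figures:

```python
# Back-of-the-envelope theoretical peak, using this post's estimates.
servers = 1_800_000        # estimated server count (from the January 2012 post)
cores_per_server = 4       # dual-processor, dual-core (2009 paper)
gflops_per_core = 6        # assumed 2007-era core, e.g. Core 2 Duo class

total_cores = servers * cores_per_server
peak_pflops = total_cores * gflops_per_core / 1e6   # gigaflops -> petaflops

print(f"{total_cores:,} cores")              # 7,200,000 cores
print(f"{peak_pflops:.1f} petaflops peak")   # 43.2 petaflops peak
```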

The capacity of a system is not, however, a simple multiplication of core count and flops-per-core. Rarely can a system reach its theoretical maximum. So for that reason it's helpful to look at other large-scale systems where the total capacity is known.

TOP500 supercomputers

According to the latest TOP500 list, the world's most powerful supercomputer is currently the K computer in Japan. It has 705,024 processor cores and achieved 10.51 petaflops on the Linpack benchmark. This gives it an average speed-per-core of 14.9 gigaflops.

The K computer uses SPARC64 VIIIfx processors, which are rated at 16 gigaflops per core. This tells us that the supercomputer achieves 93% of the theoretical capacity of all its processors combined. If Google's servers achieve a similar percentage, their total capacity would be around 40 petaflops, or four times that of the K computer.
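A sketch of that efficiency scaling, using the K computer's published Linpack result (10.51 petaflops, consistent with the 14.9 gigaflops-per-core figure); the Google numbers remain this post's estimates:

```python
# Scale the estimated theoretical peak by the efficiency the K computer achieves.
k_linpack_pflops = 10.51       # K computer Linpack result (TOP500, Nov 2011)
k_cores = 705_024
k_rated_gflops_per_core = 16   # SPARC64 VIIIfx per-core rating

achieved_gflops_per_core = k_linpack_pflops * 1e6 / k_cores        # ~14.9
efficiency = achieved_gflops_per_core / k_rated_gflops_per_core    # ~0.93

google_peak_pflops = 43.2      # theoretical peak estimated earlier in this post
google_estimate = google_peak_pflops * efficiency
print(f"~{google_estimate:.0f} petaflops")    # ~40 petaflops
```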

Note that even if Google were able and inclined to run the Linpack benchmark across their whole platform they still wouldn't qualify for inclusion in the TOP500 list. Supercomputers only qualify if they're housed entirely under a single roof.

Amazon EC2 Cluster

An Amazon EC2 Cluster instance is currently number 42 on the TOP500 list. Like Google, it is also built using commodity hardware. The exact details are not known, but their web pages mention Xeon and Opteron x86 processors. In a benchmark test the cluster was able to achieve 240 teraflops using 17,024 cores. This averages to 14 gigaflops per core. If Google's servers are around the same performance, that would give them a total of just over 50 petaflops.

Grid computing

BOINC is a grid-computing system originally developed for the SETI@home project. Volunteers around the world download client software which uses their PC's spare CPU cycles for scientific research. As of February 2012 the system has ~450,000 active computers (hosts) and averages 5.7 petaflops.

If we assume that the average BOINC host has the same power as the average Google server, and if we also assume that the average BOINC host is utilized the same amount of time as a Google server, then we can simply multiply the figures. Google has four times the number of servers as BOINC has hosts, so that would mean Google's total processing power is 22.8 petaflops.

Folding@home is another distributed computing project similar to BOINC. It is designed to perform simulations of protein folding and other molecular dynamics. As of February 2012 the project had around 414,000 active processors delivering a total of 8.4 petaflops. If we assume that Google's average processor performs similarly to the average Folding@home processor, this would bring Google's total processing power to 36 petaflops.
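Both grid-based cross-checks are the same pro-rata scaling: Google's estimated server count divided by each grid's host count, times the grid's measured throughput. A sketch, assuming this post's figure of 1,800,000 Google servers:

```python
# Pro-rata scaling against the two volunteer-computing grids.
google_servers = 1_800_000     # this post's estimate

grids = {                      # name: (active hosts, measured petaflops), Feb 2012
    "BOINC": (450_000, 5.7),
    "Folding@home": (414_000, 8.4),
}

for name, (hosts, pflops) in grids.items():
    estimate = pflops * google_servers / hosts
    print(f"{name}: {estimate:.1f} petaflops")
# BOINC: 22.8 petaflops
# Folding@home: 36.5 petaflops
```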

Future growth

If Google's computational capacity grows according to Moore's Law then it will double every 18 months. This means Google will become an exascale machine (capable of 1 exaflops) by 2019.

Google said themselves in 2009 that their system is designed for 1 to 10 million servers. If they have ~2 million currently, that means there's room for five-fold growth, which would mean up to ~200 petaflops.

To reach 1 exaflops Google might need to evolve their architecture. Maybe they'll start using GPUs, or processors with hundreds of cores. I've no idea, but I would guess someone inside Google is already thinking about it.
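The growth projection is a simple doubling calculation; a sketch, assuming 40 petaflops in 2012 and one doubling per 18 months:

```python
import math

# Doublings needed to go from 40 petaflops to 1 exaflops (1,000 petaflops),
# at a Moore's-Law pace of one doubling per 18 months.
doublings = math.log2(1000 / 40)    # ~4.6 doublings
years = doublings * 1.5             # 18 months = 1.5 years per doubling
print(f"exascale around {2012 + years:.0f}")   # exascale around 2019
```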


- FLOPS on Wikipedia
- K computer on Wikipedia
- Amazon EC2 Cluster
- BOINC grid computing project
- Folding@home grid computing project
- the "globe" graphic used below

#googlecomputecapacity #googleservercount #petascale #exascale #exacycle #singularity
I understand that processing power is only one aspect of intelligence, and that there is a big difference between the raw performance of the brain and its other aspects, like its special design and the natural software that controls it.
But the human brain's computing power is a milestone for artificial intelligence. For me it is a sign of the singularity.

James Pearn

Shared publicly  - 
Deep down, your brain is a chaotic seething soup of particles. On a higher level it is a jungle of neurons, and on a yet higher level it is a network of abstractions that we call symbols. The most central and complex symbol is the one you call "I". An "I" is a strange loop where the brain's symbolic and physical levels feed back into each other and flip causality upside down so that symbols seem to have gained the paradoxical ability to push particles around, rather than the reverse.

Some pre-Christmas downtime, reading in Starbucks on Leopoldstrasse.
I was reminded of an amazing book about "strange" loops that I read ca. 2001. I'll have to search for it and will hit you up again …

James Pearn

Shared publicly  - 
I ate the 99%.
treat is healthy :-)

James Pearn

Shared publicly  - 
Checking out an enormous dinosaur fossil at the Munich Minerals Fair this afternoon. I wonder what species of creature could be marveling at my bones in 200 million years' time? In reality, my bones probably won't survive that long, but this post just might. Website:
It certainly looks like some form of Ichthyosaur.

James Pearn

Shared publicly  - 
Carbon nanotube circuits built for first time
Full-wafer digital logic structures

Some days I hear news of technological progress which is so dizzying it almost makes me fall over. And today is another of those days.

Researchers at Stanford have built full-wafer digital logic structures using carbon nanotubes. Below is an electron microscope image of their work. The article they published yesterday is here:

Only last week I was reading about carbon nanotubes on Wikipedia and wondering how close we are to building complete microprocessors out of these. I concluded that although single transistors had been built, the fabrication of circuits was still many years away. And yet, here we are.

Incredible. This is the future unfolding before our eyes.
It really hits you between the eyes when you're wondering how many years until a thing arrives and then you read about it being done the following week :-) It's just one of many things that make the times we live in so damn interesting.

James Pearn

Shared publicly  - 
Interactive 3D model of the human brain

The Brain Surface and Tractography Viewer was developed by Dan Ginsburg and +Rudolph Pienaar. The different colours represent the different directions of the neural tracts. Inferior/superior are blue, lateral/medial are red, and anterior/posterior are green.

Take a look: - (requires Google Chrome)

The imagery is not as detailed as the static pictures posted by +Owen Phillips earlier this week. But it's nice to be able to interact with it, to rotate and explore the inner connectivity of the brain. Note, this requires a WebGL-enabled browser such as Chrome, Firefox, or Safari.

James Pearn

Shared publicly  - 
How many servers does Google have?

My estimate: 1,791,040 as of January 2012
And projection: 2,376,640 in early 2013

This estimate was made by adding up the total available floor space at all of Google's data centers, combined with knowledge on how the data centers are constructed. I've also checked the numbers against Google's known energy consumption, and various other snippets of detail revealed by Google themselves.

Satellite imagery:

Google doesn't publicly say how many servers they have. They keep the figure secret for competitive reasons. If Microsoft over-estimates Google's capacity and invests in more servers than necessary, they'll waste money - and this would be good for Google. Conversely, if Microsoft builds fewer servers then they won't match Google's processing power, and again, this would be good for Google. Nevertheless, from the limited amount of information that is available I've attempted to make a rough estimate.

First of all, here's some background on how Google's data centers are built and organised. Understanding this is crucial to making a good estimate.

Number and location of data centers

Google build and operate their own data centers. This wasn't always the case. In the early years they rented colocation space at third-party centers. Since the mid-2000s, however, they have been building their own. Google currently (as of January 2012) has eight operational data centers. There are six in the US and two in Europe. Two more are being built in Asia and one more in Europe. A twelfth is planned in Taiwan but construction hasn't yet received the go-ahead.

Initially the data center locations were kept secret. Google even purchased the land under a false company name. That approach didn't quite work however. Information always leaked out via the local communities. So now Google openly publishes the info:

Here are all 12 of Google's self-built data centers, listed by year they became operational:

2003 - Douglas County, Georgia, USA (container center 2005)
2006 - The Dalles, Oregon, USA
2008 - Lenoir, North Carolina, USA
2008 - Moncks Corner, South Carolina, USA
2008 - St. Ghislain, Belgium
2009 - Council Bluffs, Iowa, USA
2010 - Hamina, Finland
2011 - Mayes County, Oklahoma, USA

2012 - Profile Park, Dublin, Ireland (operational late 2012)
2013 - Jurong West, Singapore (operational early 2013)
2013 - Kowloon, Hong Kong (operational early 2013)
201? - Changhua Coastal Industrial Park, Taiwan (unconfirmed)

These are so-called “mega data centers” that contain hundreds of thousands of servers. It's possible that Google continues to rent smaller pockets of third-party colocation space, or has servers hidden away at Google offices around the world. There's online evidence, for example, that Google was still seeking colocation space as recently as 2008. Three of the mega data centers came online later that year, however, and that should have brought the total capacity up to requirements. It's reasonable to assume that Google now maintains all its servers exclusively at its own purpose-built centers - for reasons of security and operational efficiency.

Physical construction of data centers

Although the locations are public knowledge, the data center insides are still fairly secret. The public are not allowed in, there are no tours, and even Google employees have restricted access. Google have, however, revealed the general design principles.

The centers are based around mobile shipping containers. They use standard 40' intermodal containers which are ~12m long and ~2.5m wide. Each container holds 1,160 servers. The containers are lined up in rows inside a warehouse, and are stacked two high.

See the video Google released in 2009: Google container data center tour

Are all of Google's data centers now based on this container design? We don't know for sure, but assume that they are. It would be sensible to have a standardised system.

As for the servers themselves - they use cheap, low-performance, open-case machines. The machines only contain the minimal hardware required to do their job, namely: CPU, DRAM, disk, network adapter, and on-board battery-powered UPS. Exact up-to-date specifications are not known, but in 2009 an average server was thought to be a dual-core dual-processor (i.e. 4 cores) with 16 GB RAM and 2 TB disk.

The containers are rigged to an external power supply and cooling system. Much of the space inside a warehouse is taken up with the cooling pipes and pumps. The cooling towers are generally external structures adjacent to the warehouse.

Counting servers based on data center floor space

This is by no means a precise method, but it gives us an indication. It works as follows.

First we determine the surface area occupied by each of Google's data center buildings. Sometimes this information is published. For example the data center at The Dalles is reported to be 66,000 m². The problem with this figure, however, is we don't know if it includes only the warehouse building itself or the whole plot of land including supporting buildings, car parks, and flower beds.

So, to be sure of getting the exact size of only the buildings, I took satellite images from Google Maps and used those to make measurements. Due to out-of-date imagery some of the data centers are not shown on Google Maps, but those that are missing can be found on Bing Maps instead.

Having retrieved the satellite imagery of the buildings, I then superimposed rows of shipping containers drawn to scale. Care was taken to ensure the containers occupied approximately the same proportion of total warehouse surface area as seen in the video linked above. That is, well under 50% of the floor space, probably closer to 20%. An example of this superimposed imagery is attached to this post; it shows one of the warehouses in Douglas County, Georgia, USA.

All floor plan images:

Having counted how many container footprints fit inside each warehouse, I then doubled those figures. This is because I assume all containers are stacked two high. Quite a large assumption, but hopefully a fair one.

It turns out that in general the centers house around 200,000 servers each. Douglas County is much larger at about twice that figure. Meanwhile Lenoir, Hamina, and Mayes County are smaller. Mayes County is due to be doubled in size during 2012. The sizes of the future data centers in Singapore and Hong Kong have not been measured. Instead I assume that they'll also host around 200,000 servers each.

This results in the following totals:

417,600 servers - Douglas County, Georgia, USA
204,160 servers - The Dalles, Oregon, USA
241,280 servers - Council Bluffs, Iowa, USA
139,200 servers - Lenoir, North Carolina, USA
250,560 servers - Moncks Corner, South Carolina, USA
296,960 servers - St. Ghislain, Belgium
116,000 servers - Hamina, Finland
125,280 servers - Mayes County, Oklahoma, USA

Sub-total: 1,791,040

Future data centers that'll be operational by early 2013:

46,400 servers - Profile Park, Dublin, Ireland
200,000 servers - Jurong West, Singapore (projected estimate)
200,000 servers - Kowloon, Hong Kong (projected estimate)
139,200 additional servers - Mayes County, Oklahoma, USA

Grand total: 2,376,640
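As a sanity check, the sub-total above can be rebuilt from container counts: each 40' container holds 1,160 servers, with stacking already included in the count. Note the per-site container figures below are back-derived from the totals in this post (each divides exactly by 1,160); they are illustrative, not independent measurements:

```python
# Rebuild the sub-total from per-site container counts.
SERVERS_PER_CONTAINER = 1_160   # Google's published figure per 40' container

containers = {                  # counted container footprints x2 for stacking
    "Douglas County, Georgia":       360,
    "The Dalles, Oregon":            176,
    "Council Bluffs, Iowa":          208,
    "Lenoir, North Carolina":        120,
    "Moncks Corner, South Carolina": 216,
    "St. Ghislain, Belgium":         256,
    "Hamina, Finland":               100,
    "Mayes County, Oklahoma":        108,
}

total = sum(n * SERVERS_PER_CONTAINER for n in containers.values())
print(f"{total:,} servers")     # 1,791,040 servers
```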

Technical details revealed by Google

A slide show published in 2009 by Google Fellow +Jeff Dean reveals lots of interesting numbers. In particular it mentions "Spanner", which is the storage and computation system used to span all of Google's data centers. This system is designed to support 1 to 10 million globally distributed servers.

Given that this information was published over two years ago, it's likely the number of servers is already well into that 1-to-10 million range. And this would match with the floor space estimation.

Slide show:

Counting servers based on energy consumption

Last year +Jonathan Koomey published a study of data center electricity use from 2005 to 2010. He calculated that the total worldwide use in 2010 was 198.8 billion kWh. In May of 2011 he was told by +David Jacobowitz (program manager on the Green Energy team at Google) that Google's total data center electricity use was less than 1% of that worldwide figure.

From those numbers, Koomey calculated that Google was operating ~900,000 servers in 2010. He does say, however, that this is only "educated guesswork". He factored in an estimate that Google's servers are 30% more energy efficient than conventional ones. It's possible that this is an underestimate - Google does pride itself on energy efficiency.

If we take Koomey's 2010 figure of 900,000 servers, and then add the Hamina center (opened late 2010) and the Mayes County center (opened 2011) that brings us to over a million servers. The number would be ~1,200,000 if we were to assume all data centers are the same size.
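The ~1,200,000 figure is just Koomey's 900,000 scaled pro rata from six data centers to eight; a sketch, under the stated (and admittedly false) equal-size assumption:

```python
# Scale Koomey's 2010 energy-based estimate from six centers to eight.
servers_2010 = 900_000    # Koomey's educated guess for 2010
centers_2010 = 6          # self-built centers in Koomey's 2010 window
centers_2011 = 8          # plus Hamina (late 2010) and Mayes County (2011)

scaled = servers_2010 * centers_2011 // centers_2010
print(f"{scaled:,} servers")   # 1,200,000 servers
```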

Koomey's study:


The figure of 1,791,040 servers is an estimate. It's probably wrong. But hopefully not too wrong. I'm pretty confident it's correct within an order of magnitude. I can't imagine Google has fewer than 180,000 servers or more than 18 million. This gives an idea of the scale of the Google platform.


YouTube videos:
- Google container data center tour
- Google Data Center Efficiency Best Practices. Part 1 - Intro & Measuring PUE
- Continual improvements to Google data centers: ISO and OHSAS certifications
- Google data center security
- Google patent for container-based data centers
- Standard container sizes
- +Jeff Dean's slideshow about Google platform design
- “In the Plex” book by +Steven Levy
- +Jonathan Koomey's data center electricity use

Articles by +Rich Miller of Data Center Knowledge:

Original copy of this post:

Attached image below is one of Google's data warehouses in Douglas County, Georgia. Photo is from Google Maps, with an overlay showing the server container locations.
There used to be videos of Google engineering tech talks on the 'Google Platform' which talked about the software that manages this massive information machine spread over several continents. Unfortunately they've been disappeared for the 'competitive' reasons you cite above. It's fascinating stuff. I wonder if they still use this as the basic building block?
On-board gel-cell UPS is genius.

James Pearn

Shared publicly  - 
Henry Markram, creator of the Human Brain Project, is giving a presentation today in Warsaw, Poland. The aim of the HBP is to build a molecular-level simulation of the human brain within a supercomputer. Markram believes this will be possible by the year 2023. He wants to use the simulation to unravel the exact nature of consciousness within his lifetime. I hope he succeeds, because I want to know the secret within my lifetime too. Website:
+James Pearn My point is that for the output to be comprehensible -- let alone interesting or useful -- to humans, it has to be informed by enough of the huge volume of input a human gets from environment and culture. Otherwise you're just creating Cthulhu.

James Pearn

Shared publicly  - 
This salmon I bought today has a tracking code that you can type into and it'll tell you exactly where the fish was caught and what route it took to the supermarket. In this case the salmon F101335 was cultured in the fjords near Kirkenes on the north coast of Norway before being transported via Sweden to Buchholt in Germany. Pretty cool. I think all foods should have this.
back in late 2005 we - a small group of owners - decided to try to get our local, member-owned food coop to put meaningful labels - information far beyond simple WTO COA labels - on all products, and do background checks on the products carried. A 5-year political battle ensued, which we - the owners - lost to Industry and their pawns in management, but in the course of the effort we discovered a substantial contingent who maintained that one of their basic rights of ownership was bliss through ignorance. Labeling still needs to happen.

James Pearn

Shared publicly  - 
This slide sums up my attitude towards the financial crisis. It is taken from a TED talk given in 2009 by Juan Enriquez. He describes very eloquently how the world economy will be rebooted by three technologies: genetic engineering, tissue engineering, and robotics. Worth a watch: Juan Enriquez: Tech evolution will eclipse the financial crisis
The trouble was the credit expansion, which led to a massive expansion of the money supply. Debt is created out of thin air; it is not just a transaction in which money is transferred from one person to another. There is no way this build-up in debt could have occurred if the money supply hadn't expanded dramatically. There's no production to back up all this debt, so it has to go back to where it came from: nowhere. Sadly, most of the bigger economies face these problems, which is why I'm pretty pessimistic.

However, I do agree that innovation is the key to helping us out of this crisis. But it will not be easy to work our way through these piles of debt.
Thirty-something web developer, online community manager, and CTO. Originally from the UK, now living in Munich, Germany.

Interested in artificial intelligence, neuroscience, Google, exponential advances in technology, cross-country skiing, and river surfing.

Also known as Editor Bob, founder of Toytown Germany.
Map of the places this user has lived
Munich, Germany
Edinburgh, UK - St Andrews, UK - Chester, UK
Contact Information
Unertlstr. 24, 80803 Munich Germany
  • University of St Andrews
    Biochemistry (BSc Hons), 1992 - 1996
Basic Information
James Pearn's +1's are the things they like, agree with, or want to recommend.
Introducing Android Instant Apps | Android Developers Blog

Today we're sharing a preview of a new project that we think will change how people experience Android apps. We call it Android Instant Apps.

This Is The End: Venezuela Runs Out Of Money To Print New Money | Zero H...

Venezuela is now so broke that it no longer has enough money to pay for its money.

Cost of Living in Norway

Average prices of more than 40 products and services in Norway.

The Seventh Information Revolution

We want a New Information Revolution for Christmas. Open Prediction Markets are the New Information Revolution.

Beyond Salmon: Fish Buying FAQ

Have you ever wondered why some fish markets smell fishy and others don't? Is prepackaged fish as good as cut to order?

Greeks Told To Declare Cash "Under The Mattress", Jewelry And Precious S...

Bitcoin XT development and technical chat

Active members

Toytown Germany currently has 1583 posting members + 4701 reading members = 6284 active members.

Thoughts on the next 'stress test'

Posted on 18.08.15 10:22, 3 messages

Internet Live Stats - Internet Usage & Social Media Statistics

Watch the Internet as it grows in real time and monitor social media usage: Internet users, websites, blog posts, Facebook, Google+, Twitter.

WeatherPro - Android apps on Google Play

"At the moment there's nothing better." Test winner at Connect. "WeatherPro scores with a whole range of useful functions."

Airline seat densification will continue, it is just of matter of how th...

AdBlock. The #1 ad blocker with over 200 million downloads. Blocks YouTube, Facebook and ALL ads by default (unlike Adblock Plus).

Signatures to Remove Ellen Pao as CEO of Reddit Eclipses 73,000

A petition calling for Ellen Pao to resign as the CEO of Reddit has surged in the last 36 hours and now has over 73,000 signatures.

Unenumerated: The Greek financial mess; and some ways Bitcoin might help

Many years of government debt buildup in Greece has ultimately resulted, in the last few days, in a political and financial maelstrom.

63 reviews

Love the nouveaux-Bavarian style. :-)
reviewed 2 weeks ago

Seems to be the largest of the four or five supermarkets in Les Deux Alpes.
reviewed 2 months ago

Stocks most of the basics for European cooking. They also have some fresh herbs that the other supermarkets don't, such as coriander and dill. But it's still quite small; they don't have fresh salmon, for example.
reviewed 3 months ago