While it's an interesting thought experiment, I think this article badly misunderstands the nature of intelligence. Intelligence isn't a domain where more processing power and more information yield exponential gains; it's one where you hit diminishing returns no matter how much computation you have.

Take weather prediction as an example. It's a complex system, characterized by a lack of information, noise, and complicated turbulent flows, with a clear prediction task. Despite throwing crazy amounts of processing power at the problem (see, e.g., http://goo.gl/VGLxZ6), the gains in our ability to make reliable predictions are merely incremental. That's because no matter how much information you gather or processing power you throw at the problem, noise and uncertainty dominate.
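To make that concrete, here's a toy sketch of my own (not from the article) using the Lorenz-63 system, the classic textbook stand-in for weather-like chaos. Two runs that start almost identically drift apart exponentially, so each order-of-magnitude improvement in how precisely you measure the starting state buys only a roughly constant extra slice of usable forecast. The exact numbers don't matter; the point is that exponential effort translates into merely linear gains in forecast horizon.

# Toy illustration of chaotic divergence in the Lorenz-63 system.
# Not a weather model; just a sketch of why better measurements and more
# compute stop paying off: small initial errors grow exponentially.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz-63 equations (crude, but
    # plenty to show divergence).
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def divergence_time(initial_error, tolerance=1.0, steps=20000, dt=0.01):
    # Integrate a "true" trajectory and a slightly perturbed forecast;
    # report how long the forecast stays within `tolerance` of the truth.
    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([initial_error, 0.0, 0.0])
    for i in range(steps):
        a, b = lorenz_step(a, dt), lorenz_step(b, dt)
        if np.linalg.norm(a - b) > tolerance:
            return i * dt
    return steps * dt

# "Exponentially better" initial measurements: each 100x reduction in the
# initial error extends the usable forecast by roughly the same amount.
for err in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(f"initial error {err:.0e}: usable forecast ~{divergence_time(err):.1f} time units")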

Intelligence is taking in information, extracting patterns, and applying those patterns to new information to make predictions. The problem is that even as you throw in more information and more computation, and match those patterns that much faster, the problems you are solving remain inherently noisy and uncertain; no amount of computation or preexisting information will yield a perfect solution. More computation rapidly hits diminishing returns, as the pattern matcher searches for a fit without the information it needs to do any better.
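Here's an even simpler sketch (again my own illustration, nothing from the article) of what "more information hits a wall of noise" looks like: estimating a single number from noisy observations. The error shrinks only like 1/sqrt(n), so going from a thousand samples to a million buys far less than the first hundred did, and it never reaches zero.

# Diminishing returns from "more data" on a noisy estimation problem:
# the error in the estimate falls only as 1/sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
true_value = 3.7          # the underlying pattern we are trying to recover
noise_std = 2.0           # irreducible noise in every observation

for n in [100, 1_000, 10_000, 100_000, 1_000_000]:
    samples = true_value + noise_std * rng.standard_normal(n)
    error = abs(samples.mean() - true_value)
    print(f"n={n:>9}: error ~ {error:.4f} (theory ~ {noise_std / np.sqrt(n):.4f})")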

While it may be true that Moore's Law and Metcalfe's Law yield exponential growth, there's a quiet assumption in the article that the difficulty of the problems these intelligent systems are solving isn't growing exponentially as well. To my knowledge, all the evidence says the opposite. For every prediction task we try to solve, for every intelligent system we try to build, we hit diminishing returns: the noise and uncertainty in the system make the problem exponentially harder even as we desperately throw exponentially more computation and information at it.