With this algorithm that optimally solves all well-defined problems, it's time to turn our attention to ill-defined problems - which are, in fact, what I already specialize in.
Abstract: An algorithm M is described that solves any well-defined problem p as quickly as the fastest algorithm computing a solution to p, save for a factor of 5 and low-order additive terms. M optimally distributes resources between the execution of provably correct p-solving programs and an enumeration of all proofs, including relevant proofs of program correctness and of time bounds on program runtimes. M avoids Blum's speed-up theorem by ignoring programs without correctness proof. M has broader applicability and can be faster than Levin's universal search, the fastest method for inverting functions save for a large multiplicative constant. An extension of Kolmogorov complexity and two novel natural measures of function complexity are used to show that the most efficient program computing some function f is also among the shortest programs provably computing f.
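The resource-sharing idea in the abstract can be sketched in miniature. The toy below (my own names and constants, not Hutter's; it omits M's proof enumeration entirely) just interleaves candidate programs, giving earlier candidates exponentially larger time shares, and returns whichever halts first:

```python
def interleave(candidates, max_total_steps):
    """Round-robin over candidate "programs", giving candidate i a
    time share proportional to 2**-(i+1).  Each candidate is a
    generator that yields None until it yields its final answer."""
    steps = 0
    while steps < max_total_steps:
        for i, gen in enumerate(candidates):
            # earlier (shorter/simpler) candidates get bigger bursts
            for _ in range(2 ** (len(candidates) - 1 - i)):
                steps += 1
                result = next(gen)
                if result is not None:
                    return i, result
    return None  # budget exhausted

def make_program(n, answer):
    # a toy "program" that runs for n steps before producing its answer
    def gen():
        for _ in range(n):
            yield None
        while True:
            yield answer
    return gen()

programs = [make_program(50, "slow"), make_program(5, "fast")]
print(interleave(programs, 10_000))  # (1, 'fast')
```

The guarantee is of the same flavor as the paper's: whichever candidate is fastest suffers only a constant-factor slowdown from sharing the processor, though here the constant is tiny because there are only two candidates.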
Homepage of Marcus Hutter containing Paper: The Fastest and Shortest Algorithm for All Well-Defined Problems
Also see similar work by Jürgen Schmidhuber:

MLJ 2004 abstract: We introduce a general and in a certain sense time-optimal way of solving one problem after another, efficiently searching the space of programs that compute solution candidates, including those programs that organize and manage and adapt and reuse earlier acquired knowledge. The Optimal Ordered Problem Solver OOPS draws inspiration from Universal Search designed for single problems and universal Turing machines. It spends part of the total search time for a new problem on testing programs that exploit previous solution-computing programs in computable ways. If the new problem can be solved faster by copy-editing/invoking previous code than by solving the new problem from scratch, then OOPS will find this out. If not, then at least the previous solutions will not cause much harm. We introduce an efficient, recursive, backtracking-based way of implementing OOPS on realistic computers with limited storage. Experiments illustrate how OOPS can greatly profit from metalearning or metasearching, that is, searching for faster search procedures.
Also relevant:

Gödel machines are self-referential universal problem solvers making provably optimal self-improvements.
They formalize I. J. Good's informal remarks (1965) on an "intelligence explosion" through self-improving "super-intelligences".
A Monte Carlo AIXI Approximation

"This paper introduces a principled approach for the design of a scalable general reinforcement learning agent. Our approach is based on a direct approximation of AIXI, a Bayesian optimality notion for general reinforcement learning agents. Previously, it has been unclear whether the theory of AIXI could motivate the design of practical algorithms. We answer this hitherto open question in the affirmative, by providing the first computationally feasible approximation to the AIXI agent. To develop our approximation, we introduce a new Monte-Carlo Tree Search algorithm along with an agent-specific extension to the Context Tree Weighting algorithm. Empirically, we present a set of encouraging results on a variety of stochastic and partially observable domains. We conclude by proposing a number of directions for future research."
Abstract: An algorithm M is described that solves any well-defined problem p as quickly as the fastest algorithm computing a solution to p, save for a factor of 5.....

I managed to get to "5" before my brain urgently instructed my eyes to "look anywhere but at the screen". That happened every time I tried to read that paragraph. So far I have looked at the window, a bird, the floor, a piece of dust, the back of my hand and the cat's butt. It was at the point of finding myself looking at the latter that I decided not to attempt to read that paragraph again!
Joan Baez is my cousin.  Hope you get better!
If only it weren't for those pesky "low-order additive terms".
Thanks for all the extra links, +Alexander Kruel.  This new research looks really interesting - I hope to learn all the technical details.  However, I think that, reflecting on this work, people will soon discover how little of what we do amounts to solving well-defined problems or optimizing known quantities!   It reminds me of a thought experiment I like: suppose you had a genie that could quickly answer any well-posed math question whatsoever.  What would you do with it?   It could be very helpful, but you'd still have to figure out what questions to ask, and that turns out to be the hard part.
+Gareth Rees - yes, I don't know if they've been estimated.  If a program takes a googol years before it starts efficiently solving your problem, it's not so great. 
Thanks.  That's an even more audacious paper title.  Can anyone guess what he'll publish next?
"Reconciling The Future, The Past and The Present Through The Multiverse in Two Easy Steps Using Only A Sheep and Fractal Geometry" ?
+John Baez > "...suppose you had a genie that could quickly answer any well-posed math question whatsoever.  What would you do with it?   It could be very helpful, but you'd still have to figure out what questions to ask, and that turns out to be the hard part."

One suggestion is to formalize the goal "satisfy the coherent extrapolated volition of humanity"
Holy crap this is exciting.  I wish I could understand it better.
+John Baez And is the genie going to answer in a way that you can understand? Doron Zeilberger believes that human mathematicians are only able to navigate in the shallow waters of mathematics... that is, our brains are just unable to grasp arbitrarily deep ideas. Just think that you ask for some proof, and the genie gives you 500,000 pages of what appears to you to be gibberish...
"The large constants c_p and d_p seem to spoil a direct implementation of M_p."

The basic idea is that if we start with "a", "b", "c", ... "aa", "ab", "ac", ... "ba", "bb", "bc", ... "aaa", "aab", ... ,  we will eventually find "print hello", and every other possible computer program.  It may take time (a lot of time), but in theory we'll find it if we can wait long enough.  The paper shows that if we spend some time generating more programs and some time running the best one we know about, we'll either get an answer or we'll find a new program that runs faster.

But let's face it, we won't find another program by the "brute force" method unless we give it a substantial head start.  That's what the "constant factor" is.  Unfortunately, the head start needed is exponential in the length of the program.  Even a trivial program, like adding 1 to a number, can take dozens of characters to write, so the head start would be hundreds of billions of search steps.  A more interesting program is simply not feasible.
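As a toy illustration of that enumeration (the target string stands in for the text of some tiny program):

```python
import itertools
import string

def shortlex(alphabet):
    # "a", "b", ..., "z", "aa", "ab", ...: every finite string shows up
    for length in itertools.count(1):
        for chars in itertools.product(alphabet, repeat=length):
            yield "".join(chars)

target = "hi"  # stand-in for the source text of a tiny program
for steps, s in enumerate(shortlex(string.ascii_lowercase), start=1):
    if s == target:
        break
print(steps)  # 217: all 26 one-letter strings, then 191 two-letter ones
```

The step count grows like 26**len(target), which is exactly the exponential head start described above; already at ten characters you're past 10**14 strings.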
+Daniel Lemire wrote: "And is the genie going to answer in a way that you can understand?"

The genie knows nothing of human psychology and how to explain things so humans understand them.  You have to type in precisely formulated questions - let's pretend a proof assistant can translate careful mathematical English into formal mathematics.

So, you can ask if Goldbach's conjecture is true - that's a yes-or-no question that can be settled by going through a countable infinity of cases, but let's say our genie can do that in one second.  Suppose it says "yes".  Then you can ask if it's provable from Zermelo-Fraenkel set theory.  Again, that can be settled by a countably infinite search.  It says yes.  Then you can ask it to print out the shortest proof.  Whoops! - it's too long for you to understand it. 

So, you have superpowers, but you're still very limited in your abilities unless you know a lot of math - or have a friend who does.  +Terence Tao, who is probably trying to prove Goldbach's conjecture, must know a lot of plausible lemmas from which Goldbach's conjecture would follow.  Being able to instantly tell which of these lemmas are true could speed up his work.  But that's because he knows what he's doing.

So the interesting thing is: the power to instantly know the answer to any well-posed math question does not instantly make one a good mathematician!
Mathematician here:

A couple of commenters have hit on one of the two main problems with this algorithm, namely that those "low-order terms" are almost certainly unthinkably huge (think somewhere on the order of a googol).

The second (and in my opinion worse) problem is the little caveat "ignoring programs without correctness proof."  This is an issue because, by Gödel's theorem, most theorems are unprovable (and hence most algorithms lack a correctness proof) given a well-defined set of axioms.  In other words, this program will always choose a slower (but provable) program over an exponentially faster (or even more so) program that requires additional logical axioms to prove its correctness.

So if you actually tried implementing this program, your results would be:
1) wait a googol years to find best provable algorithm
2) be beaten by someone who has a faster algorithm that can only be proven using an axiom your program didn't know about
How many people use algorithms that are fast that nobody has proved are correct?   Of course nobody has proved that most big commercial software packages are correct, and usually they have bugs.  But how about more mathematical algorithms, like algorithms for the travelling salesman problem or linear programming?  I know people use algorithms that supposedly have a high probability of getting a pretty good answer to an optimization problem.  But is there anyone using an algorithm that seems to give the right answer every time, but nobody can prove it?

Nitpick: I wouldn't say "most theorems are unprovable"; in logic a theorem is a statement that has a proof.  I'd say something like "most true statements are unprovable"... although personally I am reluctant to talk about truth if we have to make up new and possibly controversial axioms to prove something. 
Good point about "theorem" vs "true statement".  If we limit ourselves to statements that can be expressed using a countable alphabet, the number of true statements and provably true statements are both countably infinite, so in that sense they're equal.  
When I said most true statements aren't provable, the example I was thinking of was statements along the lines of "this string has Kolmogorov complexity at least K", which if I choose a binary string at random of length K+N is true with probability at least 1-1/(2^N).
This makes it easy to give examples of algorithms I can speed up exponentially by including an axiom that is "probably" true, à la Gödel's speed-up theorem.  
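For what it's worth, the counting argument behind that probability claim fits in a few lines (K and N below are arbitrary choices of mine):

```python
# At most 2**K - 1 binary strings are shorter than K bits, so at most
# that many strings can serve as descriptions of complexity < K.
# Among the 2**(K + N) strings of length K + N, the fraction that can
# possibly be compressed below K bits is therefore:
K, N = 20, 10
fraction_compressible = (2**K - 1) / 2**(K + N)
assert fraction_compressible < 2**-N
# So a random string of length K + N has Kolmogorov complexity >= K
# with probability at least 1 - 1/2**N, as claimed.
```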

As far as algorithms that people use but which aren't proved correct, good examples would include any algorithm that depends on the Riemann Hypothesis.  For example, the Miller-Rabin primality test.  Of course, we are all optimistic that Riemann is provable, but there's no reason why there shouldn't be "useful" algorithms that aren't provably correct (as opposed to the useless ones given as examples in Gödel's speed-up theorem).
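For concreteness, here is a standard Miller-Rabin sketch (my own implementation, not from any paper discussed here). The randomized version is unconditionally correct with high probability; it's Miller's deterministic variant, which checks all bases up to about 2(ln n)^2, whose correctness proof needs the generalized Riemann Hypothesis. The fixed base list below is reportedly deterministic for all n below roughly 3.3 * 10**24:

```python
SMALL_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def miller_rabin(n, bases=SMALL_PRIMES):
    if n < 2:
        return False
    for p in SMALL_PRIMES:           # trial-divide by the bases first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                  # write n - 1 as d * 2**s, d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:                  # each base is a potential "witness"
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # base a proves n composite
    return True

print(miller_rabin(2**61 - 1))  # True: a Mersenne prime
print(miller_rabin(561))        # False: a Carmichael number
```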
"On the order of a googol" is a gross underestimate. Moser's number is closer (and only for small programs).
>you can ask if Goldbach's conjecture is true - that's a yes-or-no question

You consider that a well-posed mathematical question with a Yes or No answer?
By the way, in practice I feel confident that Goldbach's conjecture is true and suspect it will be proved in this century, thanks to the rapid progress we're seeing.  I hope y'all saw this post:

where in a comment Harald Helfgott confirms that at this point, it's just a matter of buying enough computer time to complete a proof of the weak Goldbach conjecture, which says that every odd number greater than 7 is the sum of three odd primes.
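That statement is easy to spot-check by brute force for small numbers (a quick sketch; hardly evidence, but it makes the claim concrete):

```python
def primes_up_to(limit):
    # simple Sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

odd_primes = [p for p in primes_up_to(1000) if p > 2]
prime_set = set(odd_primes)

def weak_goldbach_holds(n):
    # is n a sum of three odd primes?
    return any(n - p - q in prime_set
               for p in odd_primes if p < n
               for q in odd_primes if q <= n - p)

assert all(weak_goldbach_holds(n) for n in range(9, 1000, 2))
print("weak Goldbach verified for odd n in 9..999")
```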
+John Baez  & +Daniel Lemire Having only read the comments I am now an expert on the field. Thus I am not sure how relevant what I write below is to any of this.

The way we currently practice science seems to naturally provide the interface between the human and the genie. Experimentalists constantly ask the genie all sorts of yes-and-no questions, with at least some engineering success. This is one method of finding what questions to ask. Example: do copper oxide compounds superconduct? With the answer YES, "we" can then use this new fact to do other things, based on what we need and know already (example: maglev trains). What we can't do is say why (or even figure out the best sequence in which to use the new fact to optimize development or understanding). We'd have to break the "why" question down into many "smaller" binary ones (i.e., with yes or no answers) that can sequentially build us up from where we are to an answer we can understand. Then this new fact is fed back into the system of questions to do new things, such as getting new facts and building new equipment, or a compatible set of rules giving us the sensation that we are massively cross-checking the genie's answers (even when we are yet to ask it that particular question for which we think we already have an answer). Thus the questioner and the genie create this scientific loop.

Another question experimentalists ask is "how much", in a yes-or-no form. A lot of physics is about asking how much, and uses techniques in math to count how many of some unit (related to some fuzzy concept). How much of our questioning can be put in such binary language, I don't know, but I vaguely remember glimpsing something along the lines that we cannot express all of our questions in binary (yes & no). I suppose someone will know more about this ghost memory of mine.

In this regard I don't see anything qualitatively repulsive about the description of the work. Perhaps sooner rather than later I will get the time to read the PhD thesis provided in one of +Alexander Kruel's many links, to get started and see what interest it holds for me. But no one (other than John) can read all the links (and the links' links) in John's post and (still) not be John. I hope this uninformed comment is not too far off the mark. 
+John Baez, no I didn't see that post from May!  So at least the weak Goldbach conjecture is (or has been reduced to) a well-defined mathematical question with a Yes or No answer, and there's good hope for the strong one as well.
There is an implicit limitation that some may find disappointing.  The algorithm M only works for computable problems with computable proofs.  These are countably infinite, but the number of non-computable problems is uncountable and vastly larger.  (Actually, there are non-computable problems that are considered "well-defined", so the title of the paper is slightly incorrect.)  Non-computable problems are hard to describe and reason about, and there are natural difficulties in using a computer to explore them.