Hyperoperations and huge numbers

If you repeat multiplication, you get exponentiation:

2↑4 = 2 × (2 × (2 × 2))

If you repeat exponentiation you get tetration, which gives you a 'power tower':

2↑↑4 = 2 ↑ (2 ↑ (2 ↑ 2))

This is 2 to the power 2 to the power 2 to the power 2.   If you repeat tetration, you get pentation:

2↑↑↑4 = 2 ↑↑ (2 ↑↑ (2 ↑↑ 2))

Let's see how big this is! 

2 ↑↑ 2 = 2 ↑ 2 = 2 × 2 = 4

so

2 ↑↑ (2 ↑↑ 2) = 2 ↑↑ 4 = 2 ↑ (2 ↑ (2 ↑ 2)) = 2 ↑ (2 ↑ 4) = 2 ↑ 16 = 65536

so

2 ↑↑ (2 ↑↑ (2 ↑↑ 2)) = 2 ↑↑ 65536

In short, 2↑↑↑4 is a power tower with 65536 twos in it!   

This is a staggeringly large number.   It can't be written in binary on all the atoms in the observable universe.  Its number of binary digits can't either.  The number of binary digits in its number of binary digits can't either.  And so on... for over 65,000 rounds.
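
For readers who want to experiment: here's a rough Python sketch of Knuth's up-arrows (the function name up is just an illustrative choice, not standard notation). It handles small cases like 2↑4 and 2↑↑4, but 2↑↑↑4 is hopelessly out of reach:

def up(a, n, b):
    # a ↑ⁿ b, with n arrows; n = 1 is ordinary exponentiation
    if n == 1:
        return a ** b
    if b == 1:
        return a            # a ↑ⁿ 1 = a, no matter how many arrows
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 1, 4))    # 2↑4  = 16
print(up(2, 2, 4))    # 2↑↑4 = 65536
# up(2, 3, 4) would try to build a power tower of 65536 twos -- hopeless!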

But when we hit the next operation, hexation, all hell breaks loose. 

2↑↑↑↑4 = 2 ↑↑↑ (2 ↑↑↑ (2 ↑↑↑ 2))

Let's see how big this is!   By definition,

2 ↑↑↑ 2 = 2 ↑↑ 2 = 2 ↑ 2 = 2 × 2 = 4

So, by our previous calculation:

2 ↑↑↑ (2 ↑↑↑ 2) = 2 ↑↑↑ 4 = 2 ↑↑ 65536

and then

2 ↑↑↑ (2 ↑↑↑ (2 ↑↑↑ 2)) = 2 ↑↑↑ (2 ↑↑ 65536)

Can you comprehend this number?  This is

2 ↑↑ (2 ↑↑ (2 ↑↑ (2 ↑↑ (2 ↑↑ (2 ↑↑ (2 ↑↑ (2 ↑↑ (2 ↑↑ .....

where there are 2 ↑↑ 65536 twos.   The mind boggles, but here's a good way to think of it:

2
2 ↑↑ 2
2 ↑↑ (2 ↑↑ 2)  
etc.

We start with some number, namely 2.  Then we replace that number with a bigger one: a power tower consisting of that many 2s.   Then we replace that number with a power tower consisting of that many 2s.  And we do this process 2 ↑↑ 65536 times!

The result can't be written in binary on all the atoms in the observable universe.  Its number of binary digits can't either.  The number of binary digits in its number of binary digits can't either.  And so on... for a number of rounds that can't be written in binary on all the atoms of the observable universe!
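
Here's a way to see those 'rounds' concretely, in a rough Python sketch (my illustration, using the much smaller 2↑↑5 = 2^65536, which still fits in memory). Repeatedly replace a number by its number of binary digits and count the steps: each round strips roughly one level off the power tower, so 2↑↑5 takes about 5 rounds, while 2↑↑↑4 = 2↑↑65536 would take over 65,000.

n = 2 ** 65536            # this is 2↑↑5, a 65537-bit number
rounds = 0
while n > 2:
    n = n.bit_length()    # replace n by its number of binary digits
    rounds += 1
print(rounds)             # 5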

These operations ↑, ↑↑, ↑↑↑ etc. are called hyperoperations:

http://en.wikipedia.org/wiki/Hyperoperation

and we're using Knuth's up-arrow notation to describe them:

http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation

I hope you use these ideas to impress your friends and make new enemies.  But soon I'll show you notations for vastly larger numbers.   And later I'll show you that infinity infects the finite.  The best, most systematic notations for truly enormous but still finite and computable numbers use infinite ordinals of the sort I've been discussing lately! 

I'm using the word 'computable' in a sense explained in the attached article by Scott Aaronson.  He ran some contests to see who could describe the biggest number.  If you ever do this, make sure to require that the descriptions be computable: otherwise it may be impossible to tell who won!

#bigness  
[Attachment: Scott Aaronson's essay "Who Can Name the Bigger Number?"]
 
(doing my best D.P. Gumby impression) My brain hurts!
 
Ah am struggling to keep up with it.

Need a peece a paper and a pen.

Quick, someone give me a piece of paper!

Wait! It's ok, I'll print a blank page out from the printer.
 
I'm really enjoying this series.  I'm understanding less than half of it, but enjoying it a lot. Are you working on a book?
 
I am aware that there is a connection between computability in the sense of Turing and Church, which I believe is what we're talking about here - specifically that there is no "biggest computable number" - and decidability in the sense of Gödel. Specifically, it seems like our problem can be addressed if we can prove the consistency of Peano arithmetic, because then we can use the standard proof that there is no largest natural number. Is this something being built up to be addressed, or is there a previous entry which covers the connection here that I've missed?

I ask because I remember that some time recently you were talking about Gödel incompleteness and that there is a way to formally write statements such as "This statement cannot be proven in ZFC in less than 1000 steps", and how we can make even bigger statements, so that presumably we can construct a statement such as "This statement cannot be proven in ZFC using less than BB(11111) steps" and so on and so forth. Does this not address the Berry paradox and create a paradigm that surpasses and encompasses all of the others? What am I missing?

(edited for clarification)
 
Those numbers are imaginary. They might be describable but they can not be explicitly computed in this universe. HyperLarge numbers are even more troublesome than infinities. They try to act as if they are finite but they are practically indistinguishable from infinity. They do show how small the universe is though. =D
 
I find myself wondering about the terminology. Extrapolating backwards, I suppose exponentiation would also be called triation? And multiplication would be duation, and addition would be monation. Well, I suppose addition is at the base of all this, so it makes sense. Sort of.
 
(off-topic) I like this quote by Albert Bartlett that Scott mentioned in the blog post linked to above: "The greatest shortcoming of the human race is our inability to understand the exponential function."
 
… and then make up a notation for a?b == a↑↑…<b times ↑>…↑a, and then yet another notation on top of that, and so on.
(and then define that process as inductive and make up a meta-notation and then a meta-meta-notation and so on. And then add a hypermeta notation—)
 
+Harald Hanche-Olsen wrote: "I find myself wondering about the terminology. Extrapolating backwards, I suppose exponentiation would also be called triation? And multiplication would be duation, and addition would be monation. Well, I suppose addition is at the base of all this, so it makes sense. Sort of."

Right, you've got it.   Reuben Louis Goodstein made up the term tetration by combining the words 'tetra' and 'iteration', because it's the fourth in this series:

+, ×, ↑, ↑↑, ↑↑↑, ↑↑↑↑, ...

Goodstein is the guy who made up the math underlying the game Hercules and the Hydra: the Goodstein sequence.  Tetration is also called hyper-4.

Donald Knuth, the famous computer scientist who invented TeX, is the one who invented the up-arrow notation ↑, ↑↑, ↑↑↑, ↑↑↑↑, ...  It's a bit annoying that the up-arrows only kick in after + and ×, so hyper-n is denoted with n-2 arrows... but we're stuck with that.

It's also worth noting that addition is gotten by iterating the successor operation: n+m is n with 1 added to it m times.  But the successor operation n' = n+1 is not a binary operation, so it doesn't fit comfortably in this sequence.
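
In Python, the whole ladder above addition can be sketched with one standard recursion (a textbook definition, not anything specific to this thread; the name hyper is just illustrative):

def hyper(n, a, b):
    # hyper-1 is +, hyper-2 is ×, hyper-3 is ↑, hyper-4 is ↑↑, ...
    if n == 1:
        return a + b                 # addition: the base of the ladder
    if b == 0:
        return 0 if n == 2 else 1    # a × 0 = 0, while a↑0 = a↑↑0 = ... = 1
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(2, 2, 3))   # 2 × 3 = 6
print(hyper(3, 2, 4))   # 2↑4  = 16
print(hyper(4, 2, 3))   # 2↑↑3 = 16
# Everything bottoms out in repeated addition, so even hyper(4, 2, 4),
# that is 2↑↑4 = 65536, already overflows Python's recursion limit.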
 
+Paul Boldra wrote: "I'm really enjoying this series.  I'm understanding less than half of it, but enjoying it a lot."

Thanks!  If you have questions you want answered, don't be shy to ask.  There's something macho about big numbers and big infinities, but I'd like everyone who wants to understand this stuff to be able to.  My explanations tend to be terse because Google+ likes short posts... and there's a long way to go!

"Are you working on a book?"

Maybe.  At the very least, a series of webpages.  I've been interested in big numbers and infinities for a long time.  In fact I got into mathematical logic as a kid when my dad checked out a book from the library called "From Frege to Gödel".  He thought they were names of big numbers!  It was a sourcebook of original papers on logic and I couldn't understand most of it, but I liked the cool symbols.  Turns out mathematical logic is the key to describing really big numbers and infinities, so it all paid off. 
 
+Abe Pectol wrote: "… and then make up a notation for a?b == a↑↑…<b times ↑>…↑a, and then yet another notation on top of that, and so on.  (and then define that process as inductive and make up a meta-notation and then a meta-meta-notation and so on. And then add a hypermeta notation—)"

Great!   You're getting ahead of the story line here!   We have infinitely many operations

hyper-1 = + ,
hyper-2 = ×,
hyper-3 = ↑,
hyper-4 = ↑↑,
hyper-5 = ↑↑↑, 

and then hyper-ω, which grows faster than all these, is obtained by diagonalization.  Then comes hyper-(ω+1), and so on. 

So, you've reinvented the idea of a fast-growing hierarchy:

http://en.wikipedia.org/wiki/Fast-growing_hierarchy

This is what I meant by saying "Infinity infects the finite.  The best, most systematic notations for truly enormous but still finite and computable numbers use infinite ordinals of the sort I've been discussing lately!"
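
In code, the diagonalization step is just feeding the level in along with the argument. A rough sketch (reusing the hyper recursion sketched in an earlier comment above), which is essentially how the Ackermann function works:

def hyper_omega(n):
    # diagonalize: the n-th value uses the n-th operation,
    # so this eventually outgrows every fixed hyper-k
    return hyper(n, n, n)

# hyper_omega(2) = 4, hyper_omega(3) = 27,
# hyper_omega(4) = 4↑↑4, already far too big to compute this way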
 
+Deen Abiola wrote: "Those numbers are imaginary. They might be describable but they can not be explicitly computed in this universe."

The interesting thing is that we can write these numbers quite tersely, like 10↑↑↑↑10, and do some calculations with them - though other calculations become very hard.  Math is not about 'what really is', it's about what we can consistently reason about.  Often we can represent an object efficiently in one way even though it would be hopelessly difficult in some other way.   It's a bit like how we can write π and know a lot about it even though we can't write down all its decimals.
 
For example, it's straightforward to say what 10↑↑↑↑10 is modulo smallish n (where "small" means "factorable").
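
To make that concrete, here's a rough Python sketch (my illustration, with ad hoc helper names) of the standard trick: for b ≥ log₂(n) we have a^b ≡ a^(b mod φ(n) + φ(n)) (mod n), and iterating φ reaches 1 in a few steps, so tall enough towers of 10s are constant mod n. A tower tall enough to have stabilized gives the value of 10↑↑↑↑10 mod n.

def phi(n):
    # Euler's totient by trial division -- fine for smallish n
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def tower_mod(a, height, n):
    # (a ↑↑ height) mod n; assumes the exponent overhead is huge
    # compared to n, which holds for towers of 10s
    if n == 1:
        return 0
    if height == 1:
        return a % n
    e = tower_mod(a, height - 1, phi(n))
    return pow(a, e + phi(n), n)   # e + φ(n) is a safe stand-in for the huge exponent

print(tower_mod(10, 50, 999))   # same answer for any height ≥ ~10: the tower has stabilized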
 
>But the successor operation n' = n+1 is not a binary operation, so it doesn't fit comfortably in this sequence.

And nothing can; that is, addition cannot be the iterate of a more primitive binary operation: if it were, we'd need n + 1 = n (a single copy of n), which is false, and n + 0 would have to be constant, which it isn't.
 
Are recursive applications of the Ackermann function to its own output still within the realm of the computable?
 
+Jeremy Kent - Yes!  They just take an insanely long time to compute.  They can't be computed 'in practice', but they could be computed by an imaginary computer with sufficiently high speed and enough memory.  The busy beaver function BB, on the other hand, is not computable: there's no algorithm for evaluating it.  So, using it counts as 'unfair' in games where you have to name the biggest number or fastest-growing function.  The main reason is that once we get into the uncomputable, it's often impossible to tell who won the game!  But in the realm of computable functions, the game is quite fascinating, because 'infinity infects the finite'.  I'll get into that more in a while.
 
Is there an "analog" to a Turing machine that uses a continuous range of states?  Would that machine still have a BB number?

Separate question: if BB(n) is the smallest non-computable number for a given machine, is BB(n) - 1 therefore computable for that machine?  
 
+Jeremy Kent I am not an expert, but people who work with the complexity of numerical methods typically treat basic arithmetic on real numbers (addition, multiplication, division) as primitive operations. I don't know if they have models like Turing machines which can read and write real numbers on the tape, store a fixed number of real numbers in memory, and do basic arithmetic on them, but it would almost surprise me if they don't.
In the context of this series, however, it might be more interesting to let them work with countable ordinals.
 
+Jeremy Kent wrote: "Separate question: if BB(n) is the smallest non-computable number for a given machine, is BB(n) - 1 therefore computable for that machine?"

It's not true that you can't get that machine to print out the number BB(n).  You can write a program to get the machine to print out 1, you can write a program to get the machine to print out 2, etc.... so you can certainly write a program to get the machine to print out BB(n) for any specific value of n.  You may not, however, know which program that is!

What you definitely can't do is write a program which, when you enter an arbitrary number n, makes the machine print out BB(n). 

In other words, it's the function BB that's not computable.  Any individual natural number is computable, so we don't even bother to talk about computability for individual natural numbers.
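
In Python terms, the distinction looks like this (a sketch; BB(2) = 4 uses the known value for 2-state, 2-symbol machines):

# For any particular n, some one-line program prints BB(n) -- we just
# may not know which one.  For n = 2 it happens to be known:
print(4)     # outputs BB(2)

# What cannot exist is a single correct implementation of:
def BB(n):
    ...      # no algorithm can fill in this body for all n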
 
Ah, I'd understood the explanation to be that any given machine with n states had its own BB number it couldn't reach, and BB(n) just happened to be a function that described that for all machines.
 
Do you by any chance have a link to your paper that doesn't have the last line of every page blanked out with a url box?
 
I also really want a version of the articles in some ebookable form. I've been too busy to absorb them all, and g+ doesn't make it easy to cache content offline.
 
+Sheila Miguez - thanks!  I'll definitely at least put them on my blog, and then on my website in HTML, slowly polishing them at each step, and maybe turn them into a real book someday. 

I'll announce any progress on that project here.  But I also intend to do this with my series of posts "Symmetry and the Fourth Dimension", and I'm moving rather slowly with that.  It's much easier to start books than finish them...
 
There's a sequence of symmetric operations including + and * that has (as one goes "lower") max as a limiting case.  The discrete version of # is called "Diffie-Hellman-Merkle key exchange" and was one of the first efficient public-key cryptosystems.
...
a ### b = exp(ln(a) ## ln(b))
a ## b = exp(ln(a) # ln(b))
a # b = exp(ln(a) * ln(b))
a * b = exp(ln(a) + ln(b))
a + b = ln(exp(a) * exp(b))
a @ b = ln(exp(a) + exp(b))
a @@ b = ln(exp(a) @ exp(b))
...
a max b
 
Another nice thing about that sequence is that any operation in the sequence distributes over all the operations below it and max, being the limiting case, distributes over itself.
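
These are easy to play with numerically. A quick Python check (my own illustration; hash1 and mul are just ad hoc names) that # distributes over * the same way * distributes over +:

from math import exp, log

def hash1(a, b):
    return exp(log(a) * log(b))    # a # b

def mul(a, b):
    return exp(log(a) + log(b))    # a * b, written one rung down the ladder

a, b, c = 2.0, 3.0, 5.0
print(hash1(a, mul(b, c)))             # a # (b*c)
print(mul(hash1(a, b), hash1(a, c)))   # (a#b) * (a#c) -- same number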
 
Hey, that's a nice property.... I think that's a bit like the kind of structure Jim Dolan was pondering: a sequence of operations each of which distributes over another.  These operations of yours aren't all associative, though, are they?   If they were, I'd be really interested.
 
expⁿ(lnⁿ(expⁿ(lnⁿ(a) + lnⁿ(b)))+lnⁿ(c))
= expⁿ(lnⁿ(a) + lnⁿ(b) + lnⁿ(c))
so yeah, I think they're all associative, too.
 
The trouble with the operations above multiplication is that they're not defined for all reals.  The operation I've denoted hash (#), for example, doesn't work on negative numbers; double hash (##) doesn't work on numbers less than 1 (the second logarithm is undefined), and in general the nth hash doesn't work on numbers less than expⁿ⁻²(1), exp iterated n-2 times, because their nth log is undefined.
 
Wow, so they're all associative!?  Do they all have unit elements too?  Then we'd get a sequence of monoids, each distributing over the previous one.  That's literally true only if they're all defined on some domain that contains all their units... or at least a finite list of them.   But the question of their domain of definition can perhaps be finessed somehow.  For example, we could work in the category of sets and partially defined maps.
 
The unit for the nth hash is expⁿ(1), that is, exp applied n times to 1 (e for #, e^e for ##, and so on):
expⁿ(1) #ⁿ x = expⁿ(lnⁿ(expⁿ(1)) * lnⁿ(x)) = expⁿ(1 * lnⁿ(x)) = x

The unit for the nth @, if it exists at all, is minus infinity:
-∞  @ⁿ x = lnⁿ(expⁿ(-∞) + expⁿ(x)) = lnⁿ(0 + expⁿ(x)) = x
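
A quick numerical check of the units, and of associativity, continuing the Python illustration above (with -50 standing in for -∞):

from math import exp, log, e

def hash1(a, b):
    return exp(log(a) * log(b))    # a # b

def at1(a, b):
    return log(exp(a) + exp(b))    # a @ b

print(hash1(e, 7.0))                 # ≈ 7.0: e is the unit for #
print(at1(-50.0, 7.0))               # ≈ 7.0: the unit for @ is -∞
print(hash1(hash1(2.0, 3.0), 5.0))   # both orders give exp(ln 2 · ln 3 · ln 5),
print(hash1(2.0, hash1(3.0, 5.0)))   # so # is associative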
 
Cool.   By the way, I love the self-referential link in blue in this comment! 
 
:D  Yay for automatic markup!
 
Ok, dumb question here. 

Repeated addition gives us multiplication, and repeated multiplication gives us exponentiation ↑ and then repeated exponentiation gives us tetration ↑↑ and then even higher repeated functions ↑↑↑, ↑↑↑↑, ↑↑↑↑↑, etc. 

But what about going in the opposite direction? 
For instance, logs change exponents to multiplication.
And they also change multiplication to addition. 
Is there any function "below" addition? 
 
+David Foster +Mike Stay's post with the # and @ operations has operations below addition (his @ operations), but that belongs to a different sequence (since the # operations are not the ↑ operations).  There can be no operation below addition that works in this sequence, for the reasons in my comment in this thread.  In particular, if ⊙ is repeated ⊡ in this sense, then a ⊙ 1 = a, but a + 1 ≠ a.