Big numbers in knot theory

A link is a bunch of knots, possibly entangled with each other. How many steps does it take to get from one picture of a link to another picture of the same link? In 2011, two mathematicians gave this upper bound:

2 ↑↑ ((10 ↑ 1,000,000) ↑ n)

where n is the total number of crossings in both pictures.

In other words: take 2 to the power 2 to the power 2 to the power 2... where the number of 2's in this "tower of powers" is (10 to the millionth power) raised to the nth power — that is, 10 to the power of a million times n!
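If you want to see how fast these "towers of powers" grow, here's a little Python sketch of the double up-arrow operation. (Of course, only tiny heights are computable — the tower in the actual bound has height 10 to the power of a million times n.)

```python
def power_tower(base, height):
    """Compute base ↑↑ height: a tower of `height` copies of base.

    Only feasible for tiny heights -- the values explode faster
    than any ordinary exponential.
    """
    result = 1
    for _ in range(height):
        result = base ** result
    return result

# 2↑↑1 = 2, 2↑↑2 = 2^2 = 4, 2↑↑3 = 2^4 = 16, 2↑↑4 = 2^16 = 65536
print([power_tower(2, h) for h in range(1, 5)])  # → [2, 4, 16, 65536]
```

Already 2 ↑↑ 5 has 19,729 digits, so a tower of height 10 to the millionth is unimaginably far beyond computation.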

For more, read my blog article:

Some nonstandard integers are complex numbers

Let me explain - my headline is a bit vague. Peano arithmetic is a nice set of axioms describing the natural numbers. But thanks to Gödel's incompleteness theorem, these axioms can't completely nail down the structure of the natural numbers. So, there are lots of different "models" of Peano arithmetic.

These are often called nonstandard models. If you take a model of Peano arithmetic - say, your favorite "standard" model - you can get other models by throwing in extra natural numbers, larger than all the standard ones. These nonstandard models can be countable or uncountable.

Starting with any of these models you can define integers in the usual way (as differences of natural numbers), and then rational numbers (as ratios of integers). So, there are lots of nonstandard versions of the rational numbers. Any one of these will be a field: you can add, subtract, multiply and divide your nonstandard rationals, in ways that obey all the usual basic rules.

Now for the cool part: it turns out that if your nonstandard model of the natural numbers is small enough, your field of nonstandard rational numbers can be found somewhere in the field of complex numbers! In other words, it's a subfield of the complex numbers: a subset that's closed under addition, subtraction, multiplication and division by things that aren't zero.

This is counterintuitive at first, because we tend to think of nonstandard models of Peano arithmetic as spooky and elusive things, while we tend to think of the complex numbers as well-understood.

However, the field of complex numbers is actually very large, and it has room for many spooky and elusive things inside it. This is well-known to experts, and we're just seeing more evidence of that.

I said that all this works if your nonstandard model of the natural numbers is small enough. But what is "small enough"? Just the obvious thing: your nonstandard model needs to have a cardinality smaller than that of the complex numbers. So if it's countable, that's definitely small enough.

All of this is just a pop treatment of +Joel David Hamkins' post below. I was trying to take what he said and retell it for people who don't understand terms like "elementary extension", "algebraically closed", "categorical" or "sub-semiring". It really deserves to be well-known!

This fact was recently noticed by Alfred Dolich at a pub after a logic seminar at the City University of New York. The proof is very easy if you know this result: any field of characteristic zero whose cardinality is smaller than that of the continuum is isomorphic to some subfield of the complex numbers. So, unsurprisingly, this fact turned out to have been repeatedly discovered before.

The result I just mentioned follows from this: any two algebraically closed fields of characteristic zero that have the same uncountable cardinality must be isomorphic. So, say someone hands you a field F of characteristic zero whose cardinality is smaller than that of the continuum. You can take its algebraic closure by throwing in roots to all polynomials, and its cardinality won't get bigger. Then you can throw in even more elements to get a field whose cardinality is that of the continuum. The resulting field must be isomorphic to the complex numbers. So, F is isomorphic to a subfield of the complex numbers.
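In symbols, the chain of embeddings just described looks like this — a sketch, suppressing the set-theoretic bookkeeping:

```latex
% F: a field of characteristic 0 with |F| < 2^{\aleph_0}
% Step 1: taking the algebraic closure doesn't raise the cardinality:
F \hookrightarrow \overline{F}, \qquad |\overline{F}| \le \max(|F|, \aleph_0) < 2^{\aleph_0}
% Step 2: adjoin transcendentals and take the closure again, reaching
% an algebraically closed field of the cardinality of the continuum:
\overline{F} \hookrightarrow K, \qquad |K| = 2^{\aleph_0}
% Step 3: two algebraically closed fields of characteristic 0 with the
% same uncountable cardinality are isomorphic (Steinitz), so:
K \cong \mathbb{C}, \qquad \text{hence} \quad F \hookrightarrow \mathbb{C}
```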

For more on this stuff, written at about the same level, try this post of mine:

I should admit that my use of the word "are" in the headline makes me feel dirty. By mathematicians' standards, this was an immoral piece of clickbait. I should have said "can be seen as": if there's any way to embed our nonstandard rationals into the complex numbers, there will be many ways.


This is the smallest known number that contains the digits from 1 to 6 in all possible orders. So, for example, if I make up something like 641523 or 162354, you can find it in here.

But here's the interesting part. This number has 872 digits.

What's so interesting about that? Well, the smallest number that contains the digits from 1 to 2 in all possible orders has

1! + 2! = 3

digits. The smallest number that contains the digits from 1 to 3 in all possible orders has

1! + 2! + 3! = 9

digits. The smallest number that contains the digits from 1 to 4 in all possible orders has

1! + 2! + 3! + 4! = 33

digits. The smallest number that contains the digits from 1 to 5 in all possible orders has

1! + 2! + 3! + 4! + 5! = 153

digits. So if you were very, very, very clever, you might guess that the smallest number that contains the digits from 1 to 6 in all possible orders has

1! + 2! + 3! + 4! + 5! + 6! = 873

digits. But it doesn't. You can get away with just 872. It might be possible to do even better. Nobody knows!
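You can check the pattern yourself in a few lines of Python. The 9-digit string below is the standard minimal example for n = 3 — an illustration, not the 872-digit record holder:

```python
from itertools import permutations
from math import factorial

def contains_all_orders(s, n):
    """Does the string s contain every ordering of the digits 1..n as a substring?"""
    return all(''.join(p) in s for p in permutations('123456789'[:n]))

# conjectured minimal lengths: 1! + 2! + ... + n!
lengths = [sum(factorial(k) for k in range(1, n + 1)) for n in range(1, 7)]
print(lengths)  # → [1, 3, 9, 33, 153, 873]

# the standard 9-digit string for n = 3: it contains all 6 orderings of 1,2,3
assert contains_all_orders('123121321', 3)
```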

Thanks to Robin Houston for pointing this out:

Infinite chess

What's chess like on an infinite chessboard?

If you're a chess player, you might just try and see. But if you're also a mathematician, you can ask some tough questions... and try to solve them.

For example: if both sides have finitely many pieces, can you write a computer program that decides whether White can eventually checkmate Black, or vice versa?

As far as I can tell, nobody knows!

However, you can write a program where you can input any number n and the positions of the pieces, and it will decide whether White can checkmate Black in n moves. This was proved by Dan Brumleve, +Joel David Hamkins and Philipp Schlicht in 2012.

Weirder things happen if you allow infinitely many pieces. In the example shown here, White seems to have a crushing advantage — until you actually look at the pieces. In fact White is almost completely stuck — she can't move any pieces except her king! Black, too, can only move his king. If it's Black's move, he should start to run down the long, branching corridor. White should then try to chase the Black king and trap it in a mate.

Now you can do a strange trick. You can find a computer program that tells you where to put White's pieces, to create a long branching corridor such that no computer program can find an infinite path down this corridor. So, if both White and Black are computer programs, and Black moves first, Black is doomed to get trapped in a checkmate. But if Black can play in an uncomputable way, he can find a way down the corridor and never lose.

As +Joel David Hamkins explained:

Another interesting thing we noticed is that there is a computable position in infinite chess, such that in the category of computable play, it is a win for White — White has a computable strategy defeating any computable strategy of Black — but in the category of arbitrary play, both players have a drawing strategy. Thus, our judgment of whether a position is a win or a draw depends on whether we insist that players play according to a deterministic computable procedure or not.

The basic idea for this is to have a computable tree with no computable infinite branch. When Black plays computably, he will inevitably be trapped in a dead-end.

For more, read these papers:

Dan Brumleve, Joel David Hamkins, Philipp Schlicht, The mate-in-n problem of infinite chess is decidable,

C. D. A. Evans and Joel David Hamkins, Transfinite game values in infinite chess,

A lot of these puzzles stem from a question by Richard Stanley on MathOverflow, here:

Decidability of chess on an infinite board,

Also see the discussion here:

Checkmate in omega moves?,

But Details Don't Matter

I can imagine a nice Onion article that starts with this headline and then goes on, the way Onion articles do, repeating and amplifying this joke by quoting various mathematicians who say that this discovery is incredibly important but the details aren't worth explaining.

The article below is a bit like that. It also seems to be confusing people. I've been running around trying to straighten things out. Let me try here.

It says mathematicians have just proved two infinities called p and t are equal. Then it goes into a long review of basics, like "how can one infinity be bigger than another?" and "is there an infinite set bigger than the set of integers and smaller than the set of real numbers?" This is great stuff, but it's old stuff. The first question was answered by Cantor. The second question was asked by Cantor. Gödel and Cohen answered it like this: you'll never know, you pathetic humans - and we can prove you'll never know! (Roughly speaking.)

Unfortunately, because most people have the attention span of a small bug when it comes to math, a lot of them quit around here and conclude that p must be the number of integers and t must be the number of real numbers... or something like that.

Or, they conclude that mathematicians have finally answered Cantor's question... showing up Gödel and Cohen for the arrogant bastards they were.

No, no, no. The infinities p and t are something else. Cantor's question is just as unanswerable as it always was.

So what are p and t?

If you read down far enough in the article, it says a few things about this:

Some problems remained, though, including a question from the 1940s about whether p is equal to t. Both p and t are orders of infinity that quantify the minimum size of collections of subsets of the natural numbers in precise (and seemingly unique) ways.

The details of the two sizes don’t much matter.

I really dislike this. Quanta is one of the very best magazines around when it comes to explaining math. They have very high standards. So I won't pull my punches here:

The details do matter! It's math, for god's sake! In math, the details matter!

What the author means is that:

The details matter, but if I explained them your eyeballs would fall out, so I'm not gonna.

If they just said that, preferably very early on, I'd like this article a lot more.

In fact the Quanta article does make a stab at explaining p and t. But it's in a bunch of text to the right of the main article, so you know it's your own fault if you read it and your eyeballs fall out... you've voided the warranty! It says this:

Briefly, p is the minimum size of a collection of infinite sets of the natural numbers that have a “strong finite intersection property” and no “pseudointersection,” which means the subsets overlap each other in a particular way; t is called the “tower number” and is the minimum size of a collection of subsets of the natural numbers that is ordered in a way called “reverse almost inclusion” and has no pseudointersection.

If you feel bad for not understanding this, don't. I'm a mathematician and I don't understand it either. The reason is that it's not an explanation.

Why not? Because "in a particular way" doesn't mean anything. Also, "in a way called reverse almost inclusion" doesn't mean anything unless you already know what "reverse almost inclusion" means. On top of that, they don't say what the "strong finite intersection property" is.

So this is a non-explanation. Perhaps that's why they say "Briefly" at the beginning. "It would take too long to explain this stuff, so we'll briefly not explain it."

It's like saying this:

Briefly, here is how you make shrimp jambalaya. You take shrimp and other ingredients and combine them in a particular way. Then you perform an activity called "jambalayafication".

Now, perhaps I'm being too grumpy about this half-hearted attempt at explanation. In fact I definitely am — I'm really getting into it, unleashing my inner curmudgeon, which I usually keep chained up in the cellar. I'm gonna regret saying all this, I can feel it already. It's like when you drink one beer too many, and you know, as you're taking the first sip of that one beer too many, that you'll regret it the next day. So I'll temper my remarks a bit now: this non-explanation does at least let the reader see that something is going on with infinite sets of natural numbers, and it's something fairly technical. With just a few changes this article could have been much better.

Scientific American is worse: they quote the article from Quanta magazine, and just leave out this non-explanation. So all the reader learns is that "the details don't much matter".

You can see the definition of p and t here:

• Maryanthe Malliaris and Saharon Shelah, General topology meets model theory, on p and t, Proceedings of the National Academy of Sciences, available for free at

Do you want me to explain them? Maybe it really doesn't matter. But I would be glad to give it a try.
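In fact, here's my own rough attempt, compressed into symbols. The paraphrase is mine, following the Malliaris–Shelah paper, not Quanta:

```latex
% A family F of infinite subsets of N has the strong finite intersection
% property (SFIP) if every finite subfamily has infinite intersection.
% A pseudointersection of F is an infinite set A that is almost contained
% in every B in F: A \setminus B is finite (written A \subseteq^* B).
\mathfrak{p} = \min \{\, |F| : F \text{ has the SFIP but no pseudointersection} \,\}
% A tower is a family totally ordered by \supseteq^* ("reverse almost
% inclusion"); t is the least size of a tower with no pseudointersection:
\mathfrak{t} = \min \{\, |T| : T \text{ is a tower with no pseudointersection} \,\}
```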

Easy as ABC? Not quite!

A brilliant mathematician named Shinichi Mochizuki claims to have proved the famous "abc conjecture" in number theory. That's great! There's just one problem: his proof is about 500 pages long, and almost nobody understands it, so mathematicians can't tell if it's correct.

Luckily another mathematician named Go Yamashita has just written a summary of the proof. That's great! There's just one problem: it's 294 pages long, and it looks very hard to understand.

I'm no expert on number theory, so my opinion doesn't really matter. What's hard for me to understand may be easy for an expert!

But it disturbs me that this new paper contains many theorems whose statements are over a page long... with the proof being just "Follows from the definitions."

Of course, every true theorem follows from the definitions. But the proof usually says how.

It's common to omit detailed proofs when one is summarizing someone else's work. But even a sketchy argument would help us understand what's going on.

This is part of a strange pattern surrounding Mochizuki's work. There was a conference in Oxford in 2015 aimed at helping expert number theorists understand it. Many of them found it frustrating. Brian Conrad wrote:

I don’t understand what caused the communication barrier that made it so difficult to answer questions in the final 2 days in a more illuminating manner. Certainly many of us had not read much in the papers before the meeting, but this does not explain the communication difficulties. Every time I would finally understand (as happened several times during the week) the intent of certain analogies or vague phrases that had previously mystified me (e.g., “dismantling scheme theory”), I still couldn’t see why those analogies and vague phrases were considered to be illuminating as written without being supplemented by more elaboration on the relevance to the context of the mathematical work.

At multiple times during the workshop we were shown lists of how many hours were invested by those who have already learned the theory and for how long person A has lectured on it to persons B and C. Such information shows admirable devotion and effort by those involved, but it is irrelevant to the evaluation and learning of mathematics. All of the arithmetic geometry experts in the audience have devoted countless hours to the study of difficult mathematical subjects, and I do not believe that any of us were ever guided or inspired by knowledge of hour-counts such as that. Nobody is convinced of the correctness of a proof by knowing how many hours have been devoted to explaining it to others; they are convinced by the force of ideas, not by the passage of time.

It's all very strange. Maybe Mochizuki is just a lot smarter than us, and we're like dogs trying to learn calculus. Experts say he did a lot of brilliant work before his proof of the abc conjecture, so this is possible.

But, speaking as one dog to another, let me tell you what the abc conjecture says. It's about this equation:

a + b = c

Looks simple, right? Here a, b and c are positive integers that are relatively prime: they have no common factors except 1. If we let d be the product of the distinct prime factors of abc, the conjecture says that d is usually not much smaller than c.

More precisely, it says that if p > 1, there are only finitely many choices of relatively prime a,b,c with a + b = c and

d^p < c
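To make the inequality concrete, here's a Python sketch that computes d — the "radical" of abc — by trial division. I'm trying it on the famous high-quality triple 2 + 3¹⁰·109 = 23⁵, a standard example not mentioned above:

```python
from math import gcd, log

def radical(n):
    """Product of the distinct prime factors of n, by trial division."""
    r, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            r *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        r *= n  # leftover prime factor
    return r

a, b = 2, 3**10 * 109
c = a + b
assert c == 23**5 and gcd(a, b) == 1
d = radical(a * b * c)
print(d, log(c) / log(d))  # d = 15042, much smaller than c: d^p < c for p up to about 1.63
```

Triples where d^p < c for some p noticeably bigger than 1 are rare — that rarity is exactly what the conjecture asserts.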

It looks obscure when you first see it. It's famous because it has tons of consequences! It's equivalent to Szpiro's Conjecture and it implies the Fermat–Catalan conjecture, the Thue–Siegel–Roth theorem, the Mordell conjecture, Vojta's conjecture (in dimension 1), the Erdős–Woods conjecture (except perhaps for finitely many counterexamples)... blah blah blah... etcetera etcetera.

Let me just tell you the Fermat–Catalan conjecture, to give you a taste of this stuff. In fact I'll just tell you one special case of that conjecture: there are at most finitely many solutions of

x^3 + y^4 = z^7

where x,y,z are relatively prime positive integers. The numbers 3,4,7 aren't very special - they could be lots of other things. But the Fermat–Catalan conjecture has some fine print in it that rules out certain choices of these exponents. In fact, if we rule out those exponents and also certain silly choices of x,y,z, it says there are only finitely many solutions even if we let the exponents vary! Here's a complete list of known solutions:

1^m + 2^3 = 3^2
2^5 + 7^2 = 3^4
13^2 + 7^3 = 2^9
2^7 + 17^3 = 71^2
3^5 + 11^4 = 122^2
33^8 + 1549034^2 = 15613^3
1414^3 + 2213459^2 = 65^7
9262^3 + 15312283^2 = 113^7
17^7 + 76271^3 = 21063928^2
43^8 + 96222^3 = 30042907^2

The first one is weird because m can be anything: we need some fine print to say this doesn't count as infinitely many solutions. This one is a story in itself: it's known that 2^3 = 8 and 3^2 = 9 are the only nontrivial powers of positive integers that differ by 1. This was Catalan's conjecture, and it was proved in 2002.
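The rest of the list is easy to check in a few lines of Python — along with the coprimality, and the exponent condition 1/p + 1/q + 1/r < 1 that's part of the conjecture's fine print:

```python
from math import gcd

# (x, p, y, q, z, r) with x^p + y^q = z^r, from the list above
solutions = [
    (2, 5, 7, 2, 3, 4),
    (13, 2, 7, 3, 2, 9),
    (2, 7, 17, 3, 71, 2),
    (3, 5, 11, 4, 122, 2),
    (33, 8, 1549034, 2, 15613, 3),
    (1414, 3, 2213459, 2, 65, 7),
    (9262, 3, 15312283, 2, 113, 7),
    (17, 7, 76271, 3, 21063928, 2),
    (43, 8, 96222, 3, 30042907, 2),
]
for x, p, y, q, z, r in solutions:
    assert x**p + y**q == z**r       # the equation holds
    assert gcd(x, y) == 1            # relatively prime
    assert 1/p + 1/q + 1/r < 1       # the exponent condition in the fine print
print("all", len(solutions), "solutions check out")
```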

It's a long way from here to the very first paragraph in the summary at the start of Yamashita's paper:

By combining a relative anabelian result (relative Grothendieck Conjecture over sub-p-adic fields (Theorem B.1)) and "hidden endomorphism" diagram (EllCusp) (resp. "hidden endomorphism" diagram (BelyiCusp)), we show absolute anabelian results: the elliptic cuspidalisation (Theorem 3.7) (resp. Belyi cuspidalisation (Theorem 3.8)). By using Belyi cuspidalisations, we obtain an absolute mono-anabelian reconstruction of the NF-portion of the base field and the function field (resp. the base field) of hyperbolic curves of strictly Belyi type over sub-p-adic fields (Theorem 3.17) (resp. over mixed characteristic local fields (Corollary 3.19)). This gives us the philosophy of arithmetical holomorphicity and mono-analyticity (Section 3.5), and the theory of Kummer isomorphism from Frobenius-like objects to etale-like objects (cf. Remark 3.19.2).

And it's a long way from this – which still sounds sorta like stuff I hear
mathematicians say – to the scary theorems that crawl out of their caves around page 200!

Check out Yamashita's paper and see what I mean:

You can read Brian Conrad's story of the Oxford conference here:

You can learn more about the abc conjecture here:

And you can learn more about Mochizuki here:

He is the leader of and the main contributor to one of the major parts of modern number theory: anabelian geometry. His contributions include his famous solution of the Grothendieck conjecture in anabelian geometry about hyperbolic curves over number fields. He initiated and developed several other fundamental developments: absolute anabelian geometry, mono-anabelian geometry, and combinatorial anabelian geometry. Among other theories, Mochizuki introduced and developed Hodge–Arakelov theory, p-adic Teichmüller theory, the theory of Frobenioids, and the etale theta-function theory.

Just ask Cleo

My real name is Cleo, I'm female. I have a medical condition that makes it very difficult for me to engage in conversations, or post long answers, sorry for that. I like math and do my best to be useful at this site, although I realize my answers might be not useful for everyone.

There's a website called Math StackExchange where people ask and answer questions. When hard integrals come up, Cleo often does them - with no explanation! She has a lot of fans now.

The integral here is a good example. The analogous integrals with ln²(1+x) or just ln(1+x) in place of ln³(1+x) were already known. The answers involve the third Riemann zeta value:

ζ(3) = 1/1³ + 1/2³ + 1/3³ + 1/4³ + ...

They also involve the fourth polylogarithm function:

Li₄(x) = x + x²/2⁴ + x³/3⁴ + ...

Cleo found that the integral including ln³(1+x) can be done in a similar way - but it's much more complicated. She didn't explain her answer... but someone checked it with a computer and showed it was right to 1000 decimal places. Then someone gave a proof.

The number

ζ(3) = 1.202056903159594285399738161511449990764986292...

is famous because it was proved to be irrational only after a lot of struggle. Apéry found a proof in 1979. Even now, nobody is sure that the similar numbers ζ(5), ζ(7), ζ(9)... are irrational, though most of us believe it. The numbers ζ(2), ζ(4), ζ(6)... are much easier to handle. Euler figured out formulas for them involving powers of pi, and they're all irrational.
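Euler's formulas are easy to check numerically — here's a crude sketch using plain partial sums, accurate only up to the cutoff:

```python
from math import pi

def zeta(s, terms=200_000):
    """Crude partial sum of the Riemann zeta function at s > 1."""
    return sum(1 / n**s for n in range(1, terms + 1))

print(zeta(2), pi**2 / 6)    # Euler: ζ(2) = π²/6
print(zeta(4), pi**4 / 90)   # Euler: ζ(4) = π⁴/90
print(zeta(3))               # ≈ 1.2020569..., Apéry's constant
```

The partial sums converge slowly — the tail after N terms of ζ(2) is about 1/N — which is one reason serious digit-hunters like Broadhurst use much cleverer series.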

But here's a wonderful bit of progress: in 2001, Wadim Zudilin proved that at least one of the numbers ζ(5), ζ(7), ζ(9), and ζ(11) must be irrational. Sometimes we can only snatch tiny crumbs of knowledge from the math gods, but they're still precious.

For Cleo's posts, go here:

For more on ζ(3), go here:'s_constant

This number shows up in some physics problems, like computing the magnetic field produced by an electron! And that's just the tip of an iceberg: there are deep connections between Feynman diagrams, the numbers ζ(n), and mysterious mathematical entities glimpsed by Grothendieck, called 'motives'. Very roughly, a motive is what's left of a space if all you care about are the results of integrals over surfaces in this space.

The world record for computing digits of ζ(3) is currently held by Dipanjan Nag: in 2015 he computed 400,000,000,000 digits. But here's something cooler: David Broadhurst, who works on Feynman diagrams and numbers like ζ(n), has shown that there's a linear-time algorithm to compute the nth binary digit of ζ(3):

• David Broadhurst, Polylogarithmic ladders, hypergeometric series and the ten millionth digits of ζ(3) and ζ(5), available at

He exploits how Riemann zeta values ζ(n) are connected to polylogarithms... it's easy to see that

Liₙ(1) = ζ(n)

but at a deeper level this connection involves motives. For more on polylogarithms, go here:

Thanks to +David Roberts for pointing out Cleo's posts on Math StackExchange!

The hot inner core of the mathematical universe

Set theory starts out as a very simple way of organizing our thoughts — something every student should learn. But it gets more tricky when we start pondering infinite sets. And when we start pondering the universe — the collection of all sets — it gets a lot harder. Mathematicians have learned that there are obstacles to fully understanding the universe.

The collection of all sets can't be a set — Bertrand Russell and other logicians discovered this over a century ago. But more importantly, Gödel's theorem puts limits on how well any axioms can pin down the properties of the universe. Most mathematicians like to use the Zermelo-Fraenkel axioms together with the axiom of choice. But there are many questions left unsettled by these axioms.

Knowing this, you might give up on trying to fully understand the universe. That's actually what most mathematicians do. Frankly, the questions left unsettled by the ZFC axioms don't seem very urgent to most of us!

But set theorists don't give up. They've developed a lot of fascinating ways to make progress despite the obstacles.

In the 1960s, Paul Cohen introduced forcing. This is a way to make the universe larger, by making up a bunch of new sets, without violating the axioms you're using.

If I think the universe is U, you can use forcing to say "fine, but it's equally consistent to assume the universe is some larger collection V". Cohen used this to show the axiom of choice couldn't be proved from the other axioms in ZFC. Given a universe U where the Zermelo-Fraenkel axioms hold, he used forcing to build a bigger universe V where those axioms still hold, but the axiom of choice does not!

As an undergrad, I gave up my studies of set theory before I learned forcing. It was too hard to understand, and probably too badly explained: I don't think anyone even said what I just told you! I moved on to other things - there's a lot of fun stuff to learn. But for modern set theorists, forcing is utterly basic.

So what's new?

One new thing is set-theoretic geology. In this approach to set theory, instead of making the universe larger, you make it smaller. You try to 'dig down' and find the smallest possible universe!

So, starting with some universe V, we look for a smaller universe U that can give rise to V by forcing. If this is true, we call U a ground for V.

There can be lots of grounds for a universe V. This raises a big question: if we have two grounds for V, is there a ground that's contained in both?

In 2015, Toshimichi Usuba showed this is true! In fact he showed that for any set of grounds of V, there's a ground contained in all of these.

This raises another big question: is there a smallest ground, a ground contained in all other grounds? If so, this is called the bedrock of our universe V.

Usuba showed that the bedrock exists if a certain kind of infinite number exists! There are different sizes of infinity, and this particular kind is called 'hyper-huge'. It's so huge that it's not even explained in the Wikipedia article on huge cardinals. So, I can't explain it to you, or even to myself.

But still, I think I get the basic idea: if a large enough infinity exists, digging down will eventually take us all the way down to the bedrock of the universe.

Naively, I tend to favor small universes. So, the bedrock appeals to me. However, you need a big universe to have large infinities like 'hyper-huge cardinals'. So, my minimalist philosophy runs into a problem, because your universe needs to contain big infinities for you to 'have time' to dig deep enough to hit bedrock!

Is this a paradox? Certainly not in the literal sense of a logical contradiction. But how about in the sense of something bizarre that makes no sense?

Probably not. There's a way to take the universe and divide it into 'levels', called the von Neumann hierarchy. If you assert the existence of large cardinals, you're making the universe 'taller' — you're adding extra levels. But if you stick in extra sets by forcing, you might be making the universe 'wider' — that is, adding more sets at existing levels. So, you may need a hyper-huge cardinal to have enough time to chip away at the stuff in all these levels until you hit bedrock.

This is just my guess; I'm no expert. For more information from an actual expert try the blog article I'm linking to, by Joel David Hamkins.

He talks about a concept called the 'mantle', without explaining it. But he explains it in a comment to his post: the mantle of the universe is the intersection of all grounds. If there's a hyper-huge cardinal, this must be the bedrock. If not, other things can happen.

Insanely large Rubik's cube

Unlike the stereotype of a mathematician, I don't care much about Rubik's cubes. But I do care about the truth. So when the University of Michigan loudly announced THE WORLD'S LARGEST FREESTANDING RUBIK'S CUBE:

I had to feel a bit sorry for Tony Fisher here, who built the largest Rubik's cube, which he stores in his back-yard garage:

It's not clear what "free-standing" means, but the University of Michigan report clarifies things near the end:

A giant Rubik’s Cube newly installed on the University of Michigan’s North Campus is believed to be the world’s largest hand-solvable, stationary version of the famous puzzle.

Since it was invented in 1974, the Rubik’s cube has become the world’s best-selling puzzle game—one that introduced and promoted mathematical thinking to generations. Solving it involves recognizing patterns and developing and implementing algorithms.

The colorful, new cube is meant to be touched and solved. The students worked hard to figure out a movement mechanism that would enable that. They realized they couldn’t simply scale up the approach a handheld cube relies on because the friction would be too great. So to keep friction minimal, they devised a setup that utilizes rollers and transfer bearings.

“There is no other human-manipulable cube like this, to the best of our knowledge. That said, it is not technically the largest cube. We're aware of a larger cube that requires the user to literally roll it on the ground to solve and rotate the faces. None of that is required by our stationary design. So to be very precise, it is the world's largest stationary, human manipulable Rubik's cube.”

The math of Rubik's cubes is big too. It's been proved that all 43,252,003,274,489,856,000 positions of the cube can be brought to the standard position in at most 20 moves. Finding the proof took about thirty years, and finishing it off required computer assistance - about 35 CPU years of it!
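That 43-quintillion figure isn't mysterious: it comes from a standard counting argument, easy to reproduce in a few lines of Python (the comments give the usual reasoning):

```python
from math import factorial

corner_positions = factorial(8)      # arrangements of the 8 corner pieces
corner_twists    = 3**7              # 7 corners twist freely; the 8th twist is forced
edge_positions   = factorial(12)     # arrangements of the 12 edge pieces
edge_flips       = 2**11             # 11 edges flip freely; the 12th flip is forced
parity           = 2                 # corner and edge permutations must have equal parity

total = corner_positions * corner_twists * edge_positions * edge_flips // parity
print(total)  # → 43252003274489856000
```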
Life on the Infinite Farm

This is a great book about infinity - for kids.   For example, there's a cow named Gracie with infinitely many legs.  She likes new shoes, but she wants to keep wearing all her old shoes.  What does she do?

Life on the Infinite Farm is by Richard Evan Schwartz, and it's free here:

Later it will be published on paper by the American Mathematical Society.  I really like turning the pages when I'm reading a book to a child.  Is that old-fashioned?  What do modern parents think?

Gracie's tale is just a retelling of the first Hilbert Hotel story.  There's a hotel with infinitely many rooms.  Unfortunately they're all full.  A guest walks in.  What do you do? 

You move the guest in room 1 to room 2, the guest in room 2 to room 3, and so on.  Now there's a room available!
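The room-shifting trick is just the map n → n + 1. Here's a toy Python model of it on a finite sample of the hotel — a real infinite hotel doesn't fit in a dict, of course:

```python
def accommodate(occupied, newcomer):
    """Shift every guest from room n to room n + 1; put the newcomer in room 1."""
    shifted = {room + 1: guest for room, guest in occupied.items()}
    shifted[1] = newcomer
    return shifted

hotel = {1: 'Alice', 2: 'Bob', 3: 'Carol'}
# newcomer gets room 1, everyone else moves up one room
print(accommodate(hotel, 'Dave'))
```

The point of the story is that this map is a bijection from the rooms to the rooms numbered 2 and up — something impossible for a finite hotel.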

The Hilbert Hotel stories were introduced by the great mathematician David Hilbert in a 1924 lecture, and popularized by George Gamow in his classic One Two Three... Infinity.   That book made a huge impression on me as a child: one of the first times I tasted the delights of mathematics.

But that book is not good for children just learning to read.  Life on the Infinite Farm is.  And there's nothing that smells like "education" in this book.  It's just fun.

You can read more Hilbert Hotel stories here:'s_paradox_of_the_Grand_Hotel

But it's probably more fun to read Gamow's One Two Three... Infinity.   He was an excellent astrophysicist who in the late 1940s figured out how the first elements were created - the theory of Big Bang nucleosynthesis.    He was also a coauthor of the famous Alpher-Bethe-Gamow paper on this topic, also known as the αβγ paper.    Alpher was a grad student of Gamow, and they added the famous nuclear physicist Hans Bethe as a coauthor just for fun - since 'Bethe' is pronounced like the Greek letter 'beta':

It seemed unfair to the Greek alphabet to have the article signed by Alpher and Gamow only, and so the name of Dr. Hans A. Bethe was inserted in preparing the manuscript for print. Dr. Bethe, who received a copy of the manuscript, did not object, and, as a matter of fact, was quite helpful in subsequent discussions. There was, however, a rumor that later, when the alpha, beta, gamma theory went temporarily on the rocks, Dr. Bethe seriously considered changing his name to Zacharias.

Gamow also had a real knack for explaining things in fun ways, with the help of charming pictures.   I don't do many advertisements for commercial products, but I will for this!  You can get his book for as little as $2.98 plus shipping:

You should have read it by the time you were a teenager - but if you didn't, maybe it's not too late.

For more about Gamow, see:αβγ_paper
