A foundations of mathematics for the 21st century

It's here!   For decades, mathematicians have been dreaming of an approach to math where different proofs that x = y can be seen as different paths in a space.   It's finally been made precise, thanks to Vladimir Voevodsky and a gang of mathematicians who holed up for a year at the Institute for Advanced Studies, at Princeton.

I won't try to explain it, since that's what the book does.  I'll just mention a few of the radical new features:

•  It includes set theory as a special case, but it's founded on more general things called 'types'.  Types include sets, but also propositions.  Proving a proposition amounts to constructing an element of a certain type.  So, proofs are no longer 'outside' the mathematics being discussed, they're inside it just like everything else.  (There's a small Agda sketch of this, and of the 'paths' idea below, right after this list.)

• The logic is - in general - 'constructive', meaning that to prove something exists amounts to giving a procedure for constructing it.  As a result, the system can be and is being computerized with the help of programs like Coq and Agda.

• Types can be seen as 'spaces', and their elements as 'points'.  A proof that two elements of a type are equal can be seen as constructing a path between two points.  Sets are just a special case: the '0-types', which have no interesting higher-dimensional aspect.  There are also types that look like spheres and tori!  Technically speaking, the branch of topology called homotopy theory is now a part of logic!  That's why the subject is called homotopy type theory.

• Types can also be seen as infinity-groupoids.  Very roughly, these are structures with elements, isomorphisms between elements, isomorphisms between isomorphisms, and so on ad infinitum.  So, a certain chunk of the important new branch of math called 'higher category theory' is now part of logic, too.

• The most special contribution of Voevodsky is the univalence axiom.  Very very roughly, this expands the concept of 'equality' so that it's just as general as the hitherto more flexible concept of 'isomorphism' - or, if you know some more fancy math, 'equivalence'.   Mathematicians working on homotopy theory and higher category theory have known for decades that equality is too rigid a concept to be right - for certain applications.  The univalence axiom updates our concept of equality so that it's good again!    (However, nobody knows if this axiom is constructive - this is a big open question.)
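Here's the promised sketch, in Agda (one of the proof assistants mentioned above).  It's just my own toy example of ordinary dependent type theory, not anything from the book:

  -- Proving a proposition means constructing an element of a type.
  -- The proposition "A and B implies B and A" is the type of 'swap',
  -- and the function 'swap' itself is the proof.

  data _×_ (A B : Set) : Set where
    _,_ : A → B → A × B

  swap : {A B : Set} → A × B → B × A
  swap (a , b) = (b , a)

  -- The identity type: an element of x ≡ y is a proof that x equals y,
  -- which homotopy type theory pictures as a path from x to y.

  data _≡_ {A : Set} (x : A) : A → Set where
    refl : x ≡ x

  -- Reversing a path: symmetry of equality.
  sym : {A : Set} {x y : A} → x ≡ y → y ≡ x
  sym refl = refl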

Since this is all about foundations, and it's all quite new, please don't ask me yet what its practical applications are.  Ask me in a hundred years.  For now, I can tell you that this is the 'upgrade' that the foundations of math has needed ever since the work of Grothendieck.  It's truly 21st-century math.

It's also a book for the 21st century, because it's escaped the grip of expensive publishers!  While it's 600 pages long, a hardback copy costs less than $27.  Paperback costs less than $18, and an electronic copy is free!  

#spnetwork #homotopytheory #ncategories #logic #foundations  
 
That's pretty amazing.  Now to download the PDF and see if I can understand more than 3 words at a stretch!
 
Excellent.  Will this bring us closer to resolving some of the Gödelian paradoxes of set theory?
 
+Jeremy Kent - I think you'll enjoy the long introduction, which expands on the points I briefly made.  It's written in English.
 
+Craig Lennox I don't see how it could; as I understand it those paradoxes are results, not notational quirks that keep mathematicians from obtaining results.
 
Fascinating. Should make some good trip reading.
 
+Craig Lennox - I hope +Michael Shulman addresses your question, taking it very broadly to mean "does this new stuff shed any interesting light on classic concerns of foundations, like Russell's paradox, Goedel's theorems, large cardinals, and all that jazz?"
 
"So, proofs are no longer 'outside' the mathematics being discussed, they're inside it just like everything else."

Didn't Gödel do that like 60 years ago, with "interesting" results?
 
Goedel showed how to encode proofs as natural numbers and do interesting things with this.  I mean something a bit different.  In ordinary set theory, like the Zermelo-Fraenkel axioms, all the axioms are about sets.  The axioms are propositions, but the axioms aren't about propositions.   In homotopy type theory, the axioms are all about types.  Both sets and propositions are examples of types, and the concept of 'element of a set' is on exactly the same footing as 'proof of a proposition'.
 
I should add that this idea, of treating propositions and sets as examples of types, is not new.  The term 'type' goes back to Russell and Whitehead, but a crucial idea came with Haskell Curry in 1934 and was formalized by William Howard in 1969: the Curry-Howard correspondence between formulas and types, and between proofs and programs.  Another important figure is Per Martin-Löf, who introduced intuitionistic type theory.

https://en.wikipedia.org/wiki/History_of_type_theory

Types are by now very important in computer science.  The really new thing is homotopy type theory.
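For instance, under the Curry-Howard correspondence, the proposition "if A implies B and B implies C, then A implies C" corresponds to the type of function composition, and composing functions is the proof.  A toy Agda version (my own illustration):

  -- transitivity of implication = composition of programs
  compose : {A B C : Set} → (A → B) → (B → C) → (A → C)
  compose f g = λ a → g (f a)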
 
Are there not missing/forbidden axioms in constructive theories, +John Baez? Hence, some theorems would be missing/unprovable. I'm sure it's all incredibly general, but then I can't imagine how "constructive" is a meaningful label. Also, any logical system of rules of inference can be modelled on a computer, so I'm not sure what advantage any "constructive" approach brings per se.

I know your explanation is at a high level and I'm very thankful for it. These are merely the first things that jumped out and puzzled me.
 
I had been waiting for quite some time for a book like this! There were always books dealing specifically with type theory, and then there were books that dealt with "foundations", but more or less only scattered blog posts attempted to give a unified treatment of the two areas. For a non-professional math student, such as myself, it was quite difficult to study and then develop an integrated understanding of the aforesaid areas on one's own. I was really excited this morning over the publication of this book. I told my wife that set theory (as we understand it today) won't be the same! She just nodded and smiled.
 
OK I searched the introduction and was able to find "Fortunately, homotopy type theory gives a finer analysis of this situation, allowing various different kinds of logic to coexist and intermix". It seems the major novelties of their approach address these constructive questions. Thanks again for the post!
 
+John Baez Thanks for this link! I'll definitely check this out in detail to assess how it relates to the implementation of the Cell Platform (http://s23m.org/S23M/cell-platform.html) that is grounded in category theory, denotational semantics, and a bit of model theory.

I'm particularly intrigued by your comments on the construction of proofs within the system and on equality.

The Cell Platform is constructed as a self-contained system (containing a formal representation of itself), and it provides the ability to define domain-specific semantic equivalence functions, for which there are many practical use cases. I would not be surprised to find a similar approach in homotopy type theory.
 
One practical consequence is that it might make mathematical theorem proving in Coq and Agda much less tedious, if it provides some kind of foundation that makes manipulating mathematical structures less ad hoc and brittle (e.g. type classes in Coq).

+Robert Byrne 
Technically, computers can only verifiably do constructive proofs.

Missing things? Maybe you're thinking of excluded middle and proof by contradiction, but there are ways around that. Stuff like analysis is made harder, but you gain proofs that are immediately checkable and usable.
 
+Robert Byrne -  When you do mathematics constructively you can't use certain tricks to prove things exist without actually getting your hands on them, like the law of excluded middle ("if it didn't exist there would be a contradiction") and the axiom of choice ("presto, we choose a function without knowing what it is").  This means you get less, but what you get you really have in a very concrete way, suitable for computing with it. 

Homotopy Type Theory is fundamentally constructive, but you can add on extra assumptions when you want to make the system more like classical logic.  This gives you greater control.  It's like turning on the faucet when you really need water, instead of letting it just run all the time.

These quotes from the introduction may give you a bit more feeling for these issues:

Indeed, one can even have useful systems in which only certain types satisfy such further “classical” principles [like the law of excluded middle and axiom of choice], while types in general remain “constructive.”

It is worth emphasizing that univalent foundations does not require the use of constructive or intuitionistic logic. Most of classical mathematics which depends on the law of excluded middle and the axiom of choice can be performed in univalent foundations, simply by assuming that these two principles hold (in their proper, (-1)-truncated, form). However, type theory does encourage avoiding these principles when they are unnecessary, for several reasons.

First of all, every mathematician knows that a theorem is more powerful when proven using fewer assumptions, since it applies to more examples. [...]

Secondly, one of the additional virtues of type theory is its computable character. In addition to being a foundation for mathematics, type theory is a formal theory of computation, and can be treated as a powerful programming language. From this perspective, the rules of the system cannot be chosen arbitrarily the way set-theoretic axioms can: there must be a harmony between them which allows all proofs to be “executed” as programs.  [...]

Fortunately, constructive reasoning is not as hard as it may seem. In some cases, simply by rephrasing some definitions, a theorem can be made constructive and its proof more elegant. Moreover, in univalent foundations this seems to happen more often.
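To make the faucet metaphor concrete, here's a toy Agda sketch (my own illustration, not from the book): the law of excluded middle isn't provable in the core system, but you can take it as an explicit hypothesis exactly where you need it.

  data ⊥ : Set where                -- the empty type: 'false'

  ⊥-elim : {A : Set} → ⊥ → A        -- from a contradiction, anything follows
  ⊥-elim ()

  ¬ : Set → Set                     -- negation: A implies false
  ¬ A = A → ⊥

  data _⊎_ (A B : Set) : Set where  -- disjunction: 'A or B'
    inl : A → A ⊎ B
    inr : B → A ⊎ B

  LEM : Set₁
  LEM = (A : Set) → A ⊎ (¬ A)

  -- Double negation elimination is a classical principle: here it's
  -- proved only under the explicit hypothesis 'lem'.
  dne : LEM → (A : Set) → (¬ (¬ A)) → A
  dne lem A nna with lem A
  ... | inl a  = a
  ... | inr na = ⊥-elim (nna na)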
 
+Deen Abiola I don't have experience with theorem provers or verifiers, and I don't question your assertion that this system could improve them.

But for "verifiably do constructive proofs" I think we're talking at cross purposes and getting tangled on the notion of verifiability (which given unproven compilers, complex software - even hardware bugs - is a vexed question). For a reasonable notion of verifiable, computers can come up with correct existence proofs, e.g. by permuting rules of inference until a path to a desired or interesting existence statement is found.

I sense you know more about this than me, but that is what I mean.
 
+Robert Byrne - I agree that for your purposes, verifiability is not the main point. Everything +Deen Abiola said is correct, but I think what you mainly want to grapple with is the distinction between constructive and nonconstructive proofs.  This may help:

http://en.wikipedia.org/wiki/Constructive_proof

Besides some background, it gives an example of a constructive proof and a nonconstructive proof of the existence of irrational numbers.  The constructive proof lets you exhibit a specific example and write computer programs to compute things about this example.  The nonconstructive proof does not.
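In brief, the standard example concerns the claim that there are irrational numbers a and b with a^b rational.  Nonconstructively: consider √2^√2.  Either it's rational, and we're done with a = b = √2, or it's irrational, and then a = √2^√2, b = √2 gives a^b = √2^(√2·√2) = √2^2 = 2, which is rational.  This shows one of the two pairs works, but never tells you which.  Constructively, you just exhibit a pair, like a = √2 and b = log₂ 9 (both provably irrational): then a^b = 2^((log₂ 9)/2) = 2^(log₂ 3) = 3.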
 
Hey +Robert Byrne, proof verification has a specific meaning, in which a checker confirms that a sequence of deductions is well founded with respect to a set of axioms, rules of inference, and previously proven context. Yeah, there's a problem of turtles all the way down, but the systems themselves (the logic and the code) are verified. You could verify it on a verified kernel on a verified chipset, but no one really holds that level of paranoia.

You can do classical proofs by adding the requisite axioms and showing consistency is maintained. But if you want to evaluate the proof, say to extract a program, and some axioms and definitions are not computable, then the checker will hang if it ends up on one of those axioms during execution. Also, I was talking about tedium: some of the higher-level techniques that experts in type-theoretic proofs use to make things more compact, reusable and less cumbersome break outside constructivity.
 
There is certainly a hope that HoTT, and in particular the univalence axiom, will be useful in manipulating mathematical structures in proof assistants. However, in order for that hope to be realized, we need a constructive understanding of the univalence axiom, which is a hard problem -- some people call it the most important open problem in HoTT. There are other ways that HoTT may make proof assistants easier to work with -- for instance, it eliminates the need for setoids -- but again, for these to be truly constructive we need to improve the base theory. Right now, the HoTT-specific parts of type theory are not implemented any more constructively than LEM, but there is a widespread expectation that they could be.
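(For readers who haven't met setoids: roughly, a setoid is a type packaged together with an explicit equivalence relation, something like the following Agda record, where the field names are just illustrative:

  record Setoid : Set₁ where
    field
      Carrier : Set
      _≈_     : Carrier → Carrier → Set
      refl≈   : {x : Carrier} → x ≈ x
      sym≈    : {x y : Carrier} → x ≈ y → y ≈ x
      trans≈  : {x y z : Carrier} → x ≈ y → y ≈ z → x ≈ z

Every construction on a setoid then has to be shown to respect _≈_ by hand; the hope is that with univalence the identity type itself can do this work.)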
 
+Craig Lennox - I think type theory has some interesting things to contribute regarding Cantorian/Russellian paradoxes and consistency proofs, but I don't think the homotopy version of type theory has any more to add. For instance, one might naively hope that while we can't have a set of all sets, we could maybe have a groupoid of all sets, but this turns out not to work either, at least not in any way I've been able to think of.
 
Okay, thanks - I was getting carried away with the constructivity of HoTT.   I will now recklessly change my tune and bet that univalence cannot be implemented constructively.  I'd be happy to be proved wrong, and I hope my bet pisses people off enough to speed research into this question, and I don't understand this stuff very well... but univalence feels, to me, a bit too magical to be constructive. 
 
There's a problem that people mean all sorts of different things by the word 'constructive'. Univalence is constructive in the sense that it doesn't imply the axiom of choice, the law of excluded middle, or most other logical principles that people think of as nonconstructive. We know it doesn't, because it has models in higher toposes where these principles are false. But that's different from being computable, which is the meaning of 'constructive' here.

There is some evidence that univalence is computably constructive. Dan Licata and +Robert Harper have given a version of '2-dimensional' type theory (where types can be 1-groupoids, but nothing higher) which has good constructive properties that type theorists want. And chapter 2 of the book describes some of how we 'compute' with univalence that we hope to be able to make into actual computation. Interestingly, one of the obstacles to this seems to be choosing a nice globular algebraic definition of infinity-groupoid! Thierry Coquand is working on a different approach with constructivising (semi)simplicial sets, which also has had some success in low dimensions.
 
Very cool stuff, Mike.  By constructive I simply meant something like 'computable': can you really get your hands on it, or are you just bluffing?
 
Sorry. I mean... that's very interesting. 
 
That's how Southerners say "HoTT".
 
I've mentioned this to Mike before, but one specific thing that I would hope that HoTT can do is to give a "constructive" proof that H^n(F(X); A) = 0 for n ≥ 2, where F(X) is the free group on a set X, and A is any module. In this case, it's clear what constructive should mean: If I give you an n-cocycle, you should be able to hand me an (n - 1)-cochain with that cocycle as its coboundary. This is easy to do by hand for n = 2. (Hint: A 2-cocycle is the same as a set-theoretically split extension. Use the universal property of the free group (aka "induction") to split the extension group-theoretically.)

To see why HoTT should hopefully be relevant, let's rephrase the problem for n = 3. H^3(F(X); A) classifies 2-groups with π1 = F(X) and π2 = A (you just need to pick an associator). Saying that H^3(F(X); A) = 0 means that any such 2-group is trivial. To exhibit this constructively would be to say that if you have any (skeletal, to avoid choice issues) 2-group whose object group is F(X) and whose π2 is A, then we can write down an equivalence between this and the trivial such monoidal groupoid.

Another way of saying this is that if we have a set map from X to π1 of any 2-group, so that we get a homomorphism from F(X) to π1 of this 2-group (which induces, via "pullback," a 2-group with π1 = F(X)), this map can be extended to a map from F(X) (thought of as a discrete 2-group) to the 2-group. This extended map is unique up to coherent monoidal isomorphism. (If I have two cochains with the same coboundary, their difference is a cocycle.) So what this all says is that the free group on a set X, viewed as a discrete 2-group, is the same as the free 2-group on a set X. The point is that this is not clear a priori: why should we expect that the free 2-group on a set should have no nontrivial morphisms? From my very crude understanding of this whole business, this seems like the sort of question that one should be able to answer very cleanly by formulating notions like "free 2-group" in terms of higher inductive types.
 
+John Baez I cannot seem to find this post on selectedpapers.net; I am probably overlooking something.
Thanks again for the great work on spnetwork.
 
Wow, I've just read a year-old interview in Russian with Vladimir Voevodsky about the work he was conducting on foundations of mathematics, and his story about severe hallucinations he suffered for a year and a half. It is a very instructive, personal interview; if you know Russian, I recommend reading it: http://baaltii1.livejournal.com/198675.html
 
+John Baez "...holed up for a year at the Institute for Advanced Studies, at Princeton." An acquaintance (Dan Asimov, in another forum to which I reposted your article) points out: "The Institute for Advanced Study is in Princeton, N.J., but it is not at Princeton University. They're separate institutions, even though they have some cooperation agreements like shared library resources."
 
+Evan Jenkins - I have no memory of you mentioning this before, so I'm glad you brought it up again, because I think we already have a partial answer: it is true as long as the set X has decidable equality. (Hence it is true for all X if we assume LEM, but then of course it's no longer constructive.) I think this should be attributed to Peter Lumsdaine and Guillaume Brunerie. Basically, you can define a bouquet of X circles as a HIT, and then essentially by construction its loop space will be the free oo-group on X. The proof that Ω(S¹) = Z generalizes to the bouquet of circles as long as X has decidable equality, so that the free oo-group on X is in fact a set, which is the free 1-group on X. Whether this is provable for general X is an intriguing open problem!
 
+Michael Shulman : I emailed you about a year ago, so it's not too surprising that you don't remember. But that sounds great! I'll definitely want to take a look at that once I have a bit of free time.
 
+Michael Shulman - I was curious to see if an spnetwork link would register without an arXiv, PubMed or DOI number.  I'm not sure what 'should' happen.  I hope you guys put a copy on the arXiv, and maybe the spnetwork should allow ISBN numbers.  (Does the book have an ISBN number?)
 
Arxiv is on the to-do list. I don't know about an ISBN.
 
Despite the #spnetwork  hash tag, I can't find this post on http://selectedpapers.net, maybe because there is no arxiv or doi reference.  This same thing happens with lots of older books that I've wanted to mention.  Any ideas for how to get around this?  (In this case, maybe the authors could upload to the arxiv, but there is a general problem here to be addressed.)

I've created issue #51 for this, and discussion should probably go there:  https://github.com/cjlee112/spnet/issues/51
 
Thanks, +Kam-Yung Soh!  The article mostly misses the point of homotopy type theory, but that's typical of New Scientist, and "all publicity is good publicity".
 
Is anything actually incorrect about the New Scientist article?

(Even if they missed the immediate point of how this breakthrough impacts mathematics beyond computable proofs?)
 
My complaint about the New Scientist article is not exactly that it has factual errors, but that it mostly misses the point of this new book.  Type theory is old news, and so are computer-assisted proofs, and using type theory for computer-assisted proofs.  What's new about this book is that it's about homotopy type theory.  I tried to explain what that means in my post here. The New Scientist article doesn't say a word about that. 

This New Scientist article would have made more sense if it were about the computer-assisted proof of the Feit-Thompson theorem:

https://plus.google.com/u/0/117663015413546257905/posts/4BZRibN6iKQ
 
I have been following this book for a year, and it has been a truly amazing year indeed! I highly recommend everyone here buys a copy! Oh, btw, it's Coq and Agda 2 now ;) Don't forget Idris, with dependent types, +John Baez