Terence Tao
Works at UCLA
Attended Princeton University
Lives in Los Angeles
28,016 followers | 2,114,065 views


Terence Tao

Shared publicly
It's unusual to see a playable game with what is essentially a Turing-complete user interface.  My son and I had a lot of fun getting through the first dozen or so levels so far.
Truly geeky fun.
That was a very clever game.

Terence Tao

The American Mathematical Society has issued a call for proposals for the von Neumann symposium for 2016 (a week long summer conference on a mathematical topic of current interest).  [I'm serving on the selection committee for this symposium.]
This paper clears up what was an odd gap between the blowup and global regularity theory for certain simplified toy models of the Navier-Stokes equation.  To vary the comparative strength between the nonlinear and dissipative components of Navier-Stokes, one can replace the dissipative Laplacian term in the true Navier-Stokes equation with a hyperdissipative term (a power of the Laplacian with exponent alpha larger than one) or a hypodissipative term (a power with exponent alpha less than one).  The larger alpha is, the more powerful the dissipation, and the more likely one believes global regularity holds.

In three spatial dimensions, the critical exponent is alpha=5/4, and it is known that for alpha at or above this level one has global regularity.  A few years ago, I observed that one could shave a small number of logarithms from the critical dissipation (which roughly corresponds to setting alpha to be "infinitesimally" below 5/4) and still have global regularity.  On the other hand, if one shaves off too many logarithms, then an argument in a more recent paper of mine shows that one can construct a toy (non-autonomous) dyadic model of Navier-Stokes which exhibits finite time blowup.  However, there was a puzzling intermediate region in which neither global regularity nor finite time blowup was clear.

What these authors have done is performed a finer analysis of the energy flow between dyadic scales to show that (a simplified dyadic model of) the Navier-Stokes equation exhibits global regularity in this intermediate regime; they are currently working on extending these results to the non-dyadic Navier-Stokes equation (with slightly supercritical hyperdissipation).

Unfortunately, this work does not directly impact the true Navier-Stokes equations (in which alpha=1), but it does improve our understanding of where the threshold between critical and genuinely supercritical behaviour lies.
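The flavor of the dyadic models in question can be conveyed by a toy simulation. Below is a minimal sketch (my own illustration, not the model studied by these authors) of a Katz-Pavlovic-type dyadic shell model with an adjustable dissipation exponent alpha: each shell n carries one scalar u_n, the quadratic nonlinearity cascades energy toward higher shells, and dissipation acts with strength 2^(2*alpha*n) on shell n, mimicking a power of the Laplacian. All numerical parameters are illustrative choices.

```python
# A Katz-Pavlovic-type dyadic shell model with hyperdissipation.
# (Illustrative sketch only; shell count, viscosity, and time step
# are arbitrary choices, not taken from the paper under discussion.)

N_SHELLS = 6
NU = 0.01        # viscosity
ALPHA = 1.25     # hyperdissipation exponent (the critical value in 3D is 5/4)
DT = 1e-4        # forward-Euler time step

def step(u):
    """One forward-Euler step of the dyadic model."""
    du = []
    for n in range(N_SHELLS):
        prev = u[n - 1] if n > 0 else 0.0
        nxt = u[n + 1] if n < N_SHELLS - 1 else 0.0
        # Nonlinear cascade: receives energy from shell n-1, passes it to n+1.
        cascade = 2.0**n * prev**2 - 2.0 ** (n + 1) * u[n] * nxt
        # Hyperdissipation: acts like (-Laplacian)^alpha on shell n.
        dissipation = NU * 2.0 ** (2 * ALPHA * n) * u[n]
        du.append(cascade - dissipation)
    return [ui + DT * dui for ui, dui in zip(u, du)]

u = [1.0] + [0.0] * (N_SHELLS - 1)      # all energy starts in the lowest shell
energy0 = sum(x * x for x in u)
for _ in range(2000):
    u = step(u)
print(sum(x * x for x in u) < energy0)  # dissipation drains total energy
```

The nonlinearity is energy-neutral (the flux terms telescope across shells), so any decrease in total energy comes from the dissipative term; raising ALPHA strengthens the damping of high shells, which is the knob that the regularity theory described above is turning.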

  #spnetwork #recommend arXiv:1403.2852
David Barbato, Francesco Morandin, Marco Romito, "Global regularity for a logarithmically supercritical hyperdissipative dyadic equation" (shared via the Selected Papers Network).

Terence Tao

[Another repost from my former Google Buzz feed, which I first posted on Aug 16, 2010.]

Formally, a mathematical proof consists of a sequence of mathematical statements and deductions (e.g. "If A, then B"), strung together in a logical fashion to create a conclusion. A simple example of this is a linear chain of deductions, such as "A -> B -> C -> D -> E", to create the conclusion "A -> E". In practice, though, proofs tend to be more complicated than a linear chain, often acquiring a tree-like structure (or more generally, the structure of a directed acyclic graph), due to the need to branch into cases, or to reuse a hypothesis multiple times. Proof methods such as proof by contradiction, or proof by induction, can lead to even more intricate loops and reversals in a mathematical argument.
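The graph-theoretic picture above can be made concrete in a toy setting. The sketch below (an illustration of mine, not part of the original post; all statement names are invented) models a proof as a directed graph of deduction steps: a claimed conclusion is supported only if it is reachable from the hypothesis, and circular justification shows up as a cycle.

```python
# Toy model of a proof as a directed graph of implications.
# Edges are deduction steps; a conclusion is supported if it is
# reachable from the hypothesis.  (Illustrative names only.)

def reaches(steps, start, goal):
    """Check whether `goal` is derivable from `start` via the given steps."""
    frontier, seen = [start], {start}
    while frontier:
        node = frontier.pop()
        if node == goal:
            return True
        for nxt in steps.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def has_cycle(steps):
    """Detect circular justification (e.g. A proved from B, B proved from A)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(node):
        color[node] = GRAY
        for nxt in steps.get(node, []):
            c = color.get(nxt, WHITE)
            if c == GRAY or (c == WHITE and visit(nxt)):
                return True
        color[node] = BLACK
        return False
    return any(color.get(n, WHITE) == WHITE and visit(n) for n in list(steps))

chain = {"A": ["B"], "B": ["C"], "C": ["D"], "D": ["E"]}
print(reaches(chain, "A", "E"))             # the linear chain supports A -> E
print(has_cycle({"A": ["B"], "B": ["A"]}))  # circular justification detected
```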

Unfortunately, not all proposed proofs of a statement in mathematics are actually correct, and so some effort needs to be put into verification of such a proposed proof. Broadly speaking, there are two ways that one can show that a proof can fail. Firstly, one can find a "local", "low-level" or "direct" objection to the proof, by showing that one of the steps (or perhaps a cluster of steps, see below) in the proof is invalid. For instance, if the implication C -> D is false, then the above proposed proof "A -> B -> C -> D -> E" of "A -> E" is invalid (though it is of course still conceivable that A -> E could be proven by some other route).

Sometimes, a low-level error cannot be localised to a single step, but rather to a cluster of steps. For instance, if one has a circular argument, in which a statement A is claimed using B as justification, and B is then claimed using A as justification, then it is possible for both implications A -> B and B -> A to be true, while the deduction that A and B are then both true remains invalid. (Note though that there are important and valid examples of near-circular arguments, such as proofs by induction, but this is not the topic of my discussion today.)

Another example of a low-level error that is not localisable to a single step arises from ambiguity. Suppose that one is claiming that A->B and B->C, and thus that A->C. If all terms are unambiguously well-defined, this is a valid deduction. But suppose that the expression B is ambiguous, and actually has at least two distinct interpretations, say B1 and B2. Suppose further that the A->B implication presumes the former interpretation B=B1, while the B->C implication presumes the latter interpretation B=B2, thus we actually have A->B1 and B2->C. In such a case we can no longer validly deduce that A->C (unless of course we can show in addition that B1->B2). In such a case, one cannot localise the error to either "A->B" or "B->C" until B is defined more unambiguously. This simple example illustrates the importance of getting key terms defined precisely in a mathematical argument.

The other way to find an error in a proof is to obtain a "high level" or "global" objection, showing that the proof, if valid, would necessarily imply a further consequence that is either known or strongly suspected to be false. The most well-known (and strongest) example of this is the counterexample. If one possesses a counterexample to the claim A->E, then one instantly knows that the chain of deduction "A->B->C->D->E" must be invalid, even if one cannot immediately pinpoint where the precise error is at the local level. Thus we see that global errors can be viewed as "non-constructive" guarantees that a local error must exist somewhere.
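This "non-constructive guarantee" can be demonstrated in a toy setting: given a counterexample as a concrete witness, one can mechanically scan the chain for a link whose antecedent holds on the witness but whose consequent fails. The predicates below are invented purely for illustration (the bogus claim here is "every multiple of 4 is a multiple of 8"); they are not from the original post.

```python
# On a concrete counterexample to A -> E, at least one link of the
# claimed chain A -> B -> C -> D -> E must fail; checking each link
# on the witness locates one.  (Predicates invented for illustration.)

chain = [
    ("A", lambda n: n % 4 == 0),   # n is a multiple of 4
    ("B", lambda n: n % 2 == 0),   # n is even
    ("C", lambda n: n % 8 == 0),   # n is a multiple of 8  (the bad step)
    ("D", lambda n: n % 8 == 0),
    ("E", lambda n: n % 8 == 0),
]

def failing_links(chain, witness):
    """Return the links whose antecedent holds but whose consequent fails."""
    return [
        (p, q)
        for (p, fp), (q, fq) in zip(chain, chain[1:])
        if fp(witness) and not fq(witness)
    ]

witness = 4  # counterexample: 4 is a multiple of 4 but not of 8
print(failing_links(chain, witness))  # [('B', 'C')]
```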

A bit more subtly, one can argue using the structure of the proof itself. If a claim such as A->E could be proven by a chain A->B->C->D->E, then this might mean that a parallel claim A'->E' could then also be proven by a parallel chain A'->B'->C'->D'->E' of logical reasoning. But if one also possesses a counterexample to A'->E', then this implies that there is a flaw somewhere in this parallel chain, and hence (presumably) also in the original chain. Other examples of this type include proofs of some conclusion that mysteriously never use in any essential way a crucial hypothesis (e.g. proofs of the non-existence of non-trivial integer solutions to a^n+b^n=c^n that mysteriously never use the hypothesis that n is strictly greater than 2, or which could be trivially adapted to cover the n=2 case).

While global errors are less constructive than local errors, and thus less satisfying as a "smoking gun", they tend to be significantly more robust. A local error can often be patched or worked around, especially if the proof is designed in a fault-tolerant fashion (e.g. if the proof proceeds by factoring a difficult problem into several strictly easier pieces, which are in turn factored into even simpler pieces, and so forth). But a global error tends to invalidate not only the proposed proof as it stands, but also all reasonable perturbations of that proof. For instance, a counterexample to A->E will automatically defeat any attempts to patch the invalid argument A->B->C->D->E, whereas the more local objection that C does not imply D could conceivably be worked around.

(There is a mathematical joke in which a mathematician is giving a lecture expounding on a recent difficult result that he has just claimed to prove. At the end of the lecture, another mathematician stands up and asserts that she has found a counterexample to the claimed result. The speaker then rebuts, "This does not matter; I have two proofs of this result!". Here one sees quite clearly the distinction of impact between a global error and a local one.)

It is also a lot quicker to find a global error than a local error, at least if the paper adheres to established standards of mathematical writing. To find a local error in an N-page paper, one basically has to read a significant fraction of that paper line-by-line, whereas to find a global error it is often sufficient to skim the paper to extract the large-scale structure. This can sometimes lead to an awkward stage in the verification process when a global error has been found, but the local error predicted by the global error has not yet been located. Nevertheless, global errors are often the most serious errors of all.

It is generally good practice to try to structure a proof to be fault tolerant with respect to local errors, so that if, say, a key step in the proof of Lemma 17 fails, then the paper does not collapse completely, but contains at least some salvageable results of independent interest, or shows a reduction of the main problem to a simpler one. Global errors, by contrast, cannot really be defended against by a good choice of proof structure; instead, they require a good choice of proof strategy that anticipates global pitfalls and confronts them directly.

One last closing remark: as error-testing is the complementary exercise to proof-building, it is not surprising that the standards of rigour for the two activities are dual to each other. When one is building a proof, one is expected to adhere to the highest standards of rigour that are practical, since a single error could well collapse the entire effort. But when one is testing an argument for errors or other objections, then it is perfectly acceptable to use heuristics, hand-waving, intuition, or other non-rigorous means to locate and describe errors. This may mean that some objections to proofs are not watertight, but instead indicate that either the proof is invalid, or some accepted piece of mathematical intuition is in fact inaccurate. In some cases, it is the latter possibility that is the truth, in which case the result is deemed "paradoxical", yet true. Such objections, even if they do not invalidate the paper, are often very important for improving one's intuition about the subject.
Michael He's profile photoAmeera Chowdhury's profile photoJOHN M. MARWA's profile photoPhilippe Beaudoin's profile photo
At the risk of being part of the many spammy comments, I should note that my proofs often tend to not have the acyclic property :)

I've been thinking about how to relate the classical theory of formal proofs to the mathematical practice of proving things. I'm still convinced that informal mathematical practice can be informed by the formalism. Alexandre Miquel wrote a paper on the "reasonable effectiveness of mathematical proof" (link). It outlines how a physical refutation of a mathematical theory of physics can lead to a specific counterexample to a mathematical hypothesis.

Conceivably this could be applied to a "paper" mathematical proof: the high level counter-example can be "mined" to find incorrect assumptions of the erroneous proof, using classical techniques, e.g. methods used by Ulrich Kohlenbach.

Anyhow the point I'm trying to make is that there is still hope for the classical theory of proofs to make sense of the mathematical practice of "high-level counter-proofs" and other mathematical tricks.

Terence Tao

The IHES summer school (July 9-23 2014) on analytic number theory (covering, among other things, the recent progress of Zhang and others on prime gaps) is now accepting applications.

Terence Tao

A picture is worth up to 64K bytes.

[It makes me wonder if there are any difficult mathematical topics of public interest which would also be worth representing in comic form.  I know a few examples, e.g. Löb's theorem, but perhaps there could be others.]
+Victor Porton Though not a consequence of the theorem itself, here is an interesting exploration that turns it into a kind of spreadsheet evaluator -

Terence Tao

I ended my term on the Abel committee last year and was not directly involved in the decision this time around, but Sinai is certainly a worthy choice for this award.

Terence Tao

The NSF is calling for proposals for week-long CBMS regional research conferences (based around a single lecturer giving an intensive series of lectures on one topic, with the notes to be converted into a book).  I gave one of these in Park City (on dispersive PDE) back in 2003; it was extremely work-intensive (two lectures daily for five days), but very productive and enjoyable.
I just ordered your book "Solving Mathematical Problems: A Personal Perspective". No doubt I would grasp only a fraction of it, but looking forward to reading it.

Terence Tao

Somewhat late on this, but: the 2014 Wolf prize in mathematics is awarded to Peter Sarnak.

Terence Tao

There is a lot of discussion in various online mathematical forums currently about the interpretation, derivation, and significance of Ramanujan's famous (but extremely unintuitive) formula

1+2+3+4+... = -1/12   (1)

or similar divergent series formulae such as

1-1+1-1+... = 1/2 (2)


1+2+4+8+... = -1. (3)

One can view this topic from either a pre-rigorous, rigorous, or post-rigorous perspective (see this page of mine for a description of these three terms).  The pre-rigorous approach is not particularly satisfactory: here one is taught the basic rules for manipulating finite sums (e.g. how to add or subtract one finite sum from another), and one is permitted to blindly apply these rules to infinite sums.  This approach can give derivations of identities such as (1), but can also lead to derivations of even more blatant absurdities such as 0=1, which of course makes any similar derivation of (1) look quite suspicious.
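For instance (an illustration of mine, not from the original post), blindly applying the shift rule for finite sums to the divergent series S = 1 + 1 + 1 + ... already yields an absurdity:

```latex
S = 1 + 1 + 1 + \cdots
  = 1 + (1 + 1 + 1 + \cdots)
  = 1 + S,
```

and subtracting S from both sides gives 0 = 1; the same unjustified shift rule underlies the pre-rigorous derivations of (1).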

From a rigorous perspective, one learns in undergraduate analysis classes the notion of a convergent series and a divergent series, with the former having a well-defined limit, which enjoys most of the same laws of series that finite series do (particularly if one restricts attention to absolutely convergent series).  In more advanced courses, one can then learn of more exotic summation methods (e.g. Cesaro summation, p-adic summation or Ramanujan summation) which can sometimes (but not always) be applied to certain divergent series, and which obey some (but not all) of the rules that finite series or absolutely convergent series do.  One can then carefully derive, manipulate, and use identities such as (1), so long as it is made precise at any given time what notion of summation is in force.  For instance, (1) is not true if summation is interpreted in the classical sense of convergent series, but it is true for some other notions of summation, such as Ramanujan summation, or a real-variable analogue of that summation that I describe in this post.
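Cesaro summation, for example, assigns to a series the limit of the averages of its partial sums, and is easy to demonstrate numerically for Grandi's series (2). A minimal sketch (the helper name `cesaro_means` is my own):

```python
# Cesaro summation: the limit of the running averages of partial sums.
# For Grandi's series 1 - 1 + 1 - 1 + ... the partial sums oscillate
# between 1 and 0, but their averages converge to 1/2, matching (2).

from itertools import accumulate

def cesaro_means(terms):
    """Running averages of the partial sums of `terms`."""
    partial = list(accumulate(terms))
    return [sum(partial[: k + 1]) / (k + 1) for k in range(len(partial))]

grandi = [(-1) ** n for n in range(2000)]   # 1, -1, 1, -1, ...
print(cesaro_means(grandi)[-1])             # -> 0.5
```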

From a post-rigorous perspective, I believe that an equation such as (1) should more accurately be rendered as

1+2+3+4+... = -1/12 + ...

where the "..." on the right-hand side denotes terms which could be infinitely large (or divergent) when interpreted classically, but which one wishes to view as "negligible" for one's intended application (or at least "orthogonal" to that application).  For instance, as a rough first approximation (and assuming implicitly that the summation index in these series starts from n=1 rather than n=0), (1), (2), (3) should actually be written as

1+2+3+4+... = -1/12  + 1/2 infinity^2   (1)'

1-1+1-1+... = 1/2 - (-1)^{infinity} /2 (2)'


1+2+4+8+... = -1 + 2^{infinity}  (3)'

and more generally

1+x+x^2+x^3+... = 1/(1-x) + x^{infinity}/(x-1)

where the terms involving infinity do not make particularly rigorous sense, but would be considered orthogonal to the application at hand (a physicist would call these quantities unphysical) and so can often be neglected in one's manipulations.  (If one wanted to be even more accurate here, the 1/2 infinity^2 term should really be the integral of x dx from 0 to infinity.)  To rigorously formalise the notion of ignoring certain types of infinite expressions, one needs to use one of the summation methods mentioned above (with different summation methods corresponding to different classes of infinite terms that one is permitted to delete); but the above post-rigorous formulae can still provide clarifying intuition, once one has understood their rigorous counterparts.  For instance, the formulae (1)' and (3)' are now consistent with the left-hand side being positive and diverging to infinity, and the formula (2)' is consistent with the left-hand side being indeterminate in limit, with both 0 and 1 as limit points.  The fact that divergent series often do not behave well with respect to shifting the series can now be traced back to the fact that the infinite terms in the above identities produce some finite remainders when the infinity in those terms is shifted, say to infinity+1.
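The smoothed reading of (1)' can be checked numerically. Replacing the sharp partial sum with a smooth cutoff eta (with eta(0) = 1), the sums of n * eta(n/N) behave like C * N^2 - 1/12 + o(1), where C is the integral of x * eta(x) from 0 to infinity; subtracting the divergent N^2 piece exposes the -1/12. A sketch with the (arbitrarily chosen) Gaussian cutoff eta(x) = exp(-x^2), for which C = 1/2:

```python
# Smoothed sums of 1+2+3+...: with a Gaussian cutoff, the quantity
# sum_{n>=1} n*exp(-(n/N)^2) - N^2/2 approaches -1/12 as N grows.
# (The choice of cutoff is an illustrative assumption of this sketch.)

import math

def smoothed_sum(N):
    # Sum far enough into the tail that exp(-(n/N)^2) is negligible.
    return sum(n * math.exp(-((n / N) ** 2)) for n in range(1, 20 * N))

for N in (10, 100, 1000):
    print(N, smoothed_sum(N) - N**2 / 2)   # -> approaches -1/12 = -0.0833...
```

This makes the post-rigorous formula (1)' concrete: the 1/2 infinity^2 term is the divergent C * N^2 bulk, while -1/12 is the universal constant term left over once that bulk is removed.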

For a more advanced example, I believe that the "field of one element" should really be called "the field of 1+... elements", where the ... denotes an expression which one believes to be orthogonal to one's application.
José L. Torres's profile photoAaron Wood's profile photoDavid Foster's profile photophilippe roux's profile photo
It seems physicists somehow regard it as correct, but mathematicians do the opposite. It is very strange.
Education
  • Princeton University
    Mathematics, 1992 - 1996
  • Flinders University
    Mathematics, 1989 - 1992
Work
  • UCLA
    Mathematician, present
Places lived
Los Angeles
Adelaide, Australia