In 1936 Tarski proved a fundamental theorem of logic: the undefinability of truth. Roughly speaking, this says there's no consistent way to extend arithmetic so that it talks about 'truth' for statements about arithmetic. Why not? Because if we could, we could cook up a statement that says "I am not true." This would lead to a contradiction, the Liar Paradox: if this sentence is true then it's not, and if it's not then it is.
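The self-reference Tarski exploits (via the diagonal lemma) has a familiar programming analogue: a quine, a program that constructs its own source by applying a template to a quotation of itself. A minimal sketch of that diagonal trick in Python (my illustration, not anything from Tarski's proof):

```python
# The diagonal trick behind the Liar sentence: apply a template to a
# quotation of itself.  The result, 'source', is a string that describes
# exactly how it was built -- a "quine".
template = 'template = %r\nsource = template %% template'
source = template % template
print(source)
```

Running the printed string as a program reproduces the same string, just as the Liar sentence, when spelled out, turns out to assert something about itself.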
This is why the concept of 'truth' plays a limited role in most modern work on logic... surprising as that might seem to novices!
However, suppose we relax a bit and allow probability theory into our study of arithmetic. Could there be a consistent way to say, within arithmetic, that a statement about arithmetic has a certain probability of being true?
We can't let ourselves say a statement has a 100% probability of being true, or a 0% probability of being true, or we'll get in trouble with the undefinability of truth. But suppose we only let ourselves say that a statement has some probability greater than X and less than Y of being true, where 0 < X < Y < 1. Is that okay?
Yes it is, according to this draft of a paper by Paul Christiano, Eliezer Yudkowsky, Marcello Herreshoff and Mihaly Barasz!
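Concretely, the consistency condition in their draft is a 'reflection principle' of roughly this shape (from memory, so check the paper for the exact statement):

```latex
% Reflection principle (roughly as in the Christiano et al. draft):
% whenever the probability assignment P puts phi strictly inside (a,b),
% the system assigns probability 1 to that very fact about itself.
\forall \varphi,\ \forall a,b \in \mathbb{Q}:\quad
  a < \mathbb{P}(\varphi) < b
  \;\Longrightarrow\;
  \mathbb{P}\bigl(\, a < \mathbb{P}(\ulcorner \varphi \urcorner) < b \,\bigr) = 1
```

Note the strict inequalities: the system may be certain that a statement's probability lies in an open interval, but never asserts outright that the statement is true or false, which is how it dodges the Liar.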
But there's a catch, or two. First, there are many self-consistent ways to assess the probability of truth of arithmetic statements. This suggests that the probability is somewhat 'subjective'. But that's fine if you think probabilities are inherently subjective.
A bit more problematic is this: their proof that there exists a self-consistent way to assess probabilities is not constructive. In other words, you can't use it to actually get your hands on a consistent assessment.
Fans of game theory will be amused to hear why: the proof uses Kakutani's fixed point theorem! This is the result that John Nash used to prove games have equilibrium solutions, where nobody can improve their expected payoff by changing their strategy. And this result is not constructive.
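Not from the paper, but to illustrate the Nash connection: Kakutani's theorem tells you an equilibrium exists without telling you how to find it, yet for small games you can often compute one numerically anyway. Here's a toy sketch using fictitious play on matching pennies (the setup and numbers are my own illustration):

```python
import numpy as np

# Payoff matrix for player 1 in matching pennies; player 2's payoffs
# are the negatives (a zero-sum game).
A = np.array([[1, -1],
              [-1, 1]])

# Fictitious play: each round, both players best-respond to the
# opponent's empirical mix of past plays.
counts1 = np.array([1.0, 0.0])  # arbitrary initial plays
counts2 = np.array([0.0, 1.0])
for _ in range(20000):
    mix1 = counts1 / counts1.sum()
    mix2 = counts2 / counts2.sum()
    br1 = np.argmax(A @ mix2)      # player 1's best response
    br2 = np.argmax(-(mix1 @ A))   # player 2's best response
    counts1[br1] += 1
    counts2[br2] += 1

# Empirical frequencies approach the mixed equilibrium (1/2, 1/2).
print(counts1 / counts1.sum())
print(counts2 / counts2.sum())
```

For zero-sum games like this, the empirical frequencies provably converge to equilibrium; the point of the paper's situation is that no analogous constructive procedure is known for their probability assignment.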
If you don't know Tarski's original result, try this:
http://en.wikipedia.org/wiki/Tarski%27s_undefinability_theorem
+Tim Wesson
But what about possible collisions between those statements? E.g. if you assume two statements are true because they have "probability" > 0.9999999999999, which is surely true enough, is it truly unlikely that you'll hit some sort of contradiction? I don't see that proven truly unlikely, though I may be missing something. edit: also, this is why I am not sure those numbers can be called probabilities yet.
Apr 9, 2013
+Dmytry Lavrov Starting with your second point: why should these things be called 'probabilities'? Well, if they aren't unique, then that is a very good question. One that I came here hoping for some insight on. Unfortunately, none of the big guns have turned up, it seems :o(
Okay, to your first question: it's in the construction, specifically the question of coherence (section 2.1). That there is coherence implies that a near-contradiction arises exceedingly rarely. For example, a one-in-a-million falsehood in L will turn up typically one time in a million attempts to make the language L determinate. What is more, any one such falsehood could easily be constructed not to occur by fixing it as true ahead of its normal position in the queue, e.g. right after the base axioms, at T1, instead of waiting until making the determination directly or indirectly at Ti for some i>1. It is worth noting that making L determinate still does not fix L'\L, that is: statements involving the probability predicate.
Those statements of high probability in L are those which come out true in almost all models in which statements of L are bivalent - that is, deterministically true or false. However, your high probability statement is no longer operating in such a scenario, but rather one where only the direct deductions and contradictions from the axioms are bivalent. Thus your scenario of high probability falsehoods does not apply, since the only truths and falsehoods are already predetermined, and no more will arise. Instead, statements deductively 'near' high probability statements will themselves have high probability, and this is enforced by axiom 1 of Theorem 1.
Statements in L'\L are determined not by this process, but rather by the use of the fixed point theorem in the proof of the consistency of the reflection principle. This process also fixes the probabilities of each choice in the initial determination, but is non-constructive, so you cannot know the probabilities, although you can reason with and about them: the probability relations remain.
My original question is how do we know that the fixed point (that fixes all the probabilities) is unique? This result might still be useful without uniqueness, but 'probability' would be a misnomer.
Apr 10, 2013
+Ian simpson Do you have any thoughts on this?
Apr 10, 2013
Fascinated but beyond me at the moment
Apr 10, 2013
Sure thing!
Apr 10, 2013