Metaculus - a prediction website

Metaculus is a website where you can ask questions about future events and predict their probabilities. The "wisdom of crowds" suggests that this is a pretty reasonable way to divine the future. But some people are better predictors than others, and this skill can be learned. Check it out:

http://www.metaculus.com/questions/
Metaculus was set up by two professors at U.C. Santa Cruz. Anthony Aguirre, a physicist, is a co-founder of the Foundational Questions Institute, which tries to catalyze breakthrough research in fundamental physics, and the Future of Life Institute, which studies disruptive technologies like AI. Greg Laughlin, an astrophysicist, is an expert at predictions on timescales ranging from the milliseconds relevant to high-frequency trading to the ultra-long-term stability of the solar system.
I've asked and answered a few questions there. It's fun, and it will get more fun as more people take it seriously! Here's some stuff from their latest report:

Dear Metaculus Users,

We recently logged our 10,000th prediction. Not quite Big Data (which will take lots more growth), but we're making progress! With this milestone passed, it seems like a good time to share an overview of our results.
First, the big picture. This can be summarized with a single histogram showing the distribution of the first 10,042 predictions on our first 146 questions. Unambiguously, the three most popular predictions are 1%, 50% and 99%, with spikes of varying strength at each multiple of 5%. There's a definite overall skew toward lower percentages. This stems in part from the fact that the subset of provocative low-probability questions is most naturally worded so that the default outcome is negative, e.g., Question: Will we confirm evidence for megastructures orbiting the star KIC 8462852? (Answer: No.)

The histogram also makes the point that while 99% confidence (effectively complete confidence) is very common, it's very rare that anyone is ever 98% sure about anything. One takeaway from the pileup at 1% and 99% is that we could use more possible values there, so we plan to introduce an expanded range, from 0.1% to 99.9%, soon. But as cautioned below, be careful in using it. Excluding the 1% and 99% spikes and smoothing a bit, the prediction distribution turns out to be a pretty nice gaussian, illustrating the ubiquitous effect of the central limit theorem.

The wheels of Metaculus grind slowly, but they grind very fine. Almost 80% of the questions posed on the site are still either active (open) or closed (pending resolution). We are starting, however, to get meaningful statistics on the questions that have resolved to date, a collection spanning a wide range of topics (from AlphaGo to LIGO, and from VIX to SpaceX). We've been looking at different metrics to evaluate collective predictive success. A simple approach is to chart the fraction of outcomes that actually occurred, after aggregating all of the predictions in each percentage bin. In the limit of a very large number of optimally calibrated predictions on a very large number of questions, the result would be the straight line shown in gold in Figure 2 below.
It's clear that the aggregation produced by the Metaculus user base compares quite well to this optimal result. Error bars are 25% and 75% confidence intervals, based on bootstrap resampling of the questions. The only marginally significant departure from the optimal result comes at the low end: as a whole, the user base has been slightly biased toward pessimism, assigning a modest overabundance of low probabilities to events that actually wound up happening. In particular, the big spike in the 1% bin in Figure 1 isn't fully warranted. (This is also somewhat true at 99%: these predictions have come true 90% of the time.) Take-away: if you're inclined to pull the slider all the way to the left, or even to the right, give it a second thought...

It has been demonstrated that successful prediction is a skill that can be learned. Predictors get better over time, so it's interesting to look at the performance of the top predictors on Metaculus, defined as users with a current score greater than 500. The histogram of predictions for this subset of top users shows some subtle differences from the histogram of all the predictions. The top predictors tend to be more equivocal: the 50% bin is still highly prominent, whereas the popularity of 1% votes is strongly diminished.
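The calibration check described in the report can be sketched in a few lines of Python. Everything here is illustrative: the (probability, outcome) pairs are made up, and Metaculus's actual binning and bootstrap procedures may differ in detail.

```python
import random

# Hypothetical resolved questions: (predicted probability, did it happen?).
# These values are invented for illustration, not real Metaculus data.
data = [(0.9, True), (0.8, True), (0.2, False), (0.1, False),
        (0.7, True), (0.3, True), (0.6, False), (0.95, True),
        (0.05, False), (0.5, True), (0.5, False), (0.85, True)]

def calibration(pairs, n_bins=5):
    """Fraction of events that actually occurred in each probability bin."""
    bins = [[] for _ in range(n_bins)]
    for p, outcome in pairs:
        i = min(int(p * n_bins), n_bins - 1)  # clamp p = 1.0 into the top bin
        bins[i].append(outcome)
    return [sum(b) / len(b) if b else None for b in bins]

# Bootstrap: resample the questions with replacement many times to estimate
# the spread of the calibration curve (here, 25th/75th percentiles).
random.seed(0)
samples = [calibration(random.choices(data, k=len(data))) for _ in range(1000)]

for i, frac in enumerate(calibration(data)):
    vals = sorted(s[i] for s in samples if s[i] is not None)
    lo, hi = vals[len(vals) // 4], vals[3 * len(vals) // 4]
    print(f"bin {i}: observed {frac:.2f}, bootstrap 25-75%: [{lo:.2f}, {hi:.2f}]")
```

Perfect calibration would put each bin's observed fraction near the bin's center, i.e. along the gold diagonal in the report's Figure 2.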
I recently predicted - not on Metaculus - that Hillary Clinton had a 99% chance of getting the Democratic nomination. Maybe I should have said 98%. But I definitely should put my prediction on Metaculus! This could develop into a useful resource.
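A quick way to see why the gap between 98% and 99% matters is a proper scoring rule. Metaculus has its own scoring system; as a simple stand-in, here is the standard logarithmic score, which shows that the extra confidence gains almost nothing if you're right and costs a lot if you're wrong:

```python
import math

def log_score(p, outcome):
    """Logarithmic scoring rule: higher (less negative) is better."""
    return math.log(p if outcome else 1 - p)

# If the event happens, 99% barely beats 98%: log(0.99/0.98) ~ 0.01.
gain = log_score(0.99, True) - log_score(0.98, True)

# If it doesn't, 99% is punished much harder: log(0.01) vs log(0.02),
# a difference of log(2) ~ 0.69.
loss = log_score(0.98, False) - log_score(0.99, False)

print(f"gain if right: {gain:.4f}, extra penalty if wrong: {loss:.4f}")
```

That asymmetry is the quantitative version of the report's advice: think twice before pulling the slider all the way to either end.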
If you want to become a "super-forecaster", you need to learn about the Good Judgment Project. Start here:

http://www.npr.org/sections/parallels/2014/04/02/297839429/-so-you-think-youre-smarter-than-a-cia-agent
A little taste:

For the past three years, Rich and 3,000 other average people have been quietly making probability estimates about everything from Venezuelan gas subsidies to North Korean politics as part of the Good Judgment Project, an experiment put together by three well-known psychologists and some people inside the intelligence community.

According to one report, the predictions made by the Good Judgment Project are often better even than those of intelligence analysts with access to classified information, and many of the people involved in the project have been astonished by its success at making accurate predictions.
Then read Philip Tetlock's books Expert Political Judgment
and Superforecasting: The Art and Science of Prediction.
I haven't! But I would like to become a super-forecaster.