Post has attachment
These are bad predictions. They're vague and dramatic, which makes them hard to test or refute but easy to spread. Moreover, they tell charming and dramatic stories about a future that isn't possible in just 12 months, because they presume new infrastructure and a society capable of universally adopting new technologies.

The annoying thing is that there will be many rigged demos that look superficially like these predictions, so I expect the Nostradamus effect (where people retrospectively change their interpretations of your predictions) to kick in.


Post has shared content
2017 prediction:

Among the "anti-regressive" pundits I follow, some shamelessly accommodated or even courted Donald Trump / his followers / his election, like Dave Rubin or Carl "Sargon of Akkad" Benjamin. Others, like Sam Harris, have clearly opposed Trump, even at the cost of supporting Hillary Clinton while despising her.

Prediction: this year won't be over before the people in the first group are forced to either make a big about-face – or maybe come out of the closet as conservatives. Meanwhile the people in the second group will be just fine, no 'spainin' needed.

Post has shared content
The IEEE Computer Society is honest enough to revisit its predictions for 2016.
IEEE Computer Society Grades Its 2016 Technology Predictions – Gets a B+
Last year the IEEE Computer Society released its 2016 technology predictions, bullish on the future of 5G, virtual and augmented reality, nonvolatile memory, cyber-physical systems, data science, capability-based security, and more. Now we've graded our future view; check out how we did, with our hits and misses:

Post has attachment
Posting this to remind myself to check on this article on an annual basis to see if publishers really do "take it from here"


Post has attachment
“You can point to iPhone sales, which this year will decline for the first time since the 2007 launch of Apple’s crown jewel.” @austinfrank


Post has attachment
This is a large collection of assorted predictions and emerging technologies.

Post has attachment
#prediction202012 The Parliamentary Labour Party has split into at least 2 distinct parties.

Post has shared content
Metaculus - a prediction website

Metaculus is a website where you can ask about future events and predict their probabilities.  The "wisdom of crowds" says that this is a pretty reasonable way to divine the future.  But some people are better predictors than others, and this skill can be learned.  Check it out:

Metaculus was set up by two professors at U.C. Santa Cruz.  Anthony Aguirre, a physicist, is a co-founder of the Foundational Questions Institute, which tries to catalyze breakthrough research in fundamental physics, and the Future of Life Institute, which studies disruptive technologies like AI.  Greg Laughlin, an astrophysicist, is an expert at prediction across timescales, from the milliseconds relevant to high-frequency trading to the ultra-long-term stability of the solar system.

I've asked and answered a few questions there.  It's fun, and it will get more fun as more people take it seriously!   Here's some stuff from their latest report:

Dear Metaculus Users,

We recently logged our 10,000th prediction. Not quite Big Data (which will take lots more growth), but we’re making progress! With this milestone passed, it seems like a good time to share an overview of our results.
First, the big picture. This can be summarized with a single histogram showing the distribution of the first 10,042 predictions on our first 146 questions. Unambiguously, the three most popular predictions are 1%, 50% and 99%, with spikes of varying strength at each multiple of 5%. There’s a definite overall skew toward lower percentages. This stems in part from the fact that provocative low-probability questions are most naturally worded so that the default outcome is negative, e.g., Question: Will we confirm evidence for megastructures orbiting the star KIC 8462852? (Answer: No.)

The histogram also makes the point that while 99% confidence, the equivalent of complete confidence, is very common, it’s very rare that anyone is ever 98% sure about anything. One takeaway from the pileup at 1% and 99% is that we could use more possible values there, so we plan to introduce an expanded range, from 0.1% to 99.9%, soon; but as cautioned below, be careful in using it. Excluding the 1% and 99% spikes and smoothing a bit, the prediction distribution turns out to be a pretty nice Gaussian, illustrating the ubiquitous effect of the central limit theorem.
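As a rough sketch, the spike-spotting described above amounts to tallying predictions into whole-percentage bins. This is an illustration only, not Metaculus's actual code; the function name and sample data are made up:

```python
from collections import Counter

def prediction_histogram(predictions):
    """Tally predictions (given as fractions in [0, 1]) into whole-percentage bins."""
    return Counter(round(p * 100) for p in predictions)

# Toy data: spikes at 50% and 1% show up immediately in the counts.
preds = [0.01, 0.50, 0.50, 0.50, 0.99, 0.35, 0.01]
hist = prediction_histogram(preds)
# hist[50] == 3, hist[1] == 2, hist[99] == 1
```

With real data, plotting these counts over bins 1-99 would reproduce the spikes at 1%, 50%, 99%, and the multiples of 5% described above.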

The wheels of Metaculus grind slowly, but they grind very fine. Almost 80% of the questions posed on the site are still either active (open) or closed (pending resolution). We are starting, however, to get meaningful statistics on the questions that have resolved to date, a collection spanning a wide range of topics (from AlphaGo to LIGO and from VIX to SpaceX).

We’ve been looking at different metrics to evaluate collective predictive success. A simple approach is to chart, for each percentage bin, the fraction of outcomes that actually occurred, aggregating over all of the predictions in that bin. In the limit of a very large number of optimally calibrated predictions on a very large number of questions, the result would be the straight line shown in gold in Figure 2 below. The aggregation produced by the Metaculus user base tracks this optimal line quite well. Error bars span the 25th to 75th percentiles, based on bootstrap resampling of the questions. The only marginally significant departure from the optimal result comes at the low end: as a whole, the user base has been slightly biased toward pessimism, assigning a modest overabundance of low probabilities to events that actually wound up happening. In particular, the big spike in the 1% bin in Figure 1 isn’t fully warranted. (The same is somewhat true at 99%: those predictions have come true only 90% of the time.) Take-away: if you’re inclined to pull the slider all the way to the left, or even right, give it a second thought...
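The calibration chart described here can be sketched in a few lines: bucket predictions by stated probability, then compare each bucket against the fraction of its events that actually happened, with a bootstrap band for uncertainty. This is a hedged illustration under my own assumptions (function names, 10 bins, resampling pairs rather than questions), not Metaculus's implementation:

```python
import random

def calibration_curve(predictions, outcomes, n_bins=10):
    """Observed frequency of 'yes' outcomes per predicted-probability bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(predictions, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # 0.0-0.1 -> bin 0, ..., 0.9-1.0 -> bin 9
        bins[i].append(y)
    return [sum(b) / len(b) if b else None for b in bins]

def bootstrap_band(predictions, outcomes, n_resamples=500, seed=0):
    """25th-75th percentile band per bin, by resampling (prediction, outcome)
    pairs with replacement and recomputing the calibration curve each time."""
    rng = random.Random(seed)
    pairs = list(zip(predictions, outcomes))
    curves = []
    for _ in range(n_resamples):
        resample = [pairs[rng.randrange(len(pairs))] for _ in pairs]
        curves.append(calibration_curve([p for p, _ in resample],
                                        [y for _, y in resample]))
    band = []
    for i in range(10):
        vals = sorted(c[i] for c in curves if c[i] is not None)
        band.append((vals[len(vals) // 4], vals[3 * len(vals) // 4]) if vals else None)
    return band
```

Perfect calibration means the curve sits on the diagonal: events given ~10% end up happening about 10% of the time, and so on.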

It has been demonstrated that the art of successful prediction is a skill that can be learned. Predictors get better over time, so it’s interesting to look at the performance of the top predictors on Metaculus, defined as users with a current score greater than 500. The histogram of predictions for this subset of top users shows some subtle differences from the histogram of all predictions. The top predictors tend to be more equivocal: the 50% bin is still highly prominent, whereas the popularity of 1% votes is strongly diminished.

I recently predicted - not on Metaculus - that Hillary Clinton has a 99% chance of getting the Democratic nomination.  Maybe I should have said 98%.  But I definitely should put my prediction on Metaculus!  This could develop into a useful resource.
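To see why the report warns against sliding to 99%, it helps to look at a proper scoring rule. The logarithmic score below is a standard choice I'm using for illustration; I'm not claiming it is Metaculus's exact scoring formula:

```python
import math

def log_score(p, outcome):
    """Logarithmic score: log of the probability assigned to what
    actually happened. Higher (less negative) is better."""
    return math.log(p) if outcome else math.log(1.0 - p)

# A failed 99% call costs noticeably more than a failed 98% call:
penalty_99 = log_score(0.99, False)   # log(0.01), about -4.61
penalty_98 = log_score(0.98, False)   # log(0.02), about -3.91
```

Under this rule, moving from 98% to 99% adds log 2 (about 0.69) of extra loss when the event fails to happen, while the reward when it does happen barely changes, which is exactly why a pileup at 99% is risky.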

If you want to become a "super-forecaster", you need to learn about the Good Judgment Project.  Start here:

A little taste:

For the past three years, Rich and 3,000 other average people have been quietly making probability estimates about everything from Venezuelan gas subsidies to North Korean politics as part of the Good Judgment Project, an experiment put together by three well-known psychologists and some people inside the intelligence community.

According to one report, the predictions made by the Good Judgment Project are often better even than those of intelligence analysts with access to classified information, and many of the people involved in the project have been astonished by its success at making accurate predictions.

Then read Philip Tetlock's books Expert Political Judgment and Superforecasting: The Art and Science of Prediction.  I haven't!   But I would like to become a super-forecaster.
