
Below is a rough first draft of a talk about "The Future of Agile", which I'll be giving as a 30-minute plenary session at Agile Tour Lille next Thursday, and which I hope to improve iteratively over the next several months.

I'd love feedback on this, please.

==========


For the past year I've been traveling with a talk that looked at some of the history behind the Agile movement, including the forty-year history of Software Engineering itself, and such phrases as the "software crisis" or "software factories". I've been thinking and writing about our past quite a bit.

Now people are telling me I should write and talk about the future of Agile. So I did what anybody would do in that situation: I typed "future of agile" in Google and took a peek at what everyone else was saying. Here are some of the predictions I found in the first few hits. (See if you can spot the odd one out. The answer will come much later.)
- "Agile is here to stay."
- "The future is Lean."
- "We will see an end to Scrum hegemony."
- "There will be more science."
- "Teams will start focusing on business value."
- "Companies will no longer need Agile coaches."
- "Agile will transition from fad to something everybody must know."
- "Within 3-4 years Agile practices will make it into the PMBOK."

Wait, what? Is this the best we can do when pontificating about the "future of Agile"? Maybe my data collection was biased, but this reminds me most of how we now have fun looking at 1950s predictions for the "next century" - that is, the time we live in now: you know, the one that was supposed to have flying cars and free energy for all (and much-improved microwave ovens for the ladies).

Well, this much we know: prediction is hard. Just the other day, I was caught out in the rain when I popped out to buy a few things at the store: that's one kind of prediction failure.

Prediction failures also happen at the big-picture scale. Social scientist Philip Tetlock studied the predictions made by professional experts in political and economic matters. His findings are pretty devastating: these people, paid to forecast the future, do barely better than chance. One of the amusing tidbits is that events which experts consider "absolutely impossible" turn out to happen about 20 to 30% of the time. (These "impossible but true" events unfortunately often turn out to be disasters of some kind: what Nassim Nicholas Taleb, whose insights often echo Tetlock's, famously labels "black swans".) Why does that happen? Because there is no accountability for these "professional" experts: no one counts how often they get it badly wrong, or gives them a bad mark in their yearly performance review.

Another kind of prediction failure is one we are more familiar with: predicting project ship dates. Back in the early 90's, working on an interactive CD-ROM project - one of the first where I had some "senior" responsibility - I predicted we would be done in three months. It took well over a year and two rewrites. Every one of us, from beginners to veterans, has war stories of this type. They're endemic to the industry. Some reach legendary status, like Duke Nukem Forever, started in 1996 and released in 2011: possibly the record holder for the number and magnitude of schedule slips.

It's easy to verify that "prediction is hard" - not just for people on the street, but also for experts, and for us in our professional capacity. And yet our managers ask us for estimates, and demand that we treat these estimates as firm commitments! No wonder this prediction business causes so much suffering, especially for software professionals. Prediction is hard - it's one of those built-in limitations of the brain, or cognitive biases, that I've been talking about a lot lately but that "traditional" software engineering seems to assume don't exist.

So what can we do about this? One first thing we can do is separate the topic of prediction from that of uncertainty. The full quote about prediction, attributed to Yogi Berra, goes like this: "Prediction is hard, especially about the future." This may sound silly at first - until you realize that "predicting the past" has another name, namely "taking a guess".

Here is a question, to illustrate. Note that the first rule of this game is: you're not allowed to access Google! Not on your mobile phone or iPad or netbook right now, and not on your PC if you're reading this at home after my talk. No, the idea is precisely to note how panicky this question makes you feel.

OK, here goes: what's your best estimate of the year that Newton published his magnum opus, the book that contained his laws of motion and the law of gravity?

Probably most of you don't have the exact answer. And that is fine: one of the best things we can do to become more effective at predictions in general is to let go of our obsession with "right" answers; to learn to live with uncertainty. The nice thing about predicting the past, though, is that in many cases we can go back and check what the right answer was, and this affords us feedback that we can use to improve.

So instead of trying to get the "right" answer, let's try this. Think of an interval instead: no earlier than year X, no later than year Y. Now try to make this interval just big enough that you have one chance in two, or 50%, of capturing the right answer. This means it's OK if you get it wrong! If you're very sure of your idea, you'll choose a narrow interval; if not, you can make it very large. But not too large: if you say "from the start of the Christian era to the present day", you're almost totally sure that your interval contains the right answer. That's too easy; not informative enough.

The key idea I want to tell you about today is "calibration" - having an accurate assessment of your own uncertainty. If you were all well-calibrated, then about 50% of you would find that the correct answer falls within the interval you gave. (At this point in a conference session, I'd take a show of hands, and because I know from research and experience that humans are generally not well calibrated, I'd expect almost anything but 50%.) Similarly, if you as an individual are well-calibrated, then when you make guesses of this sort you will get them right about 50% of the time and wrong the rest of the time. If not, that means you can - and probably should - improve your prediction skills. In general we tend to be overconfident; our intervals are not large enough.
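To picture that show-of-hands tally concretely, here is a minimal Python sketch of my own (not part of the talk), scoring interval guesses against an unrelated question - the year the Eiffel Tower was completed, 1889 - with the guesses invented for illustration:

```python
def calibration_rate(intervals, truth):
    """Fraction of (low, high) guesses whose interval contains the true value."""
    hits = sum(1 for low, high in intervals if low <= truth <= high)
    return hits / len(intervals)

# Invented guesses (earliest year, latest year) for when the Eiffel Tower was completed.
guesses = [(1850, 1900), (1900, 1950), (1870, 1880), (1880, 1895), (1800, 2000)]
print(f"{calibration_rate(guesses, truth=1889):.0%} of intervals contained the answer;"
      " a well-calibrated audience would land near 50%")
```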

If you apply this back to the topic of software and task estimates, one thing you can easily do to improve on the usual kind of estimate, the "one-point estimate", is to add an uncertainty term to it. So instead of saying "this will take me 4 days or less" you'd say "I'm 50% sure this will take me 4 days or less". Now you have a target for measurement and improvement: you should be on time or early 50% of the time. If not, your estimates are poorly calibrated; in all likelihood (when was the last time you completed a task early, honestly?) they tend to be overoptimistic.
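As a rough sketch of that feedback loop (again mine, with invented numbers, not something the talk prescribes), the measurement is just a hit rate over past "50% sure" estimates:

```python
def on_time_rate(tasks):
    """Fraction of tasks finished within their '50% sure' estimate."""
    hits = sum(1 for estimated_days, actual_days in tasks if actual_days <= estimated_days)
    return hits / len(tasks)

# (estimated days, actual days) for past tasks - numbers invented for illustration.
history = [(4, 6), (2, 2), (3, 5), (1, 1), (5, 9), (4, 7)]
print(f"On time or early {on_time_rate(history):.0%} of the time;"
      " well below 50% suggests overoptimistic estimates")
```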

So the idea we're getting at here is that by quantifying our uncertainty, establishing a feedback loop - and then practicing a lot - we can improve at this kind of thing and compensate for our biases. For that, we need to make adequate predictions (a sketch of what such a prediction record might look like follows the list):

- they must be like unit tests: specific, unambiguous ("yes or no"), or expressed in quantitative terms
- they must have specific expiration dates, ranges, etc.
- they must come with a percentage, or a number between 0 and 1, representing our uncertainty
- they must be recorded when we make them, and all of them must be judged: otherwise we'll forget the misses and selectively remember the hits
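One way to hold these four conditions together is as a small record that can later be judged. Here is a hypothetical Python sketch of mine; the field names and example values are invented, not taken from any of the tools listed further down:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Prediction:
    statement: str                    # specific, yes-or-no or quantitative claim
    expires: date                     # when the prediction can be judged
    confidence: float                 # the number between 0 and 1 representing our uncertainty
    recorded: date                    # written down at the moment we made it
    came_true: Optional[bool] = None  # judged later; every record must be judged

def hit_rate(predictions):
    """Share of judged predictions that came true, to compare against stated confidence."""
    judged = [p for p in predictions if p.came_true is not None]
    return sum(p.came_true for p in judged) / len(judged) if judged else None

# Example record, with invented dates and confidence:
example = Prediction(
    statement="By 2015, at least three major French universities run Agile programs",
    expires=date(2015, 12, 31),
    confidence=0.6,
    recorded=date(2012, 1, 1),
)
```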

It's hard to get all four of these conditions met. The predictions I quoted at the start were written down, but vague and ambiguous, and carried no certainty judgments. And the predictions we make about specific events - even implicit ones, such as going out despite a chance of rain - we almost always forget to record.

Here are some tools for playing with guesses, or recording predictions, or measuring your performance with predictions:

- PredictionBook: http://predictionbook.com/
- Guessum: http://yootles.com/calibration/guessum/index.php
- CPA: http://calibratedprobabilityassessment.org
- Intrade markets: http://intrade.com/
- Inkling markets: http://home.inklingmarkets.com/
- The Good Judgment tournament: http://goodjudgment.info

I encourage you to play with these; I have. But I have also found them all lacking in several ways - in particular, none of them is specifically aimed at software professionals who want to train these skills and apply them to software projects. I'm currently in the planning stages of a new site, and you can register to learn more about it here:

- Uncertaintize : http://uncertaintize.com

Now I need to fulfill my contract by looping back to the original topic of Agile. I happen to think that the next important topic in Agile will be education. So, one prediction meeting the above criteria might be:

- by 2015 at least three major universities in France will have programs with 30% of the content covering items from one accepted list of Agile practices: Shore's "Art of Agile" or the Agile Institute's own "Guide"

However, I will make only that one prediction, and leave it up to you, as an exercise, to make your own predictions and record them in an appropriate registry.

What I really want to leave you with is a twist on a well-known saying attributed to Alan Kay:

"The best way to predict the future is to invent it."

And it's one I totally agree with, but it is incomplete. By insisting that our statements about the future be more falsifiable; by accepting that the name of the game isn't to "get the right answer" but to arrive at an honest assessment of our ignorance and our uncertainty; by closing the feedback loop on our predictions, we can improve our models of the world around us, and this in turn makes us more effective at choosing the futures we like. So I want to add this to it:

"One great way to steer the future is to predict it."

Now go forth and predict.