Irrational exuberance about a single data point
A blog-style, skeptical post about cell phone outages and accident rates

A number of news outlets have commented on the linked news story about the drop in accidents in Abu Dhabi during the Blackberry outage last week. According to The National, accident rates in Dubai dropped 20% from their average, and the rate dropped 40% in Abu Dhabi.

The story headline, "Blackberry cuts made roads safer, police say," implies a causal link between reduced use of Blackberrys and accidents, supposedly because people weren't talking/texting while driving. Although I'm a firm believer that cell phone conversations and texting impair driving performance—the evidence for that is definitive—these data are wholly inadequate to draw any firm conclusions about traffic accidents and Blackberry outages.

To conclude that the outage actually made the roads safer, we need to know not just how much the rates were below average, but how variable those rates are for a typical 2-day period. That is, are the accident rates over these 2 days within the range of normal? According to the article, Dubai averages an accident every 3 minutes. That means in a 2-day period, the average number of accidents would be 960, and a 20% drop would be a drop of 192 accidents, or a total of 768 (that seems high to me, but I guess it's possible).
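The back-of-the-envelope numbers above can be checked directly. The only input is the article's "one accident every 3 minutes" figure; everything else follows from it:

```python
# Sanity check of the accident counts quoted above.
MINUTES_PER_DAY = 24 * 60

# The National: Dubai averages one accident every 3 minutes.
baseline = 2 * MINUTES_PER_DAY / 3   # expected accidents in a 2-day window

drop = 0.20 * baseline               # a 20% drop from that baseline
observed = baseline - drop           # the implied 2-day total

print(baseline, drop, observed)      # 960.0 192.0 768.0
```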

Presumably there is tremendous variability for any 2-day period. If so, then a drop of 20% might mean nothing whatsoever. The accident rate could average 960 every two days, but might range from 100 to 2500 accidents. If so, an accident rate of 768 would be entirely within the realm of normal. Without knowing both the mean and variance (and an estimate of how much the accident rate changes from day to day), we can't conclude that the Blackberry outage had any effect at all. Moreover, the news article explicitly stated that the journalists were not shown the actual accident statistics, so these were just reports from the police officials who are actively campaigning against cell phone use during driving (a good cause, of course).
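To make the variance point concrete, here is a toy calculation using two invented standard deviations (the article reports neither the variance nor the raw counts): the very same 20% drop is nearly a four-sigma event if 2-day totals barely fluctuate, but less than half a sigma if they swing widely.

```python
# Hypothetical illustration: the same 20% drop can be striking or
# unremarkable depending on the (unreported) day-to-day variability.
mean_2day = 960          # implied by "one accident every 3 minutes"
observed = 768           # the reported 20% drop

z_scores = {}
for sd in (50, 400):     # two made-up standard deviations of 2-day totals
    z_scores[sd] = (observed - mean_2day) / sd
    print(f"sd={sd}: z = {z_scores[sd]:.2f}")
# sd=50  -> z = -3.84  (far outside the normal range)
# sd=400 -> z = -0.48  (well within it)
```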

I think this example illustrates a much broader point. Even skeptical thinkers tend to apply their incisive critiques more to those whose views they oppose than to those whose views they support. We all have a bias to favor evidence that supports our perspective, and claims like this one inspire exuberance rather than skepticism (assuming you believe that cell phone distraction does cause accidents).

In the long run, such misplaced exuberance about belief-consistent claims can be detrimental to the very causes we support (like getting people to stop using their phones while driving). When weak evidence is held up in the public eye in support of a case, it provides an easy rhetorical attack for opponents. By showing how weak the touted evidence actually is, they can undermine what might otherwise be a strong case (based on other evidence). I think this sort of overly enthusiastic promotion of new, and sometimes weak, findings can undermine the public trust in scientific conclusions. (Churnalism about such reports amplifies the problem).

This finding might well turn out to be legitimate, and accident rates might well drop as a result of a Blackberry outage. My point is that the evidence as presented is wholly inadequate to draw ANY firm conclusions. If the variability in accident rates turns out to be tiny, and a 20% drop falls well outside the normal range, then this might well be a significant finding. But, if 20% drops happen all the time, this data point would be effectively meaningless — it would provide no compelling evidence that the Blackberry outage had any effect at all.


(H/T +Bearman Cartoons & +Stephen Ng for sharing various secondary write-ups of the linked story in The National.)
4 comments
 
Very true. It reminds me of a classic example of misinterpreting road accident data provided by D. Campbell. In addition to analyzing means and variance, one should also look at trends and compare the situation in Abu Dhabi and Dubai to similar places, although that won't be easy.
 
+Daniel Simons, I was just talking about this during the correlation = causation lecture I gave in my lab course yesterday. It is not only very challenging for some students to understand what correlations mean, but also what they are. Some think that a small noticeable trend or a single data point is a correlation. (I collected so many humorous examples of this for my students that I had to keep telling them I was not making them up.)
 
+Chryle Elieff -- Yeah. I do the same thing in the research methods part of my intro experimental class. Correlations just aren't intuitive for a sizable subset of students. Inferring cause appropriately is even harder. And it's particularly hard when the implied cause makes intuitive sense.
 
Thanks, Dan, an important reminder for well-meaning cause crusaders who want to stay on the right side of evidence-based strategic considerations.