Nate Silver, *The Signal and the Noise* (2012):
# Chapter 12
Worse yet, the beer is expensive: the high taxes on alcohol and pretty much everything else in Denmark help to pay for a green-technology infrastructure that rivals almost anywhere in the world. Denmark consumes no more energy today than it did in the late 1960s,28 in part because it is environmentally friendly and in part because of its low population growth. (By contrast, the United States' energy consumption has roughly doubled over the same period.29) The implicit message seemed to be that an energy-efficient future would be cold, dark, and expensive.
It is little wonder, then, that the mood at Copenhagen's Bella Center ranged far beyond skepticism and toward outright cynicism. I had gone to the conference, somewhat naively, seeking a rigorous scientific debate about global warming. What I found instead was politics, and the differences seemed irreconcilable.
Delegates from Tuvalu, a tiny, low-lying Pacific island nation that would be among the most vulnerable to rising sea levels, roamed the halls, loudly protesting what they thought to be woefully inadequate targets for greenhouse-gas reduction. Meanwhile, the large nations that account for the vast majority of greenhouse-gas emissions were nowhere near agreement.
President Obama had arrived at the conference empty-handed, having burned much of his political capital on his health-care bill and his stimulus package. Countries like China, India, and Brazil, which are more vulnerable than the United States to climate change impacts because of their geography but are reluctant to adopt commitments that might impair their economic growth, weren't quite sure where to stand. Russia, with its cold climate and its abundance of fossil-fuel resources, was a wild card. Canada, also cold and energy-abundant, was another, unlikely to push for any deal that the United States lacked the willpower to enact.30
The criticisms that Armstrong and Green make about climate forecasts derive from their empirical study of disciplines like economics in which there are few such physical models available49 and the causal relationships are poorly understood. Overly ambitious approaches toward forecasting have often failed in these fields, and so Armstrong and Green infer that they will fail in climate forecasting as well.
The goal of any predictive model is to capture as much signal as possible and as little noise as possible. Striking the right balance is not always so easy, and our ability to do so will be dictated by the strength of the theory and the quality and quantity of the data. In economic forecasting, the data is very poor and the theory is weak, hence Armstrong's argument that "the more complex you make the model the worse the forecast gets."
In climate forecasting, the situation is more equivocal: the theory about the greenhouse effect is strong, which supports more complicated models. However, temperature data is very noisy, which argues against them.
One of the more forthright early efforts to forecast temperature rise came in 1981, when James Hansen and six other scientists published a paper in the esteemed journal Science.72 These predictions, which were based on relatively simple statistical estimates of the effects of CO2 and other atmospheric gases rather than a fully fledged simulation model, have done quite well. In fact, they very slightly underestimated the amount of global warming observed through 2011.73
Hansen is better known, however, for his 1988 congressional testimony as well as a related 1988 paper74 that he published in the Journal of Geophysical Research. This set of predictions did rely on a three-dimensional physical model of the atmosphere.
Hansen told Congress that Washington could expect to experience more frequent "hot summers." In his paper, he defined a hot summer as one in which average temperatures in Washington were in the top one-third of the summers observed from 1950 through 1980. He said that by the 1990s, Washington could expect to experience these summers 55 to 70 percent of the time, or roughly twice their 33 percent baseline rate.
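Hansen's "hot summer" rule is simple enough to compute directly. Below is a minimal sketch in Python using made-up placeholder temperatures rather than Hansen's actual Washington data: a summer counts as hot if it exceeds the 66.7th percentile of the 1950–1980 baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data, not Hansen's: 31 baseline summers (1950-1980) with mean
# temperatures around 24 C, and 10 later summers (1990-1999) shifted warmer.
baseline = 24.0 + rng.normal(0.0, 0.8, size=31)
later = 24.5 + rng.normal(0.0, 0.8, size=10)

# Hansen's rule: a "hot" summer falls in the top one-third of the
# 1950-1980 distribution, i.e., above its 66.7th percentile.
threshold = np.percentile(baseline, 200 / 3)

hot = later > threshold
print(f"threshold: {threshold:.2f} C")
print(f"hot summers: {hot.sum()} of {hot.size} ({hot.mean():.0%} vs. a 33% baseline)")
```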
In fact, Hansen's prediction proved to be highly prescient for Washington, DC. In the 1990s, six of the ten summers75 qualified as hot (figure 12-6), right in line with his prediction. About the same rate persisted in the 2000s, and Washington experienced a record heat wave in 2012. In his paper, Hansen had also made these predictions for three other cities: Omaha, Memphis, and New York. Those results were more mixed and illustrate the regional variability of the climate. Just 1 out of 10 summers in Omaha in the 1990s qualified as "hot" by Hansen's standard, well below the historical average rate of 33 percent. But 8 out of 10 summers in New York did, according to observations at LaGuardia Airport.
Uncertainty in forecasts is not necessarily a reason not to act: the Yale economist William Nordhaus has argued, instead, that it is precisely the uncertainty in climate forecasts that compels action,86 since the high-warming scenarios could be quite bad. Meanwhile, our government spends hundreds of billions of dollars on economic stimulus programs, or initiates wars in the Middle East, on the basis of forecasts that are probably far more speculative than those in climate science.87
And in contrast to other fields, in which poor predictions are quickly forgotten, errors in forecasts about the climate are remembered for decades.
One common claim among climate critics is that there once had been predictions of global cooling and possibly a new ice age. Indeed, there were a few published articles that projected a cooling trend in the 1970s. They rested on a reasonable-enough theory: that the cooling trend produced by sulfur emissions would outweigh the warming trend produced by carbon emissions.
These predictions were refuted in the majority of the scientific literature.88 [88: Thomas C. Peterson, William M. Connolley, and John Fleck, "The Myth of the 1970s Global Cooling Scientific Consensus," Bulletin of the American Meteorological Society, September 2008, http://scienceblogs.com/stoat/Myth-1970-Global-Cooling-BAMS-2008.pdf] This was less true in the news media. A Newsweek story in 1975 imagined that the River Thames and the Hudson River might freeze over and stated that there would be a "drastic decline" in food production,89 implications drawn by the writer of the piece but not by any of the scientists he spoke with.
If the media can draw false equivalences between "skeptics" and "believers" in the climate science debate, it can also sometimes cherry-pick the most outlandish climate change claims even when they have been repudiated by the bulk of a scientist's peers.
What is the baseline in the case of the climate? If the critique of global warming forecasts is that they are unrealistically complex, the alternative would be a simpler forecast, one grounded in strong theoretical assumptions but with fewer bells and whistles.
Suppose, for instance, that you had attempted to make a climate forecast based on an extremely simple statistical model: one that looked solely at CO2 levels and temperatures, and extrapolated a prediction from these variables alone, ignoring sulfur and ENSO and sunspots and everything else. This wouldn't require a supercomputer; it could be calculated in a few microseconds on a laptop. How accurate would such a prediction have been?
In fact, it would have been very accurate: quite a bit better, actually, than the IPCC's forecast. If you had placed the temperature record from 1850 through 1989 into a simple linear regression equation, along with the level of CO2 as measured in Antarctic ice cores93 and at the Mauna Loa Observatory in Hawaii, it would have predicted a global temperature increase at the rate of 1.5°C per century from 1990 through today, exactly in line with the actual figure (figure 12-9).
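Here is a sketch of what that regression might look like, with synthetic stand-ins for the instrumental temperature record and the ice-core/Mauna Loa CO2 series. The numbers below are tuned so the example lands near the 1.5°C-per-century figure in the text; they are not the real data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the 1850-1989 record: a CO2 path rising from
# ~285 to ~353 ppm, and a noisy temperature anomaly that tracks it.
years = np.arange(1850, 1990)
co2 = 285.0 + 0.068 * (years - 1850) ** 1.4
temp = -0.4 + 0.0075 * (co2 - 285.0) + rng.normal(0.0, 0.1, years.size)

# Ordinary least squares on the pre-1990 data only: temp = a + b * co2.
b, a = np.polyfit(co2, temp, 1)

# Out-of-sample prediction: extend CO2 at roughly 2 ppm per year and
# read off the implied warming rate.
future_co2 = co2[-1] + 2.0 * np.arange(23)   # 1990 through 2012
pred = a + b * future_co2
rate = (pred[-1] - pred[0]) / 22 * 100

print(f"implied warming: {rate:.1f} C per century")
```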
Another technique, only slightly more complicated, would be to use estimates that were widely available at the time about the overall relationship between CO2 and temperatures. The common currency of any global warming forecast is a value that represents the effect on temperatures from a doubling (that is, a 100 percent increase) in atmospheric CO2. There has long been some agreement about this doubling value.94 From forecasts like those made by the British engineer G. S. Callendar in 193895 that relied on simple chemical equations, to those produced by today's supercomputers, estimates have congregated96 between 2°C and 3°C of warming from a doubling of CO2.
Given the actual rate of increase in atmospheric CO2, that simple conversion would have implied temperature rise at a rate of between 1.1°C and 1.7°C per century from 1990 through the present day. The actual warming pace of 0.015°C per year or 1.5°C per century fits snugly within that interval.
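The arithmetic behind that conversion is a one-liner: warming scales with the base-2 logarithm of the CO2 ratio, multiplied by the sensitivity per doubling. A minimal sketch, assuming CO2 near 355 ppm in 1990 and growth of about 1.7 ppm per year (roughly the average pace observed since then; a different growth assumption shifts the interval):

```python
import math

co2_1990 = 355.0                  # ppm, approximate 1990 level (assumed)
co2_2090 = co2_1990 + 1.7 * 100   # ppm after a century at ~1.7 ppm/yr

# Number of CO2 doublings implied over the century.
doublings = math.log2(co2_2090 / co2_1990)

# Warming per century for sensitivities of 2 C and 3 C per doubling.
for sensitivity in (2.0, 3.0):
    print(f"{sensitivity:.0f} C per doubling -> "
          f"{sensitivity * doublings:.1f} C per century")
```

Run as written, this prints roughly 1.1°C and 1.7°C per century, the interval quoted above.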
James Hansen's 1981 forecasts, which relied on an approach much like this, did quite a bit better at predicting current temperatures than his 1988 forecast, which relied on simulations of the climate.
The Armstrong and Green critique of model complexity thus looks pretty good here. Still, the success of the more basic forecasting methods suggests that Armstrong's critique may have won the battle but not the war. He is asking good questions about model complexity, and the fact that simple models do pretty well at predicting the climate is one piece of evidence in favor of his position that simpler models are preferable. But since those simple methods correctly predicted a temperature increase in line with the rise in CO2, they are also evidence in favor of the greenhouse-effect hypothesis.
This type of framing can sometimes be made in bad faith. For instance, if you set 1998, a year with record-high temperatures associated with the ENSO cycle, as your starting point, it will be easier to identify a cooling "trend." Conversely, the decadal "trend" from 2008 through 2018 will very probably be toward warming once it can be calculated, since 2008 was a relatively cool year. Statistics of this sort are akin to a stadium scoreboard optimistically noting that the shortstop has eight hits in his last nineteen at-bats against left-handed relief pitchers, ignoring the fact that he is batting .190 for the season.100
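The start-point effect is easy to reproduce. The toy simulation below superimposes an artificial 1998-style spike on a steady warming trend; a decade-long trend line anchored at the spike comes out systematically flatter than one anchored at a neutral year. All numbers here are illustrative assumptions, not the observed record.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic record: a steady 0.015 C/yr signal plus annual noise, with
# one artificial El Nino-style spike dropped onto 1998.
years = np.arange(1990, 2012)
temps = 0.015 * (years - 1990) + rng.normal(0.0, 0.08, years.size)
temps[years == 1998] += 0.3

def decadal_trend(start):
    """OLS slope, in C per decade, over the ten years beginning at `start`."""
    mask = (years >= start) & (years < start + 10)
    return np.polyfit(years[mask], temps[mask], 1)[0] * 10

# Anchoring at the spike depresses the fitted trend; a neutral start does not.
print(f"trend starting 1998: {decadal_trend(1998):+.2f} C/decade")
print(f"trend starting 1995: {decadal_trend(1995):+.2f} C/decade")
```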
Neither Armstrong nor Schmidt was willing to hedge very much on their predictions about the temperature trend. "We did some simulations from 1850 up to 2007," Armstrong told me. "When we looked one hundred years ahead it was virtually certain that I would win that bet."101 Schmidt, meanwhile, was willing to offer attractive odds to anyone betting against his position that temperatures would continue to increase. "I could easily give you odds on the next decade being warmer than this decade," he told me. "You want 100-to-1 odds, I'd give it to you."
The statistical forecasting methods that I outlined earlier can be used to resolve the dispute, and they suggest that neither Armstrong nor Schmidt has it quite right. If you measure the temperature trend one decade at a time, it registers a warming trend about 75 percent of the time since 1900, but a cooling trend the other 25 percent of the time. As the growth rate of atmospheric CO2 increases, creating a stronger greenhouse signal, periods of flat or cooling temperatures should become less frequent. Nevertheless, they are not impossible, nor are the odds anything like 100-to-1 against them. Instead, if you assume that CO2 levels will increase at the current pace of about 2 ppm per year, the chance that there would be no net warming over the course of a given decade would be about 15 percent102 according to this method.
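The flavor of that calculation can be reproduced with a small Monte Carlo simulation: superimpose year-to-year noise on a steady greenhouse trend and count how often the fitted trend over a decade comes out flat or negative. This is a sketch under assumed parameters, not Silver's actual method; the noise level below is chosen so the answer lands near 15 percent.

```python
import numpy as np

rng = np.random.default_rng(3)

TREND = 0.015   # C/yr greenhouse signal, i.e., 1.5 C per century
NOISE = 0.13    # assumed std dev of annual temperature noise, in C
N = 100_000     # number of simulated decades

years = np.arange(10.0)
x = years - years.mean()

# Simulate N decades of signal plus noise; each row's OLS slope is
# sum(x_i * y_i) / sum(x_i^2) after centering the year index.
temps = TREND * years + rng.normal(0.0, NOISE, size=(N, 10))
slopes = temps @ x / (x @ x)

print(f"P(flat or cooling decade): {(slopes <= 0).mean():.0%}")
```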
The street-fighter mentality, nevertheless, seems to be predicated on the notion that we are just on the verge of resolving our political problems, if only a few more people could be persuaded about the science. In fact, we are probably many years away. "There's a point when I come to the conclusion that we're going to have to figure out how to take the carbon out," Richard Rood told me in Copenhagen, anticipating that there was almost no way the 193 members of the United Nations would agree to mutually acceptable terms.
Meanwhile, the American public's confidence that global warming is occurring has decreased somewhat over the past several years.109 And even if there were 100 percent agreement on the effects of climate change, some states and some countries would make out better than others in any plan to mitigate carbon emissions. "We have some very progressive Democratic governors in coal states," I was told by the governor of Washington, Christine Gregoire. "Boy, are they nervous about all this."