I had a meeting this morning with someone from the international relations department who is designing a curriculum on climate change for use in LA public high schools. She wanted me to clarify some science material for her, and asked me a couple of questions to which I didn't have an answer off the top of my head. I spent the afternoon doing some research and calculation, and came up with the figures she wanted. It's pretty interesting stuff.
She wanted to know: (1) how much electricity LA uses, and how that compares to the output of different kinds of power plants; (2) exactly what emissions targets we should be aiming for at the Paris talks, and why; (3) how the carbon cycle factors into emissions estimates. Here's what I came up with:
In 2013 the city of Los Angeles used 68,000 million kilowatt-hours of electricity. Assuming the load was spread more or less evenly over the 8,760 hours in a year, that works out to an average of about 7,800,000 kWh per hour, so the average instantaneous load on the city's power supply was 7,800,000 kW, or 7,800 megawatts. That's a lot of watts. To put that in perspective, the average incandescent light bulb draws about 60 W, so in 2013 LA used enough power to keep about 130,000,000 light bulbs lit 24 hours a day.
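If you want to double-check that arithmetic (or hand it to a math class), here's a minimal Python sketch of the same calculation; the only inputs are the figures quoted above.

```python
# Average load on LA's power supply, from total 2013 usage.
annual_use_kwh = 68_000e6        # 68,000 million kWh used in 2013
hours_per_year = 8_760

avg_load_kw = annual_use_kwh / hours_per_year   # average instantaneous load, in kW
avg_load_mw = avg_load_kw / 1_000               # ...and in MW

bulb_watts = 60                                  # typical incandescent bulb
bulbs = avg_load_kw * 1_000 / bulb_watts         # convert kW to W, then divide

print(f"Average load: {avg_load_mw:,.0f} MW")    # ~7,800 MW
print(f"Equivalent 60 W bulbs: {bulbs:,.0f}")    # ~130,000,000
```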
So how does that stack up against power plants? The recently closed nuclear power plant near San Diego had two reactors that could each put out 1,127 MW at full capacity, so running flat out it supplied 2,254 MW, or about 29% of LA's needs. For scale, the reactor at Three Mile Island only put out 800 MW, just over 10% of LA's needs. Renewable energy plants span a similarly wide range. The Desert Sunlight Solar Farm in the Mojave Desert has a peak capacity of 550 MW when conditions are perfect (a sunny summer day near noon). In contrast, the Avenal solar facility in Kings County has a peak output of only 45 MW. The San Gorgonio wind farm 45 minutes east of LA can put out 615 MW at peak (again, given the right conditions). The big problem with both solar and wind is that they're very condition-dependent, which makes it difficult to rely on them to power a city. Part of why Elon Musk's battery technology is so exciting is that it makes it easier for renewable plants to "bank" excess energy produced during optimal conditions, then parcel it out later when demand outstrips supply (e.g. at night for a solar plant).
Output of natural gas facilities in California is roughly comparable to that of mid-sized nuclear plants. AES Alamitos in Long Beach runs on natural gas and has a peak output of 2,000 MW. Most of our local power comes from natural gas (which is cleaner than coal, though not totally clean). Large coal-fired plants (of which California has none) are in the same ballpark, running at about 2,000 MW.
There are some interesting outliers. The Three Gorges Dam hydroelectric station in China has a staggering capacity of 22,000 MW--enough to power all of Los Angeles almost three times over, just by itself.
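To put all of those plants on one scale, here's a quick Python sketch using the peak capacities quoted above (peak figures flatter solar and wind in particular, since real-world output is usually well below peak):

```python
# Peak capacity of each plant as a share of LA's ~7,800 MW average load.
la_avg_load_mw = 7_800

plants_mw = {
    "Closed nuclear plant near San Diego (2 reactors)": 2_254,
    "Three Mile Island reactor": 800,
    "Desert Sunlight Solar Farm": 550,
    "Avenal solar facility": 45,
    "San Gorgonio wind farm": 615,
    "AES Alamitos (natural gas)": 2_000,
    "Three Gorges Dam": 22_000,
}

for name, peak_mw in plants_mw.items():
    share = peak_mw / la_avg_load_mw
    print(f"{name}: {peak_mw:,} MW ({share:.0%} of LA's average load)")
```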
OK, now that we're comfortable talking about watts, what about the different emissions scenarios? The RCP scenarios given by the IPCC (and depicted on those handouts you have) are named for the expected increase in what climate scientists call radiative forcing by the year 2100, measured relative to pre-industrial levels. RCP 8.5, for instance, is the scenario in which the radiative forcing in 2100 is 8.5 W per square meter higher than it was before industrialization. Radiative forcing is just what it sounds like: it's the amount of energy (watts) absorbed by the earth over a certain area (square meters). An increase in radiative forcing of 8.5 W/m^2 is the equivalent of hanging an 8.5 watt light bulb over every square meter in the world (that's a lot of energy!), except the increased forcing comes not from more light, but from a stronger greenhouse effect.
The change in radiative forcing as we add more CO2 to the atmosphere is easily calculated, and relies on pretty basic radiative physics. However, we're more interested in the change in temperature than the change in radiative forcing--we want to know how much warmer we can expect the planet to be as a result of a given increase in radiative forcing. That calculation is much less straightforward, and is the source of a significant amount of uncertainty in climate predictions and policy. The relevant value is called the "climate sensitivity," and it reflects how much warming we can expect given an increase in radiative forcing of about 3.7 W/m^2, which corresponds roughly to a doubling of atmospheric CO2 from pre-industrial levels. This value is notoriously difficult to pin down, because the global climate is a spectacularly complex system. There are many interlinked feedback loops that can make things either warmer or cooler than we'd otherwise expect, given a certain change in radiative forcing.
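For the "easily calculated" part, the commonly used simplified expression is ΔF ≈ 5.35 * ln(C/C0) W/m^2 (Myhre et al. 1998), where C0 is the starting CO2 concentration; plugging in a doubling recovers the ~3.7 W/m^2 figure. A minimal sketch:

```python
import math

def co2_forcing_wm2(c_ppm, c0_ppm):
    """Change in radiative forcing (W/m^2) from a change in CO2 concentration,
    using the simplified expression delta-F ~= 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# A doubling of CO2, e.g. from the pre-industrial ~280 ppm to 560 ppm:
print(round(co2_forcing_wm2(560, 280), 2))   # ~3.71 W/m^2
```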
For example, a small increase in radiative forcing might raise the global average temperature by (say) two degrees C because of an increased greenhouse effect. However, this small temperature increase might also significantly reduce the amount of land that's covered in ice, exposing the bare rock (or water) beneath. Since ice is white, it reflects a lot of energy that hits it straight back to space without absorbing it (climate people would say it has a high albedo). Water and rock are much darker (i.e. have a lower albedo), and so absorb more of the incoming energy, warming up. This can potentially lead to "runaway" warming in which a small temperature increase melts some ice, which raises the temperature a little more, which melts more ice, which raises the temperature more, and so on. These "feedbacks" are among the most difficult things to model accurately in climate science, and are a source of tremendous uncertainty; we just don't know exactly how they'll all interact with one another.
The end result is that our best estimates right now for climate sensitivity are given as a range. The most recent IPCC report (AR5) pegs the value at 1.5-4.5 degrees C as its "likely" range--meaning at least a 66% probability (a Bayesian probability estimate, which might be a good hook for more advanced math classes to dig into). It's considered highly unlikely to be below 1 degree C or above 6 degrees C. That still leaves a huge range of values, though, and a 4.5 degrees C warmer world is likely to look very different than a 1.5 degrees C warmer world. The best consensus estimate is that the value is probably somewhere between 2 and 3 degrees C. This is part of why climate science is so hard.
So, if the climate sensitivity is about 2.5 degrees C and we want to limit warming to about that much, we need to avoid any scenario that adds much more than about 4 W/m^2 of radiative forcing. The only RCP below that is the lowest one, RCP 2.6. The next highest, RCP 4.5, is pretty close. This gives us a concrete target: we want to avoid going over about 550 ppm by volume of CO2. Right now we're at about 398 ppm, giving us about 152 ppm of breathing room (so to speak). An increase of 1 ppm corresponds to an increase of about 7.8 gigatonnes of CO2 in the atmosphere, so staying within that 152 ppm means avoiding adding more than about 1,186 gigatonnes of net CO2 to the atmosphere.
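Here's the same ppm-to-gigatonne arithmetic as a short sketch, using only the numbers above:

```python
# Headroom between current CO2 levels and the ~550 ppm target, in Gt of CO2.
target_ppm = 550          # rough ceiling for keeping added forcing near ~4 W/m^2
current_ppm = 398         # approximate current concentration
gt_co2_per_ppm = 7.8      # ~7.8 Gt of atmospheric CO2 per 1 ppm increase

headroom_ppm = target_ppm - current_ppm         # 152 ppm
headroom_gt = headroom_ppm * gt_co2_per_ppm     # ~1,186 Gt of net CO2

print(f"{headroom_ppm} ppm of headroom, or about {headroom_gt:,.0f} Gt of net CO2")
```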
OK, last thing. How does "net CO2" work? As you said, not all of the carbon we emit actually sticks around in the atmosphere. Some of it gets dissolved into the ocean, some of it gets "fixed" by plants using it for photosynthesis, some of it reacts chemically with exposed rock, and so on. That's the carbon cycle. Again, putting precise numbers on this process is difficult because of feedback mechanisms. More CO2 in the atmosphere, for instance, can boost plant growth resulting in more CO2 being pulled from the atmosphere. On the other hand, the amount of CO2 that can be held by the planet's oceans decreases as the oceans warm up (this is why leaving a bottle of soda in a hot car can make the bottle explode).
Thankfully, most of these calculations already include some basic assumptions about the carbon cycle. If it remains stable from year to year, we can basically just ignore it (yay algebra), since its overall impact is zero. The number I gave above (1,186 Gt) is calculated with the carbon cycle already accounted for: if we want to stay below 550 ppm CO2, we should avoid emitting more than about another 1,000 Gt of CO2. For comparison, we've added about 1,900 Gt since the dawn of human civilization, and at our current rates we'll add another 1,000 Gt in about 20 years. This provides a concrete target for policy: we should try very, very hard to curb our emissions before we've emitted another 1,000 Gt of CO2, and should endeavor to transition away from fossil fuels entirely before we hit that number.
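To make that timeline concrete for students, here's a sketch; the 40-50 Gt per year range is my own rough assumption about current global CO2 emissions, not a figure from the material above:

```python
# How long a ~1,000 Gt CO2 budget lasts at a constant annual emission rate.
remaining_budget_gt = 1_000

# Illustrative assumption: global emissions somewhere around 40-50 Gt CO2 per year.
for annual_gt in (40, 50):
    years = remaining_budget_gt / annual_gt
    print(f"At {annual_gt} Gt/yr, the budget lasts about {years:.0f} years")
```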
Even 550 ppm is not without risks, though, for the reasons I discussed above. It's possible that such a concentration might bring us to 4.5 degrees C of warming (or even more). It's similarly possible that even 2 degrees C of warming would have disastrous consequences that we haven't foreseen. There's a tremendous amount of uncertainty endemic to this field, which (as Oreskes and Conway rightly point out) is part of why we've been paralyzed on the policy front. We're waiting for relatively concrete, certain answers before we act. Unfortunately, there are unlikely to ever be certain answers here.
#climatechange #climate #globalwarming