Wednesday, 30 April 2008

Why doesn't it get hotter every year?

One of the most common questions encountered in discussions on climate change, especially from those who don't accept the science, is, "Why doesn't it get hotter every year?" or a variant such as, "Why hasn't it warmed since 1998?" (pick your favorite year). The real question underlying all of this is: "If there is a warming trend of so many degrees per century, why can't we see that exact trend if we compare any two years, or any sequence of, say, ten years?"

The answer is pretty simple. The climate change trend is a long-term effect. If it warms by 3°C per century, that averages out to 0.03°C per year, an amount smaller than the accuracy limits of the measurement techniques. What's more, even without a significant global warming effect, year-to-year variation much bigger than this occurs. The critical difference is that ordinarily, that sort of variation is short term, and the highs and lows pretty much cancel out. It's only if there's a major change in the atmosphere (greenhouse gases causing warming, volcanic output causing cooling) or in solar input to the system (change in the earth's orbit, change in solar activity) that you break out of this sort of random fluctuation -- and only a really big external change causes a real climate shift (like into or out of an ice age).

One can of course (and should, if doing serious scientific studies) apply sophisticated data analysis techniques to the problem, but what I aim to do here is to give a feel for the difference between short-term and long-term effects, using simple approaches you can apply yourself if you know how to drive a spreadsheet. All you need is the ability to plot points and to fit a trend line. The other critical thing is to ask the spreadsheet (or graphing or stats package) to give you the r² value for the trend line, which is a measure of how much you can read into it.
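If you'd rather script this than drive a spreadsheet, here's a minimal sketch in Python of fitting a trend line and reading off the r² value. The toy numbers below are made up; paste in whatever series you want to test.

```python
import numpy as np
from scipy import stats

# Toy data: years and temperature anomalies in degrees C.
# These numbers are invented; substitute your own series.
years = np.arange(1850, 1860)
anomalies = np.array([-0.37, -0.22, -0.23, -0.27, -0.25,
                      -0.30, -0.32, -0.47, -0.39, -0.28])

# Ordinary least-squares trend line: slope is in degrees per year.
fit = stats.linregress(years, anomalies)
print(f"trend: {fit.slope:+.4f} deg/year ({100 * fit.slope:+.2f} deg/century)")

# r^2 tells you how much of the variance the trend line explains.
print(f"r^2: {fit.rvalue ** 2:.4f}")
```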

I took a look at the first 50 years of the HadCRUT3 dataset, from 1850 to 1899, before significant CO2 emissions could have caused a greenhouse effect, and plotted the annual averages (HadCRUT3 uses anomalies relative to a 1961-1990 base period) to see if I could discern a trend. Over this time, data from Oak Ridge National Laboratory show that annual emissions of CO2 increased from 54 to 507 million tonnes (measured as carbon); even that 1899 figure is less than 10% of current annual emissions. The effect of this increase was that atmospheric CO2 rose by less than 4%, not enough to have any measurable effect on temperature. So let's take this period as indicative of "natural variability". Solar activity over the period was declining, probably not by enough to have a significant effect on climate.

What can we see from this picture? Aside from the many short-term bumps, the trend isn't too clear. The fitted trend line shows a small upwards drift (an annual trend of 0.0008°C, or 0.08° per century), but the crucial thing is the r² value, which shows that the trend isn't very strong. Statisticians interpret r² as an indication of how much of the variance in the data can be explained by the trend line; in this case, a bit over 1%, so the trend is not significant: you would expect an r² value in this ballpark for random numbers.
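To get a feel for what "in this ballpark for random numbers" means, here's a sketch that fits trend lines to thousands of purely random 50-point series and reports the typical r². The noise level is an arbitrary assumption, and real annual temperatures have some year-to-year correlation, so treat it only as a rough baseline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(50)

# Fit a trend line to many random "temperature" series and collect
# the r^2 values. The 0.15 deg spread is an arbitrary assumption.
r_squared = []
for _ in range(10_000):
    noise = rng.normal(0.0, 0.15, size=years.size)
    fit = stats.linregress(years, noise)
    r_squared.append(fit.rvalue ** 2)

print(f"median r^2 for a random 50-year series: {np.median(r_squared):.4f}")
```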

What has the trend been over the last 50 years (1958-2007)? Let's plot that graph as well and see how it looks. Note that in all graphs, I keep the vertical scale the same (-0.6 to 0.6) so the graphs are comparable. It is important to get this right: I've seen many attempts at comparing values that use inconsistent scales. I do however adjust the horizontal scale when dealing with shorter sequences of years: be careful not to compare these with the longer sequences.



How does the recent data compare with the picture of 1850-1899? The trend is now much clearer. The r² value is 0.7347, meaning that the trend line is a very good fit to the data. The annual trend is an increase of 0.0121°. I haven't taken into account here that the trend may be accelerating, which a simple linear regression over the entire period misses, so do not get too excited that the trend is on the low side of IPCC projections.

It is a simple matter to add an annual increase of 0.0121° to each temperature record from 1850 to 1899, to see what the resulting picture looks like relative to today's. Why is this interesting? Because if I do this, I know there is a trend of 1.21° per century over and above natural variability -- and this figure makes the trend match that of the last 50 years. This lets us explore the argument that artificial warming behaves like a slow, long-term adjustment to natural variability which would otherwise be pretty much random. We will also be able to see whether it's possible to discern this trend at all scales, answering the question posed at the start. So let's see how the adjusted nineteenth century data looks.
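In a script, the adjustment is a single line. Here's a sketch; the placeholder series below is random noise standing in for the real 1850-1899 anomalies.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1850, 1900)

# Random placeholder for the real 1850-1899 HadCRUT3 anomalies;
# in practice, paste in the actual values.
anomalies = rng.normal(-0.3, 0.15, size=years.size)

TREND = 0.0121  # degrees per year, i.e. 1.21 degrees per century

# Add 0.0121 deg for each year elapsed since the start of the series.
adjusted = anomalies + TREND * (years - years[0])
```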




Remarkably (or perhaps not), the adjusted nineteenth century picture has statistical properties very similar to the modern picture. The rate of increase is the same because I made it so. The r² value is 0.7613, showing a degree of fit similar to that of the modern (unaltered) data.

Is that cheating? Yes and no. I forced the old data to look like the new data. But the point is that the old data looked pretty close to random. That's the nature of weather. You get hot days, you get cold days. You get a heat wave, you get snow in summer. Looked at short-term, the weather is hard to predict; look at it over a span of years, and it doesn't look much different to random, if you filter out seasons (as you do with an annual average). So adding a linear trend onto the 1850-1899 data is not too different to the predicted effect of adding global warming onto natural variability.

So what of the talk we hear lately of how global warming has flatlined, or of how temperatures have dropped over the last 10 years? Let's look for a comparable case in the adjusted nineteenth century data. If we pick a ten-year period of the adjusted data where the graph looks relatively flat, 1880-1889, the trend is 0.008° per year, with r² = 0.1227: somewhat better than random, but not a convincing fit to the data.



Remember, this data has been explicitly constructed so that there is a hundred-year warming trend of 1.21°, and we are testing the argument that if such a trend exists, you should be able to find it in any sequence of years.

Now move the trend period back two years, to 1878-1887, and what do you get? A decreasing trend of 0.0166° per year, or 1.66° per century: the opposite of the trend I artificially added to this data, and of greater magnitude! What's more, r² is now 0.2947, a significantly more convincing level than for the 1880-1889 (adjusted) trend.

Remember, this is data that was explicitly constructed by adding a trend of increasing temperature on top of data that was statistically close to random.

If you take any period of ten years, you can see similar effects: most trends go up to varying degrees, some are flat, a few go down.
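You can check this yourself by fitting a trend line to every ten-year window of the adjusted series. A sketch, reusing the years and adjusted arrays from the adjustment step above:

```python
from scipy import stats

def window_trends(years, values, window=10):
    """Fit a trend line to each consecutive window of the series and
    return (start_year, slope_per_year, r_squared) tuples."""
    out = []
    for i in range(len(years) - window + 1):
        fit = stats.linregress(years[i:i + window], values[i:i + window])
        out.append((years[i], fit.slope, fit.rvalue ** 2))
    return out

# years and adjusted are assumed to come from the adjustment sketch above.
for start, slope, r2 in window_trends(years, adjusted):
    print(f"{start}-{start + 9}: {slope:+.4f} deg/year, r^2 = {r2:.3f}")
```

With a random placeholder standing in for the real data, your exact numbers will differ from the ones I quote, but the spread -- mostly up, some flat, a few down -- is the same kind of picture.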

So what can we conclude from all this?

If we have a data series whose natural variability is significantly larger than a newly added source of artificial change, even if that change is applied consistently over a long time, we cannot expect to discern the change accurately by looking at a short time sequence. It will, however, become clear if you keep looking for long enough. In other words, anyone demanding that temperatures increase every year -- or even over a period of ten years -- is not testing the theory of anthropogenic global warming, but their own understanding of data analysis.

Finally, let's apply one more data analysis technique, familiar to economists, at least when they do market analysis, even if some forget it when they consider climate change: the moving average. The way this works is that at each data point you plot the average of the past n years, smoothing out short-term fluctuations. NASA's GISS temperature graphs generally show a 5-year average. Here, I'll use a 10-year average to smooth out the bumps even more.
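A trailing average is just as easy to script. Here's a sketch of a 10-year version you can apply to any of the anomaly arrays from the earlier sketches:

```python
import numpy as np

def trailing_average(values, n=10):
    """Average of the past n values at each point; the first n - 1
    points have no full window and come back as NaN."""
    out = np.full(len(values), np.nan)
    out[n - 1:] = np.convolve(values, np.ones(n) / n, mode="valid")
    return out

# Example: smooth the adjusted series from the adjustment sketch above.
# smoothed = trailing_average(adjusted, n=10)
```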

The black line with squares on the data points is the temperature data; the red line without markers is the 10-year average.

Can you see a downhill trend over the last 10 years now? Not convinced? Look at the data points since 1998 (the big spike near the end). How far back can you go before all the temperature measures are lower than any since 1998? To make it easier, I've put in a dotted line marking the lowest temperature since 1998. You'll see that it's higher than any temperature before 1995.
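That eyeball test can be scripted too. A sketch, again with a random placeholder standing in for the real 1958-2007 series (with the placeholder, the exact answer will differ from the 1995 I read off the real graph):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1958, 2008)

# Placeholder: an imposed warming trend plus noise; in practice use
# the real HadCRUT3 anomalies for 1958-2007.
anomalies = 0.0121 * (years - years[0]) - 0.3 + rng.normal(0, 0.1, years.size)

floor = anomalies[years >= 1998].min()  # lowest value since 1998
earlier = years[(years < 1998) & (anomalies >= floor)]
if earlier.size:
    print(f"last pre-1998 year at least as warm: {earlier.max()}")
else:
    print("no earlier year was that warm")
```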


Exercise


Repeat the experiment on the modified nineteenth century data with 20-year sequences. You will see that although the trend is more consistently up and closer to the 50-year trend, you can still find at least one patch where the trend appears to be down -- though with a low r².
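Using the window_trends function from the sliding-window sketch above, this is a one-argument change:

```python
# Reuses window_trends, years and adjusted from the earlier sketches.
for start, slope, r2 in window_trends(years, adjusted, window=20):
    print(f"{start}-{start + 19}: {slope:+.4f} deg/year, r^2 = {r2:.3f}")
```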
