Thursday 10 April 2008

Climate Science Predictive Power

I must say up front that I would be ecstatic if the anti-anthropogenic global warming position were shown to be correct, because the probability of action to stop CO2 emissions in time to avert some of the predicted consequences is close to zero. Unfortunately, the universe does not bend to my will, so I have to work with the next best thing: understanding reality as well as possible, and managing my life and that of others to fit that reality.

In my last article, "Who's putting the 'political' in climate science, now?", the most serious allegation by commenters was that, "To put it simply, models have got better at being tweaked to match historical climate but no-one has the faintest idea of how good they are at predicting future climate." In other words, the models have no predictive power.

To investigate this claim, I went back to one of the earliest global climate model papers [Hansen et al. 1988].

If this allegation is true, this paper should be wide of the mark; after all, if refinements to the science have gone nowhere, a paper from 20 years ago should be nowhere close to predicting future climate. On the other hand, if the allegation is false, while we can expect some errors, some predictions made by the paper should be reasonably close to reality -- certainly close enough that we could have used them for broad policy decisions, if not for detailed planning.

Let's start by looking at temperature trends as predicted in the paper. The paper uses three scenarios: A, B and C. Scenario A assumes exponential growth in CO2 emissions, Scenario B a slowdown to linear growth, and Scenario C a levelling off to no further increase. The authors indicate that they considered B the most likely, and focused their analysis on that case (though the others were covered too, to provide a range of outcomes).
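To make the shapes of the three scenarios concrete, here is a minimal sketch. The growth rates below (1.5%/yr for A, 1.5 ppm/yr for B, and a 1988-era starting concentration of 350 ppm) are illustrative assumptions of mine, not the paper's actual trace-gas inputs:

```python
def co2_scenarios(start_ppm=350.0, years=30):
    """Illustrative CO2 trajectories with the shapes of Scenarios A, B, C.

    A: exponential growth; B: linear growth; C: no further increase.
    Rates and starting value are assumptions for illustration only.
    """
    a, b, c = [start_ppm], [start_ppm], [start_ppm]
    for _ in range(years):
        a.append(a[-1] * 1.015)   # A: ~1.5%/yr exponential growth (assumed rate)
        b.append(b[-1] + 1.5)     # B: constant 1.5 ppm/yr linear growth (assumed rate)
        c.append(start_ppm)       # C: concentration held constant
    return a, b, c
```

After a few decades the three curves diverge sharply, which is why the choice of scenario matters so much when judging the 1988 predictions against what actually happened.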

Here is the 1988 paper's temperature trend prediction.


As with current work, temperatures are reported as an "anomaly" versus the average from 1951-1980.

In 1990, the Scenario B predicted temperature anomaly was about 0.4°C; in 2000, the Scenario B prediction was 0.55°C (approximately: I had to eyeball the graph since no data tables are provided). If we look at the 5-year mean in current data, what do we see? In 1990, the anomaly was 0.27°C; in 2000 it was 0.45°C.

Here's the most recent temperature trend from NASA. Look at the 5-year mean, the red line, as representing the trend (it smooths out short-term variation). The green bars are 95% confidence limits, allowing for gaps in measurement.
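For readers unfamiliar with the smoothing, a centred 5-year running mean is straightforward to compute. This is a minimal sketch; NASA's actual smoothing details may differ:

```python
def five_year_mean(anomalies):
    """Centred 5-year running mean of a list of annual anomalies.

    Returns None at the ends, where a full 5-year window is unavailable.
    """
    out = []
    for i in range(len(anomalies)):
        if i < 2 or i > len(anomalies) - 3:
            out.append(None)  # incomplete window at the series edges
        else:
            window = anomalies[i - 2:i + 3]  # the year itself plus 2 either side
            out.append(sum(window) / 5.0)
    return out
```

The point of the smoothing is that a single hot or cold year (an El Niño, a volcanic eruption) tells you little about the trend; the 5-year mean is what should be compared against a model's projected trajectory.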


How far out is the 1988 paper? The 1990 prediction was 48% above the measured value; the 2000 prediction was 22% above. Note also that these differences fall within the measurement uncertainty.
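These percentages can be reproduced directly from the eyeballed values quoted above:

```python
def percent_over(predicted, observed):
    """Percentage by which a prediction exceeds the observed value."""
    return 100.0 * (predicted - observed) / observed

# Values eyeballed from the 1988 paper's graph vs NASA's 5-year mean.
print(round(percent_over(0.40, 0.27)))  # 1990: prediction 48% over
print(round(percent_over(0.55, 0.45)))  # 2000: prediction 22% over
```

Note that because the anomalies involved are small, a modest absolute error (0.1°C or so) translates into a large-looking percentage; the absolute discrepancies are what sit within the measurement uncertainty.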

How big a deal is this? Certainly, it would have been a huge surprise if a model with as many acknowledged omissions (e.g. a very crude ocean model) had been spot on. But remember, the allegation is that the models have no predictive power, and can only be retrofitted to the past. What we have found instead is that while the model over-predicts compared with reality, it does so within the range of uncertainty in the subsequent measurements.

Let's look at the distribution of temperature change as predicted then, and as subsequently observed. First, let's look at a map from the paper, then a similar map generated from NASA's current data.





The temperature scale on the older map runs from -3 to 5, with a similar colour scale (no warming is white; warming runs from yellow through orange to red).

The newer map is from NASA's GISS site, with the parameters in the next picture, covering the same period as the illustrated 1988 model run.


Exercise for the reader: rerun my example at GISS for 2000-2007. It looks a lot closer to the 1988 paper's "2010s" picture, except that the Antarctic hasn't warmed to the predicted extent. I'm not including this, though, because we should really have a whole decade to compare with the paper.

What can we see from these two maps? Clearly, there are differences. The distribution of warming is not the same. In particular, the original study had more of the warming away from the tropics, and had more warming in the Antarctic. However, in general terms, the 1988 paper cannot be said to have no predictive power. The range of warming temperatures is approximately right, the real distribution of warming does show some bias towards the north and there are few areas which actually cooled over the 1990s, as opposed to none in the model.

Are these predictions of any use? Clearly, if you were using them to predict where to invest in agriculture over the long term, you would have made some serious mistakes. If on the other hand you were using the model predictions to decide whether anthropogenic climate change was a real effect, the predictive power is more than sufficient.

For those who say it's all natural effects, just the sun, etc.: please provide me with a 1988 paper that came as close as this one to predicting the future climate. Too hard? How about any "it's all the sun" paper that has done better than fit to the past? (Or whatever else you think is the sole driver of climate; I really do want to be convinced.)

So what's the bottom line?

It's not too surprising that this early model, with less data and computing power than is available today, should have some inaccuracies. However, the allegation that models have never been able to predict anything, but are only capable of being fitted to the past, is clearly false. One commenter on my previous article insisted that it's up to the climate change modellers to convince everyone, not the doubters. I don't think so. This study, plus subsequent work, has been making the case for twenty years. I suggest the denial crew get a job with Robert Mugabe's election procrastination team.

Update


RealClimate has a useful article on comparing model predictions with data, in case anyone thinks I'm the only one doing this. I've also since spotted their own analysis of Hansen's 1988 paper versus reality (I deliberately didn't seek it out beforehand so I could keep my opinion independent). There's also a recent study (April 2008) showing that climate models on the whole are doing pretty well. My main motivation for doing this myself was to check for myself, rather than rely on climate scientists who may, according to the denial mantra, have a vested interest in making the models look good.

As a courtesy to other readers, please provide references (not just a web site or an organisation please: something that can be found) if you have information to add. I do not feel obligated to respond to unsubstantiated claims to the same extent.

Reference


[Hansen et al. 1988] Hansen, J., I. Fung, A. Lacis, D. Rind, S. Lebedeff, R. Ruedy, G. Russell, and P. Stone, 1988: Global climate changes as forecast by Goddard Institute for Space Studies three-dimensional model. J. Geophys. Res., 93, 9341-9364, doi:10.1029/88JD00231.

3 comments:

Anonymous said...

I am afraid you have fallen for Gavin Schmidt's spin.

There have been hundreds of models produced to predict the future climate, and inevitably one of them got it right. Since that model was produced it has been "improved". If we re-ran these scenarios with the improved version, would we get the same results? If not, they are pretty poor improvements.

All the models underestimated the rate at which the Arctic ice is melting. All the models have trouble reproducing the tropical lapse rate measured by radiosondes. None of the models can explain the melting of the glaciers in the Alps. Only "toy models" can reproduce the last glacial maximum. They all have trouble with the ITCZ, producing either two or none.

Of course everyone wants to believe the models are correct. The modellers' careers would be ruined if they were proved wrong. But for nearly 20 years they have been predicting a sensitivity that ranges from 1.5 K to 4.5 K. That is a factor-of-three variation!

If you conduct an investigation into where they are going wrong, instead of where they are going right, I think you will be surprised how the former overwhelms the latter.

This does not mean that the current models are over-estimating the problem and we have nothing to fear. In fact they are underestimating it: for instance, the melting Arctic sea ice and the Australian drought.

But just like the scientific sceptics, the modellers are in denial. They have been indoctrinated into believing that only the academics can be right. We are heading for the same fate as the Easter Islanders, who trusted their ruling elite!

Cheers, Alastair.

Philip Machanick said...

In this case, the temperature range is dictated by the CO2 inputs. As for Gavin Schmidt's spin, I've seen discussion of this point at RealClimate but I deliberately did not go there, and did the analysis myself.

Presuming by "sensitivity" you mean the response to doubled CO2 (as usual), I've seen a paper that demonstrates this kind of sensitivity range over hundreds of millions of years. I don't recall seeing a recent one that does not constrain the most probable range more tightly than that, even though some have given wider ranges (with low probability). The figure I see most often is close to 3 degrees.

You may note that I did point out areas of discrepancy, specifically the geographical distribution. Of course there are still big holes in the models like full treatment of clouds and oceans.

The issue is whether the models have no predictive power as alleged by a comment on the previous article.

I take your point that flaws in the models can just as well result in under-predicting as over-predicting. This is one of my big areas of dispute with the denial camp. If they want to be called "sceptics" they need to consider that option as well. Another big problem I have with the denial camp is the refusal to refute clearly bogus arguments, e.g., that growing wine in England is proof that the medieval warm period was much warmer than it is today. If there is solid evidence for the MWP, use that.

Chris said...

I'm sick of the whole 'models need to do better' argument. It comes down to simple physics: greenhouse gases absorb outgoing long-wave radiation, and according to the Stefan-Boltzmann equation the change in temperature follows from the change in the radiation balance. More radiation out = lower temps, and less radiation out = higher temps. Arrhenius did this calculation about 100 years ago and got an answer for GHGs of 1-2 degrees, an answer the models have been reproducing ever since. Challenging the models is pure obfuscation and time-wasting.
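Philip Machanick said...

As a back-of-envelope check of the physics in the comment above: linearising the Stefan-Boltzmann law gives the no-feedback temperature response to a radiative forcing F as ΔT ≈ F / (4σT³). Using the commonly quoted ~3.7 W/m² forcing for doubled CO2 and an effective emission temperature of ~255 K (both standard assumptions, not figures from the comment):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EFF = 255.0            # Earth's effective emission temperature, K (assumed)
F_2XCO2 = 3.7            # commonly quoted forcing for doubled CO2, W m^-2 (assumed)

# Linearised Stefan-Boltzmann: dF = 4*sigma*T^3 * dT  =>  dT = F / (4*sigma*T^3)
dT = F_2XCO2 / (4 * SIGMA * T_EFF**3)
print(round(dT, 2))  # ~0.98 K before feedbacks
```

That lands at roughly 1 K before feedbacks, of the same order as the 1-2 degrees you cite; the higher sensitivities discussed earlier come from feedbacks (water vapour, ice albedo) on top of this bare response.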