One of the Big Lies in the campaign to confuse the public about climate science is that true science proceeds from exact information applied to an exact formula, producing an exact result. Because results reported by climate models are inexact, the Lie goes, the theory must therefore be flawed. Members of the public without scientific experience can be excused for getting this wrong; experienced scientists who propagate such views should hang their heads in shame. Real science is nothing like this. Exact results only apply in artificially constructed situations (and even then, you need to allow for errors in measurement). The real world is noisy: instruments have errors, multiple sources of information interact in ways that can’t always be disentangled with precision, and precise calculations on a real-world scale may be impractical. Just as with any other branch of science, the theory of anthropogenic greenhouse gas-driven warming *is* based on a precise formula: increases in CO_{2} result in increases in temperature logarithmic in the increase in CO_{2}. As with many other physical theories (including gravitation, thermodynamics and electromagnetism), this theory is testable in the lab under idealised conditions. As with any other theory, real-world application involves dealing with noisy data and interactions with other aspects of the total system.
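The logarithmic relationship can be written down directly. Here is a minimal sketch; the sensitivity value is an illustrative placeholder, not a measured constant:

```python
import math

def warming_from_co2(c_new, c_old, sensitivity_per_doubling=3.0):
    """Temperature rise for a change in CO2 concentration.

    The rise is logarithmic in the concentration ratio: each doubling
    of CO2 adds the same fixed increment of warming. The sensitivity
    value here is an illustrative placeholder, not a measured constant.
    """
    return sensitivity_per_doubling * math.log2(c_new / c_old)

# A doubling of CO2 gives exactly one "sensitivity" of warming:
print(warming_from_co2(560, 280))   # 3.0
# A quadrupling gives two doublings' worth:
print(warming_from_co2(1120, 280))  # 6.0
```

The point of the logarithmic form is that each successive doubling contributes the same amount, which is why climate sensitivity is conventionally quoted per doubling of CO_{2}.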

A few days ago, in online comments to a letter of mine in *The Australian*, someone made the claim that climate science can’t be any good because it is inexact:

> Philip. With all due respect, if anyone, it is you who are confused; at least about what science is, and in that, you are certainly not alone. Science is not based on correlation but on causation, and an understanding that if A causes B, then A and B move in absolute lockstep according to a precise mathematical law; like Newton’s F=ma. It is not F~ma. Where there is the most minute variation, science says that there is a causal factor for that variation, and a further law to absolutely and completely describe that variation; eg via Einstein’s E=mc*c. Note too that this does not change the original law; it still applies, but it introduces and accounts for another factor which can affect a factor of the original law. It is also worth noting that this new law actually predicted variations from Newton’s laws that were so minute they had not yet been measured. With both together then, everything remains exact. That is true science.
>
> Contrast that to the greenhouse “science” laws (based on correlation) to which you refer. They predict that for a doubling of CO2, there will be between a 1.5 and 4.5C T rise. To in any way equate that to true science is just wrong; for comparison, it would be like having Newton’s law saying ma<F<3ma. Einstein’s Law would then have to be something like E=ms*s (s=speed of a snail crawling over sand) to account for that sort of variation. A hydrogen bomb would then not create enough energy to lift your hat, and the sun would be so weak that earth T would be about -273K, even when the variation in gravitational force brought our orbit within a few million k of the sun!
>
> I know this sounds silly, but I use it to emphasise that real science is exact and absolute, because there is no correlation in it; only causation. That is what climate science needs to be before it can be taken seriously outside of political and religious circles; based on causation, not correlation.

First, greenhouse gas theory is *not* based on correlation. It is based on radiative physics. The logarithmic relationship between increasing CO_{2} levels and increased temperature was discovered by Arrhenius and demonstrated in the lab in 1897. The radiative physics needed to calculate the effect accurately was discovered early in the twentieth century. There is therefore as exact a measure of the effect of increasing CO_{2} as any of Newton’s Laws. What makes things more complicated is the fact that we are dealing with a real-world application, where exact measurement is not possible, and there are many other confounding factors to take into account in making exact predictions.

Rather than go into all this again, I will illustrate just how far from reality the commenter’s view of “exact” science is by looking at another area. Newton’s law of gravitation is about as exact a formula as you could want. While general relativity corrects it, if we are doing something as mundane as designing a bridge or navigating a space probe, Newton’s law is so close to accurate that we can assume it is exact. In principle, navigating a space probe is not terribly hard. The most efficient way of doing it is to burn a rocket until the probe achieves escape velocity, while making sure it points in the right direction, then leave it to drift. To make things simple, let’s assume we only want to navigate accurately past any one location, and anything else on the way is a bonus (except a collision, but let’s ignore that to keep things simple).

We have a formula for gravitation (thanks to Newton) that says the force between any two bodies in space is a constant times the two objects’ masses over the square of the distance between them:

*F* = *G* *M*_{1} *M*_{2} / *r*^{2}

where *G* is a constant, *M*_{1} and *M*_{2} are the masses of the two bodies, and *r*^{2} is the square of the distance between their centres of mass.

For our navigation problem therefore, things look very straightforward – until we get to the detail. We don’t just want to know the forces that apply to the space probe at an instant but how its motion is affected over its entire journey. To make things harder, the space probe isn’t the only thing moving. The entire solar system is in motion under the influence of gravitational forces of everything else in the solar system (and other more distant objects, but the inverse squared law makes that an insignificant correction). In theory we could set up a bunch of differential equations to solve exactly but there is a practical problem with doing this for so many different bodies. There are over a million asteroids for a start and even without them, the differential equations would not be practical to solve. So in practice what you need to do is to approximate the parameters at a given time, calculate where everything will be at some later time (soon enough not to lose too much accuracy, but not so soon that the amount of calculation is prohibitive), and keep applying these steps until you arrive at the time when you need to know where your probe will be.
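This “approximate, step forward, repeat” scheme can be sketched for a single body orbiting a much heavier one. The sketch below uses the crudest possible fixed-step (Euler) integrator with made-up units where *G* times the central mass is 1; real ephemeris codes use far more careful methods, but the structure is the same:

```python
# Minimal fixed-step (Euler) integration of one body around a heavy
# central mass: estimate the forces now, step everything forward a
# little, and repeat. Units are chosen so that G*M = 1 (illustrative).

GM = 1.0

def accel(x, y):
    """Newtonian inverse-square acceleration toward a mass at the origin."""
    r2 = x * x + y * y
    r = r2 ** 0.5
    return -GM * x / (r2 * r), -GM * y / (r2 * r)

def step(x, y, vx, vy, dt):
    """Advance position and velocity by one small time step."""
    ax, ay = accel(x, y)
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

# Start on a circular orbit of radius 1 (which needs speed 1 in these units).
x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
for _ in range(1000):
    x, y, vx, vy = step(x, y, vx, vy, dt=0.001)

# After many small steps the body is still near radius 1, but not
# exactly: the finite step size introduces a tiny error at every step.
print((x * x + y * y) ** 0.5)  # close to 1, but not exactly 1
```

Shrinking the step size reduces the error per step but multiplies the number of steps – exactly the accuracy-versus-computation trade-off described above.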

Let’s just look at how difficult this is for one body in space. Assume we have a series of measurements of where this body is, culminating in the positions I’ve labelled here as *A* and *B*, measured at times *t*_{A} and *t*_{B} – with the aim of working out where it will be at a future time, *t*_{C}:

So how do we work out where the body will be at time *t*_{C}? Based on previous measurements, we estimate the speed and acceleration of the body as it passes through position *B*, and apply Newton’s formula to adjust its acceleration, calculating that the body will be at position *C*. Unfortunately, because our previous measurements were not 100% accurate, the actual position the body ends up in at time *t*_{C} is position *D*, a little out from our calculation.

How could this happen? The previous measurements of where the body was hadn’t taken into account gravitational forces fully. At some point, you don’t know where every body is going to move next, and have to make some approximations *before you can start exact calculations*. Why? Because to calculate the effect of applying a force to an object you need to know three things at the time immediately *before* you apply the force:

- its position

- its speed (velocity)

- its acceleration

Applying the force alters the body’s acceleration, and (assuming no other forces are involved) if you know all four quantities precisely (force, acceleration, velocity, position) you can calculate where it will go next precisely (until the parameters change). However, if there is any error in any of the parameters to the calculation (including the force, which for gravitation relies on having the positions and masses of all other objects right), the answer will not be exactly right – despite the use of an exact formula.
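How an input error infects an exact formula can be made concrete with the simplest possible case: a body coasting with no forces at all, whose future position follows the exact formula *x* = *x*_{0} + *vt*. The numbers below are illustrative only:

```python
# Sketch: a small error in a measured input grows into a much larger
# error in the prediction, even though the formula applied is exact.
# A body coasting force-free obeys position = x0 + v * t exactly.

def position(x0, v, t):
    """Exact position of a force-free body at time t."""
    return x0 + v * t

true_v = 1.000      # the body's actual speed
measured_v = 1.001  # our measurement, off by only 0.1%

t = 10_000.0        # predict far into the future
error = abs(position(0.0, measured_v, t) - position(0.0, true_v, t))
print(error)        # the 0.001 speed error has become a position error of ~10
```

The formula was never wrong; the inputs were. With gravitation the situation is strictly worse, because the error also feeds back into the force calculation at every step.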

Worse still, because time *t*_{C} is in the future, between time *t*_{B} and time *t*_{C}, **everything else has moved**, making any calculation based on the gravitational effects of the positions of everything else at time *t*_{B} inaccurate. Point *C* should be marked as a circle representing the uncertainty in the calculation, or a fuzzy blob if you want to represent the fact that the most likely location is at point *C*, with diminishing probability of the position being at a location further out from point *C*.

Even assuming you can arrive at a tight enough approximation to the position, velocity and acceleration of all objects in the solar system to sufficient accuracy at a given time, you need to recalculate all the parameters for successive time intervals, applying the gravitational force each time to get fresh parameters. This is where things get really hairy. We have around a dozen objects big enough to call planets or large moons and over a million asteroids. Even with a large-scale computer to recompute the position in space of each object along with its new velocity and acceleration, we would have to do trillions of calculations just to work out where everything has moved, even if we only do this once. Why? Because for each body, we must calculate the effect of every other body. If there were exactly one million such bodies, that would mean almost a million times a million applications of Newton’s formula. This is clearly impractical, especially if we have to repeat the calculations many times to get an accurate projection of the probe’s trajectory.
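The quadratic blow-up in work is easy to see in code. This sketch simply counts the force evaluations in one naive direct-summation pass, where every body needs a contribution from every other body:

```python
def pairwise_force_evaluations(n_bodies):
    """Count the force evaluations in one naive direct-summation pass:
    every body against every other body, i.e. n*(n-1) evaluations."""
    count = 0
    for i in range(n_bodies):
        for j in range(n_bodies):
            if i != j:
                count += 1  # one application of Newton's formula
    return count

print(pairwise_force_evaluations(100))  # 9900, i.e. 100 * 99
# For a million bodies this would be 999,999,000,000 evaluations --
# almost a million times a million -- for a single time step.
```

And that count is for one time step only; an accurate trajectory needs many thousands of such steps.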

Fortunately, there’s a better way. If a group of bodies is far enough away, treating them as a single body with a position based on their centre of mass is a reasonably accurate approximation to their contribution to the gravity computation [Barnes and Hut 1986]. This picture illustrates the basic concept (in two dimensions to make the picture easier to understand):

Depending on the sensitivity parameters of the calculation, it may be possible when calculating the forces at *A* to proceed as if there were a single body at each of locations *C* and *E*. The body at *D*, on the other hand, may be too close to *C* to allow this approximation.
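The centre-of-mass idea is easy to check numerically. This sketch compares the exact summed force from a small, distant cluster against the force from a single equivalent body at the cluster’s centre of mass; the positions and masses are made up, and *G* is set to 1 for simplicity:

```python
# Compare the exact gravitational pull of a distant cluster with the
# Barnes-Hut-style approximation: one body of the cluster's total mass
# placed at its centre of mass. All values are illustrative; G = 1.

def force_on_origin(bodies):
    """Net force on a unit mass at the origin from a list of
    (mass, x, y) bodies, summed exactly body by body."""
    fx = fy = 0.0
    for m, x, y in bodies:
        r2 = x * x + y * y
        r = r2 ** 0.5
        fx += m * x / (r2 * r)
        fy += m * y / (r2 * r)
    return fx, fy

# A tight cluster of three bodies, far from the origin.
cluster = [(1.0, 100.0, 0.5), (2.0, 100.5, -0.3), (1.5, 99.8, 0.1)]

# Collapse it to a single body at the centre of mass.
total_mass = sum(m for m, _, _ in cluster)
cx = sum(m * x for m, x, _ in cluster) / total_mass
cy = sum(m * y for m, _, y in cluster) / total_mass

exact = force_on_origin(cluster)
approx = force_on_origin([(total_mass, cx, cy)])

# For a cluster whose spread is tiny compared to its distance,
# the two answers agree to a small fraction of a percent.
rel_err = abs(exact[0] - approx[0]) / abs(exact[0])
print(rel_err)
```

Applied recursively over a hierarchy of such groupings, this is what reduces the pairwise cost from O(N^{2}) to O(N log N) in the Barnes–Hut scheme [Barnes and Hut 1986].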

The upshot of all this is that although we can in theory calculate gravitational forces extremely precisely, in the real world, any practical calculation has to contain errors. We can limit those errors by taking more measurements, taking more precise measurements, and reducing the approximations in the calculations at the cost of slower computation. Our space probe can be placed to within some reasonably accurate window of its intended destination, but it had better have a little fuel on board for course corrections.

Back now to climate models.

The situation is really not so different. The relationship between CO_{2} increases and temperature increases can be measured accurately in the lab, but effects on the real world require approximations, because measurement is inexact, we have fewer data points than we’d like, and accurate computation would take too long. But as long as we have a handle on the scale of the errors, we can work these into the computation, and produce an answer with a central value and a calculation of how much the actual answer could vary from that central value. This is *not* some strange new principle invented by climate scientists. Any science of the real world is necessarily inexact, for similar reasons to those that apply to the gravitation computation. As with any other area of science that requires complex accounting for the real world, an answer may be inexact, but not so inexact as to be useless for policy makers [Knutti 2008].
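The “central value plus spread” idea can be sketched with a toy Monte Carlo: feed a range of plausible parameter values through the same exact formula and report the spread of outputs. The bounds below are placeholders chosen to echo the 1.5–4.5 range quoted earlier, not climate data:

```python
import math
import random

def warming(sensitivity, co2_ratio):
    """Exact formula: warming is logarithmic in the CO2 ratio."""
    return sensitivity * math.log2(co2_ratio)

random.seed(42)  # fixed seed so the sketch is repeatable

# The formula itself is exact, but the sensitivity parameter is only
# known to lie within a range (placeholder bounds, for illustration).
samples = [warming(random.uniform(1.5, 4.5), 2.0) for _ in range(10_000)]

central = sum(samples) / len(samples)
low, high = min(samples), max(samples)
print(f"central: {central:.2f}, range: {low:.2f} to {high:.2f}")
```

An inexact answer of this kind is not a failure of the underlying formula; it is an honest statement of what the inputs allow.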

So what of the “correlation is not causation” mantra, which so often accompanies objections to climate science? Simulating the whole earth’s climate is *not* done using correlations. To claim this is ignorant or dishonest. The simulations are based on laws of physics and measured data (with the sort of simplification, handling of noisy data and approximation needed to do any real-world science) to predict a trend. Comparing that predicted trend against actual measurements certainly can be done using correlation, but that is not the only test of the theory – nor should it be. In any case, if you have a mechanism and then look for a correlation, that correlation can hardly be said to lack causation. To claim blindly that correlation is not causation (as opposed to the more reasonable position that you should be cautious about claiming causation if correlation is your sole evidence) is more or less to say that whenever you find a correlation, you must dismiss the possibility that there is a causal connection, which is clearly absurd.

### References

[Barnes and Hut 1986] J.E. Barnes and P. Hut. A hierarchical O(N log N) force-calculation algorithm. *Nature*, 324(4):446–449, December 1986.

[Knutti 2008] Reto Knutti. Should we believe model predictions of future climate change? *Phil. Trans. R. Soc. A*, 366(1885):4647–4664, 28 December 2008.