District 9 has been picking up pretty good reviews on the whole, but a whole layer of the movie has been missed by reviewers who don’t understand the South African setting. I lived in South Africa until 2002 and have been a long-time follower of the local SF culture, including membership of the national SF club, SFSA, since the 1970s. That this sort of creative talent exists in South Africa is not the surprise; the surprise is that it has resulted in a major movie with a big worldwide launch without being turned into an American-centric story where the kid saves the day. (There is, admittedly, a minor kid-saves-the-day element, but it’s nowhere near as unbelievable as in the average movie where a human kid does this.)
The movie starts with a mystery: we are given no hint as to why the aliens arrived. All we know is that they have lost the ability to control most of their vastly superior technology (not quite all: a few of their weapons work, but only in contact with alien DNA, and their spacecraft is able to hold a fixed position for decades). There appears to be some linkage between their genetic make-up and their ability to control their technology, but much of this is left a mystery.
Through a combination of hand-held camera shots meant to represent an official record of events, news-style footage, surveillance-camera footage and the occasional conventional scene, the movie makes it hard not to become involved and feel a real sense of a story unfolding, even though the earlier events are in the past.
Imagine humans in a similar situation. A million humans in a colony ship arrive at a distant planet, and our technology breaks down. Aliens who are obviously less advanced than us “rescue” us from our disabled ship and treat us like dirt. How would we cope? How many of us would have the advanced scientific knowledge to fix our broken spacecraft? Think of Star Trek episodes where the crew beams down to the nearest planet to find some broken part or missing chemical: totally unlikely. What we see in District 9 is a much more plausible scenario for a spaceship breaking down far from home. Possibly this is why (though the last Star Trek movie was passably good, give or take the odd plot hole you could drive the Enterprise through) my favourite SF movies tend to be spoofs like Mars Attacks! and Galaxy Quest.
The real genius of this movie is in its use of role reversals. The aliens have arrived in a disabled spacecraft, under squalid conditions (Australians, think refugees in leaky boats). They are treated with the utmost condescension, and things that are obviously not for their own good are done to “improve” conditions for them. Suddenly you are put in the position of seeing all this from their point of view. You need a good understanding of South Africa to get all the references, but for a foreign audience the ones that go over your head only add to the movie’s feeling of “alienness”. To add to the subtle feeling that you are looking at things backwards, one of the aliens is called “Christopher Johnson” (typical of the way colonial overlords renamed the natives when they couldn’t pronounce the native-language name) – a name far less “alien” to a non-South African audience than Van der Merwe.
Let’s examine some of those South African references. First, the very title harks back to the dark years of apartheid. District 6 in Cape Town (the other end of the country from the movie’s Johannesburg setting) was a ghetto for Coloured (mixed-race) South Africans which, despite poverty, had a strong sense of community. The government decided to clear out the residents because having Coloured people too near the city centre was inconvenient. Clearing out District 6 became a long-running sore in apartheid history: the forced removals were bitterly opposed, and the cleared land was left largely undeveloped until the fall of apartheid, when the rights of former residents to return were recognised. Forced removals in general were a key feature of the apartheid system. One study of their effects was called the Surplus People Project; a movie depicting them was titled Last Grave at Dimbaza.
Second, the attitude towards the aliens is consistent with current attitudes in South Africa to “illegal aliens” of the human kind. There have been riots over the presence of such foreigners from poorer parts of Africa, and the attitudes expressed in the movie are absolutely typical – a sad betrayal of the apartheid years, when the rest of Africa rallied in support of the anti-apartheid cause and accepted South African refugees with open arms. The aliens are accused of all kinds of things, like causing crime, when the only evidence of criminality we see comes from Nigerian gangs and from MNU, the company desperately trying to make money out of the aliens by any means, no matter how unscrupulous. MNU is a bit of a composite, not reflective of a real company; Americans may relate it to private contractors in Iraq. South Africa does indeed have a large armaments company, the government-owned Denel (a relic of the apartheid era), but it does not do the sort of private security work depicted in the movie – other South African companies do that sort of thing – nor is it as big in the world market as the fictitious MNU.
Third, and this is where the subtleties really accumulate, the attitude of protagonist Wikus van der Merwe to the aliens is exactly the way apartheid officials treated Black South Africans in the darkest apartheid years. Telling one of the aliens not to use so many clicks is a direct reference to South African languages, several of which include click sounds (the San languages use clicks extensively, and Xhosa and Zulu have a few click sounds). That Van der Merwe, who obviously gets on well with his Black colleagues, can get away with this treatment of the aliens with no sense of irony or objection from the Black members of the team shows how little the South Africans represented in the story learnt from their apartheid experience.
More sensitive viewers may dislike the violence (especially in the second half, which turns into frenetic action scenes), but it is integral to the story, not gratuitous. The dialogue also includes some of the most serious Zulu cursing I’ve heard in decades, though that will go over the heads of most foreign audiences.
No doubt the computer game heritage of the movie gives it some of its mass-market appeal, but this is a real classic, as much a game-changer (in the other sense) as the Matrix movies, and a whole lot more intelligent. That there are almost no American (or even UK English) accents in the dialogue, that some African-language dialogue is left unsubtitled, and that the protagonist has a name almost unpronounceable to non-South African English speakers are brave moves, but they add to the movie’s appeal as something different from the usual Hollywood dross.
A little help for the foreigners: “Wikus” is pronounced something like Vee-cuss. “Van der Merwe” is pronounced something like Fun-deh-meh-vuh.
Sequel? Very likely. With the box office this one is generating, the unexplained details and the potential for follow-up developments left open at the end, a sequel is almost 100% on. I hope it’s as good as the first. At last, after a long drought, an SF movie that’s better than a parody. It’s been a long wait.
Sunday, 16 August 2009
Science in the Real World
One of the Big Lies in the campaign to confuse the public about climate science is that true science proceeds from exact information applied to an exact formula, producing an exact result. Because results reported by climate models are inexact, the Lie goes, the theory must be flawed. Members of the public without scientific experience can be excused for getting this wrong; experienced scientists who propagate such views should hang their heads in shame. Real science is nothing like this. Exact results apply only in artificially constructed situations (and even then, you need to allow for errors in measurement). The real world is noisy: instruments have errors, multiple sources of information interact in ways that can’t always be disentangled with precision, and precise calculation on a real-world scale may be impractical.

Just as with any other branch of science, the theory of anthropogenic greenhouse gas-driven warming is based on a precise formula: the increase in temperature is logarithmic in the increase in CO2. As with many other physical theories (including gravitation, thermodynamics and electromagnetism), this theory is testable in the lab under idealised conditions. As with any other theory, real-world application means dealing with noisy data and interactions with other parts of the total system.
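To make the shape of that relationship concrete, here is a minimal sketch in Python. The 3°C-per-doubling sensitivity is an assumption chosen for illustration (a value inside the commonly quoted range), not a measured constant:

```python
import math

def warming_from_co2(c_new, c_old=280.0, per_doubling=3.0):
    """Equilibrium warming (degrees C) for a change in CO2 from c_old
    to c_new (ppm), assuming warming is logarithmic in concentration.
    per_doubling=3.0 is an illustrative assumption, not a measurement."""
    return per_doubling * math.log(c_new / c_old, 2)

print(warming_from_co2(560.0))  # a doubling: 3.0 degrees
print(warming_from_co2(390.0))  # a ~40% increase: about 1.4 degrees
```

The point of the logarithm is that each successive doubling adds the same increment of warming – exactly the kind of lab-testable regularity described above.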
A few days ago, in online comments to a letter of mine in The Australian, someone made the claim that climate science can’t be any good because it is inexact:
Philip. With all due respect, if anyone, it is you who are confused; at least about what science is, and in that, you are certainly not alone. Science is not based on correlation but on causation, and an understanding that if A causes B, then A and B move in absolute lockstep according to a precise mathematical law; like Newton’s F=ma. It is not F~ma. Where there is the most minute variation, science says that there is a causal factor for that variation, and a further law to absolutely and completely describe that variation; eg via Einstein’s E=mc*c. Note too that this does not change the original law; it still applies, but it introduces and accounts for another factor which can affect a factor of the original law. It is also worth noting that this new law actually predicted variations from Newton’s laws that were so minute they had not yet been measured. With both together then, everything remains exact. That is true science.
Contrast that to the greenhouse “science” laws (based on correlation) to which you refer. They predict that for a doubling of CO2, there will be between a 1.5 and 4.5C T rise. To in any way equate that to true science is just wrong; for comparison, it would be like having Newton’s law saying ma<F<3ma. Einstein’s Law would then have to be something like E=ms*s (s=speed of a snail crawling over sand) to account for that sort of variation. A hydrogen bomb would then not create enough energy to lift your hat, and the sun would be so weak that earth T would be about -273K, even when the variation in gravitational force brought our orbit within a few million k of the sun!
I know this sounds silly, but I use it to emphasise that real science is exact and absolute, because there is no correlation in it; only causation. That is what climate science needs to be before it can be taken seriously outside of political and religious circles; based on causation, not correlation.
First, greenhouse gas theory is not based on correlation. It is based on radiative physics. The logarithmic relationship between increasing CO2 levels and increased temperature was derived by Arrhenius in 1896, building on laboratory demonstrations of CO2’s heat-trapping properties dating back to Tyndall in 1859. The radiative physics needed to calculate the effect accurately was worked out early in the twentieth century. The effect of increasing CO2 is therefore as exactly measured as any of Newton’s laws. What makes things more complicated is that we are dealing with a real-world application, where exact measurement is not possible and there are many other confounding factors to take into account in making exact predictions.
Rather than go over all this again, I will illustrate just how far from reality the commenter’s view of “exact” science is, using another area. Newton’s law of gravitation is about as exact a formula as you could want. General relativity corrects it, but for something as mundane as designing a bridge or navigating a space probe, Newton’s law is so close to accurate that we can treat it as exact. In principle, navigating a space probe is not terribly hard. The most efficient approach is to burn a rocket until the probe achieves escape velocity, while making sure it points in the right direction, then leave it to drift. To keep things simple, let’s assume we only want to navigate accurately past one location, that anything else on the way is a bonus, and that we can ignore the risk of collisions.
We have a formula for gravitation (thanks to Newton) that says the force between any two bodies in space is a constant times the product of their masses divided by the square of the distance between them:

F = G × M1 × M2 / r²

where G is the gravitational constant, M1 and M2 are the masses of the two bodies, and r is the distance between their centres of mass.
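The formula translates directly into a few lines of Python; the Earth–Moon numbers below are rounded values used purely as a worked example:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Magnitude (in newtons) of the gravitational attraction between
    masses m1 and m2 (kg) whose centres of mass are r metres apart."""
    return G * m1 * m2 / r ** 2

# Rounded Earth and Moon figures, as a worked example:
print(gravitational_force(5.97e24, 7.35e22, 3.84e8))  # about 2e20 N
```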
For our navigation problem, therefore, things look very straightforward – until we get to the detail. We don’t just want to know the forces on the space probe at an instant, but how its motion is affected over its entire journey. To make things harder, the space probe isn’t the only thing moving. The entire solar system is in motion under the gravitational influence of everything else in the solar system (and of more distant objects, but the inverse-square law makes those an insignificant correction). In theory we could set up a system of differential equations and solve it exactly, but doing so for so many bodies is impractical: there are over a million known asteroids for a start, and even without them the equations would not be practical to solve. So in practice what you do is approximate the parameters at a given time, calculate where everything will be at some later time (soon enough not to lose too much accuracy, but not so soon that the amount of calculation is prohibitive), and keep applying these steps until you arrive at the time when you need to know where your probe will be.
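In code, the stepping scheme is short. Here is a minimal sketch using the simplest possible integrator (forward Euler): `accel_fn` is assumed to be a function returning each body’s current acceleration vector, such as the pairwise gravity sum sketched further down, and real trajectory work would use a higher-order integrator.

```python
from dataclasses import dataclass

@dataclass
class Body:
    mass: float
    pos: list  # [x, y, z] in metres
    vel: list  # [vx, vy, vz] in metres/second

def advance(bodies, accel_fn, dt, n_steps):
    """Step the whole system forward through n_steps intervals of dt
    seconds using forward Euler. Freezing each acceleration across a
    step is exactly the approximation described above: a smaller dt
    loses less accuracy but costs proportionally more computation."""
    for _ in range(n_steps):
        # Compute all accelerations from the same snapshot in time,
        # before any body is moved.
        accs = accel_fn(bodies)
        for body, acc in zip(bodies, accs):
            body.pos = [p + v * dt for p, v in zip(body.pos, body.vel)]
            body.vel = [v + a * dt for v, a in zip(body.vel, acc)]
```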
Let’s look at how difficult this is for just one body in space. Assume we have a series of measurements of where this body is, culminating in two positions, A and B, measured at times tA and tB, and that our aim is to work out where it will be at a future time, tC.
So how do we work out where the body will be at time tC? Based on the previous measurements, we estimate the speed and acceleration of the body as it passes through position B, apply Newton’s formula to adjust its acceleration, and calculate that the body will be at position C. Unfortunately, because our previous measurements were not 100% accurate, the position the body actually ends up in at time tC is position D, a little out from our calculation.
How could this happen? The previous measurements of where the body was hadn’t fully taken gravitational forces into account. At some point, you don’t know where every body is going to move next, and you have to make some approximations before you can start exact calculations. Why? Because to calculate the effect of applying a force to an object, you need to know three things at the instant before you apply the force:
- its position
- its speed (velocity)
- its acceleration
Applying the force alters the body’s acceleration, and (assuming no other forces are involved) if you know all four quantities precisely – force, acceleration, velocity, position – you can calculate precisely where it will go next (until the parameters change). However, if there is any error in any of the parameters (including the force, which for gravitation relies on having the positions and masses of all the other objects right), the answer will not be exactly right – despite the use of an exact formula.
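A toy calculation makes the point. Below, the only flaw is a measured velocity off by one millimetre per second; every subsequent prediction is computed exactly, yet the position error grows without bound (the numbers are arbitrary, chosen only for illustration):

```python
v_true, v_measured = 1000.0, 1000.001  # m/s; measurement off by 1 mm/s

for t in (1e2, 1e4, 1e6):  # seconds of drift
    error = abs(v_measured - v_true) * t  # exact formula, wrong input
    print(f"after {t:.0e} s the prediction is off by {error:.3g} m")
# 0.1 m, then 10 m, then 1000 m: the formula was exact, the input wasn't.
```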
Worse still, because time tC is in the future, everything else has moved between tB and tC, making any calculation based on the positions of everything else at time tB inaccurate. Point C should really be drawn as a circle representing the uncertainty in the calculation – or as a fuzzy blob, if you want to represent that the most likely location is at point C, with diminishing probability of positions further out.
Even assuming you can pin down the position, velocity and acceleration of every object in the solar system to sufficient accuracy at a given time, you need to recalculate all the parameters for successive time intervals, applying the gravitational force each time to get fresh parameters. This is where things get really hairy. We have around a dozen objects big enough to call planets or large moons, and over a million asteroids. Even with a large-scale computer to recompute the position, velocity and acceleration of each object, we would have to do about a trillion calculations just to work out where everything has moved, even if we only did this once. Why? Because for each body, we must calculate the effect of every other body. With a million such bodies, that means almost a million times a million applications of Newton’s formula. This is clearly impractical, especially if we have to repeat the calculation many times to get an accurate projection of the probe’s trajectory.
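The quadratic blow-up is easy to see in code: a direct implementation sums, for each body, the pull of every other body, so the work grows as the square of the number of bodies (a bare-bones two-dimensional sketch, not production code):

```python
import math

G = 6.674e-11

def accelerations(masses, positions):
    """Naive O(N^2) gravity: the acceleration of each body is the sum
    of G * m_j / r^2 towards every other body j. With a million bodies
    the inner loop body runs about a trillion times."""
    n = len(masses)
    result = []
    for i in range(n):
        ax = ay = 0.0
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r = math.hypot(dx, dy)
            a = G * masses[j] / r ** 2  # magnitude of the pull from j
            ax += a * dx / r            # resolve along x
            ay += a * dy / r            # resolve along y
        result.append((ax, ay))
    return result
```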
Fortunately, there’s a better way. If a group of bodies is far enough away, treating it as a single body located at its centre of mass is a reasonably accurate approximation of its contribution to the gravity computation [Barnes and Hut 1986]. Picture the basic concept in two dimensions: a body at location A whose forces we want to calculate, distant clusters of bodies centred at locations C and E, and a single body at D close to cluster C.
Depending on the sensitivity parameters of the calculation, it may be possible when calculating the forces at A to proceed as if there were a single body at each of locations C and E. The body at D on the other hand may be too close to C to allow this approximation.
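The core of the trick fits in a few lines: collapse a distant cluster into one point mass at its centre of mass, with an “opening” test to decide when a cluster is too close for the shortcut. This is only a sketch of the grouping step; the full Barnes-Hut algorithm organises the clusters in a tree, bringing the whole force calculation down to O(N log N).

```python
def centre_of_mass(cluster):
    """Collapse a list of (mass, x, y) bodies into a single equivalent
    (mass, x, y) point mass at the cluster's centre of mass."""
    total = sum(m for m, _, _ in cluster)
    cx = sum(m * x for m, x, _ in cluster) / total
    cy = sum(m * y for m, _, y in cluster) / total
    return total, cx, cy

def can_group(cluster, px, py, theta=0.5):
    """Barnes-Hut style opening test: the cluster may be treated as a
    single body only if its extent is small compared with its distance
    from the point (px, py). theta is the knob that trades accuracy
    against speed (smaller = more accurate, slower)."""
    xs = [x for _, x, _ in cluster]
    ys = [y for _, _, y in cluster]
    size = max(max(xs) - min(xs), max(ys) - min(ys))
    _, cx, cy = centre_of_mass(cluster)
    distance = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
    return size / distance < theta
```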
The upshot of all this is that although we can in theory calculate gravitational forces extremely precisely, in the real world, any practical calculation has to contain errors. We can limit those errors by taking more measurements, taking more precise measurements, and reducing the approximations in the calculations at the cost of slower computation. Our space probe can be placed to within some reasonably accurate window of its intended destination, but it had better have a little fuel on board for course corrections.
Back now to climate models.
The situation is really not so different. The relationship between CO2 increases and temperature increases can be measured accurately in the lab, but effects on the real world require approximations because measurement is inexact, we have fewer data points than we’d like and accurate computation would take too long. But as long as we have a handle on the scale of the errors, we can work these into the computation, and produce an answer with a central value and a calculation of how much the actual answer could vary from that central value. This is not some strange new principle invented by climate scientists. Any science of the real world is necessarily inexact, for similar reasons to those that apply to the gravitation computation. As with any other area of science that requires complex accounting for the real world, an answer may be inexact, but not so inexact as to be useless for policy makers [Knutti 2008].
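One standard way to get that central value and spread is to run the calculation many times with the uncertain inputs drawn from their estimated ranges, then summarise the results. The toy below perturbs a single made-up parameter; real climate ensembles vary far more than this, but the principle is the same:

```python
import math
import random
import statistics

def toy_model(sensitivity, co2_ratio=2.0):
    """Placeholder model: warming that is logarithmic in CO2."""
    return sensitivity * math.log(co2_ratio, 2)

random.seed(0)
# Pretend the sensitivity is known only as roughly 3 +/- 0.5:
runs = [toy_model(random.gauss(3.0, 0.5)) for _ in range(10_000)]
print(f"central value {statistics.mean(runs):.2f}, "
      f"spread {statistics.stdev(runs):.2f}")
# Inexact -- but the size of the inexactness is itself quantified.
```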
So what of the “correlation is not causation” mantra that so often accompanies objections to climate science? Simulating the whole earth’s climate is not done using correlations; to claim it is, is either ignorant or dishonest. The simulations apply laws of physics to measured data (with the simplification, handling of noisy data and approximation needed for any real-world science) to predict a trend. Comparing that predicted trend against actual measurements can certainly be done using correlation, but that is not the only test of the theory – nor should it be. In any case, if you start from a mechanism and then look for a correlation, that correlation can hardly be said to lack causation. To claim blindly that correlation is not causation (as opposed to the more reasonable position that you should be cautious about claiming causation when correlation is your sole evidence) is more or less to say that whenever you find a correlation, you must dismiss the possibility of a causal connection – which is clearly absurd.
References
[Barnes and Hut 1986] J.E. Barnes and P. Hut. A hierarchical O(N log N) force-calculation algorithm. Nature, 324:446–449, December 1986
[Knutti 2008] Reto Knutti. Should we believe model predictions of future climate change? Phil. Trans. R. Soc. A, 366(1885):4647–4664, 28 December 2008
Thursday, 13 August 2009
Climate of Fraud
I saw Bob Carter, a leading opponent of climate science, on the SBS TV (a semi-commercial, government-owned channel in Australia) news tonight (13 August 2009). Shortly afterwards, the shot switched to a slide carrying the claim that the IPCC’s models “predict monotonic warming, and they are wrong”. Here is the slide, captured from the online edition of the news (about 1:18 from the start):
This is a blatant lie.
I reproduce here a graph from the IPCC’s 2007 report [Randall et al. 2007]:
A “monotonic increase” means that temperatures can only go up over time, or at best stay level; they can never drop. Now examine the graph. The yellow area represents the results of 58 simulations; the black line is the actual temperature record, and the red line is the average of the simulations. What you can observe is that neither the individual simulations nor their average – the red line – increases monotonically: both drop repeatedly over the period of the simulation.
Indeed it would be bizarre if any reasonable approximation to the real climate showed a monotonic temperature increase, unless that increase was so extreme as to overwhelm all natural variation, and no serious climate scientist is making any such claim. It is widely known that the two major short-term influences on temperature are the El Niño Southern Oscillation (ENSO) and the solar cycle. This is why climatologists define the climate as the long-term average. It is a shift in the long-term average that is a concern, not whether temperatures increase every year.
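The point is easy to demonstrate. Add modest year-to-year noise (standing in for ENSO and the solar cycle) to a perfectly steady warming trend, and the resulting series is nowhere near monotonic, even though the underlying trend never wavers (the numbers below are illustrative only):

```python
import random

random.seed(1)
years = range(50)
trend = [0.02 * y for y in years]                       # steady 0.02 C/year
observed = [t + random.gauss(0.0, 0.1) for t in trend]  # trend + noise

rises_every_year = all(a <= b for a, b in zip(observed, observed[1:]))
print(rises_every_year)  # False: plenty of down years, yet after 50
                         # years the series still ends about 1 C higher
```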
Why is the Carter crew claiming that the theory demands this? Because they want to knock the theory down and have no real evidence against it, so they have no option but to lie.
That he is trundling this stuff out along with cronies from the right-wing US Heartland Institute at a time of political activity around climate change is no surprise. That they cannot do better is. Professional science obfuscators – Heartland included – confused the public for years about the link between tobacco and cancer without resorting to such obvious falsehoods.
[Randall et al. 2007] Randall, D.A., R.A. Wood, S. Bony, R. Colman, T. Fichefet, J. Fyfe, V. Kattsov, A. Pitman, J. Shukla, J. Srinivasan, R.J. Stouffer, A. Sumi and K.E. Taylor, 2007: Climate Models and Their Evaluation. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge