Pages

Wednesday, 21 October 2009

Climate of Fraud Part 2

Here’s a pair of letters that appeared in The Australian. First, one from me:
MARC Hendrickx (Letters, 17-18/10) alleges that Pen Hadow “had to be rescued in the Arctic in 2003 due to the extreme cold and excessive ice”. Hadow in fact had always planned to be airlifted off once he arrived at the Pole, and the only issue was that he started his solo walk from Canada to the North Pole late in the season, when a pick up was risky because the ice was breaking up.

If it’s impossible to support an argument without resorting to fabrication or ad hominem attack, you don’t have a case. Every global warming denial theory falls apart when examined against the evidence, so the denial cult has given up arguing the facts.

Here’s one they won’t like. Despite the fact that we are in the deepest solar minimum—the period of least solar activity in the solar cycle of the sun—in almost a century, temperatures remain close to record highs. Had the “it’s all the sun” crew been right, we should have seen temperatures close to 100-year lows over the past few years.

As for the actual state of the Arctic, Hadow is not the only authority who has Arctic summer sea ice disappearing in the next 20 to 30 years. Several papers and reports have backed this conclusion. I’ve been working in science for nearly 30 years, and I have yet to encounter a situation where wishful thinking overturns a theory, especially when that wishful thinking runs counter to well-established physics (as is the theory of greenhouse gas warming).

If there are genuine climate-change sceptics who have alternative theories that explain the facts better than the mainstream theory, let’s hear them by all means. That’s how science works. But if the accepted theory is right, we are running out of time fast. The alternative theories have all failed any reasonable scientific test, while the mainstream has held up pretty well against the most concerted political attack on any scientific theory since the Inquisition stopped burning scientists at the stake. It’s time to move on and start addressing the real problems.

Philip Machanick

Then, a day later, a response:
USING dubious observations to bolster a preferred hypothesis is not how science works. Philip Machanick (Letters, 20/10) is on thin ice when he suggests that the human-caused global warming scenario is the only plausible explanation for our recent climate history. There is a plethora of contradictory data.

Reconstructions of the solar intensity record for recent centuries, referred to by Machanick, are speculative. Prior to 1978 there were no direct observations from outside the atmosphere and estimates of changing intensity have been made from proxies, such as sun spot numbers. As reported by the Intergovernmental Panel on Climate Change, even the successive satellites have calibration uncertainty. It is therefore a matter of dispute as to whether or not we are in the deepest solar minimum in almost a century, as Machanick claims.

If melting of Arctic sea ice is to be taken as the canary in the coal mine for human-caused global warming, then there are relevant reputable data extending over hundreds of thousands of years from which to draw comfort. Oxygen isotope ratios from Greenland ice cores and pollen analysis from sea-bed sediment cores off southern Greenland independently show a consistent pattern.

Over the past half-million years the Arctic has oscillated through glacial cycles, each of about 100,000-year duration, and we are currently in a relatively warm interglacial phase. During each of the previous interglacials the Arctic was warmer than at present. The pollen and isotope records also suggest that the Arctic was warmer during the current interglacial between 4000 and 8000 years ago, when the carbon dioxide concentration was much less than now, and well before industrialisation.

William Kininmonth


Kininmonth is accusing me of gross misconceptions about how science works (note the bits I’ve highlighted). Heavy. I should return my PhD, and stop working as a researcher. Or, maybe I should do what a researcher does, and re-examine the evidence – starting with the pronouncements of Kininmonth himself. My original letter did not come out of nowhere: I was attempting to demonstrate how the data the denialists rely on directly contradicts their claims. Well, here’s another Kininmonthian contribution from December 2008:

THE attempt by Professor Marvin Geller to discredit scientists who do not follow the climate alarmist agenda only highlights the inconsistencies of his case ("Professor sheds light for climate sceptics”, 4/12).

The evidence of solar influences on climate is well documented, especially the relationships established over many centuries of observations, that link sunspot numbers and cosmic ray activity to global temperature.

The lack of a creditable explanation for the relationships should be reason for more research, not dismissal of the mechanisms.

It is wrong to claim that the past few decades of warming cannot be explained without including human influences.

The error of his statement is obvious from his own explanation for the temperature peak of 1998 as a massive El Nino event. The El Nino is a temporary reduction of upwelling in the surface layer of the tropical Pacific Ocean that decreases the entrainment of cold subsurface water; the warmer tropical waters provide additional energy to warm the planet during an El Nino event.

Research by Michael McPhaden and Dongxiao Zhang, published in the journal Nature in 2002, identified a major and sustained reduction in Pacific Ocean upwelling and warmer ocean surface temperatures that became established in 1976.

This was at the beginning of the most recent global warming episode that the alarmists mistakenly attribute to human-caused carbon dioxide.

Interactions between the ocean and atmosphere, the two fluids that regulate Earth’s climate, are now widely recognised as contributing to climate variability on a range of timescales.

William Kininmonth


Note again my highlighting.

What was that again, about “Using dubious observations to bolster a preferred hypothesis”? Is that not remarkably similar to changing your degree of support for a data set the moment it no longer supports your “preferred hypothesis”?

I could also dispute other points he makes, but this to me is sufficient. If you want to accuse others of unscientific practice, make sure your own approach is beyond reproach.

Update


The WCC3 conference has downloads of speakers’ slides, and a voice recording. Latif’s talk (about a third of the way into the audio) is especially interesting since it has been so widely misreported. In particular, he addresses the need to get better resolution and accuracy for decadal predictions; this has somehow been interpreted as his saying that it will get cooler over the next two decades. If you want to get the best out of his talk, download Latif’s slides and follow them while listening to his part of the audio. I’ve posted a longer article elsewhere on how Latif has been misinterpreted.

Sunday, 11 October 2009

Lazy Sunday Cuisine

A lazy wet Sunday: what’s for lunch?

I have some nice organic tomatoes from a farmers’ market, a little leftover Reggiano Parmesan and some supermarket spinach and ricotta tortellini (the fresh kind, not dried). On a less lazy day, I’d crank up the pasta machine – but not today. For dessert I have about 200g of good fair trade dark cooking chocolate, and some milk and eggs in the fridge.

To get started, how about a nice sauce for the pasta? Let’s see what else I have and put it together (enough for two serves):

  • a splash of olive oil
  • a shredded clove of garlic
  • bottled herbs (basil, parsley) – it’s raining, I’m not going to the garden this once
  • a sprinkling of macadamia halves
  • 4 smallish tomatoes
  • a hint of red pepper seeds
  • finely shredded Parmesan


The trick with a tomato-based sauce is to squeeze out the juice and seeds. This way you don’t have to cook it down so much, and you end up with a fresher taste. Start by heating the oil, then toss in the garlic followed by the tomato, herbs and spices. When the tomato has almost fallen apart, add in the macadamias.

At this point, you may want to add in a splash of good red wine: not too much, you’ll want to drink the rest.

Once the sauce is making good progress, cook the pasta. The brand I’m using (a local one, San Remo) is not half bad if you ignore the package directions to cook to death for 6 minutes. My rule for any filled pasta or gnocchi is (based on instructions from an Italian cook): “cook them until they give up” – when they float to the surface and turn upside down. This time, it’s about 2 minutes. The result: a good chewy texture, and a filling that still has plenty of taste. Australians (judging from the package instructions) share the South African and British taste for flavourless mush.

Toss the pasta in the sauce, add Parmesan and you’re ready to serve.

On to dessert. This needs a little planning ahead of time. I like to make meringues at the same time as I make ice cream because you can use the egg yolks for the ice cream. There are a few simple basics for making good meringues:

  1. never use chilled eggs: let them warm to room temperature
  2. use a copper mixing bowl: this seems to result in getting a stiffer texture much faster (I doubt very much a little copper is toxic: think of what they replaced lead with in water pipes)
  3. cook at low heat (90°C, or 194°F for the neoliths): you are drying the meringues more than cooking them

Here’s another interesting trick I discovered. I recently found brown caster sugar in a supermarket and bought it wondering why you’d want something like that. The answer is, you get extra crispy meringues that whip up easier. The colour is a pale gold rather than snowy white, but they taste great.

OK, so the ice cream. The basic idea is to make an egg custard, let it cool, chill it, add whipped cream then churn in an ice cream maker.

Here’s an example:

  • 400 ml milk
  • quarter cup of sugar (double this if not adding something sweetened)
  • 3 egg yolks (reserve the whites)
  • 200g good dark chocolate (fair trade please: if you are going to die of death by chocolate feel good about it)
  • 200 ml good whipping cream

Shred the chocolate. I do this by slicing it finely: it shreds as you slice.

Whip up the egg yolks, gradually adding the sugar, until you have a light consistency.

Scald the milk in a microwave (2 minutes in mine; it needs to be just short of boiling). Add some of the milk into the egg mixture, mix well and whisk into the rest of the milk. Heat again in the microwave for 2-3 minutes, watching for the point where it starts to froth up. Stop and whisk vigorously as soon as this happens, and repeat until you have a thick custard. If you take it too far, you end up with sweet scrambled egg, so take care.

At this point, patience is called for, otherwise you might as well decant into mugs and serve as the world’s best hot chocolate. If you can resist, put the custard aside to cool off, then refrigerate it. Lunch is developing into an all-day affair.


While you have nothing else to do, work on the meringues:

  • 3 egg whites
  • 125 g sugar

Whip the egg whites to a froth, then gradually add the sugar. Whip until the mixture can hold its shape, then drop about half a tablespoonful at a time onto a nonstick baking sheet in a baking tray (I use the baking sheet as anti-crunch packing when I store the meringues, and reuse it a few times for future batches). Put in a cool oven (90°C) on the middle shelf for 3 hours, and leave in the oven after turning off the heat to continue drying.

Back to the ice cream. Once it’s thoroughly chilled, whip the cream fairly stiff, fold it in then put the mixture in an ice cream maker (mine is hand-churned). Work the machine the usual way. Although ice cream is one of the few things I eat from frozen, it’s best eaten fresh.

Sunday, 23 August 2009

District 9

District 9 has been picking up pretty good reviews on the whole, but another whole layer has been missed by many reviewers who don’t understand the South African setting. I lived in South Africa until 2002, and have been a long-time follower of the local SF culture. I’ve been a member of the national SF club, SFSA, since the 1970s, and the fact that there is this sort of creative talent in South Africa is not so much a surprise as the fact that it’s resulted in a major movie with a big worldwide launch – without turning it into an American-centric story, where the kid saves the day (actually, there is a minor element of the kid saves the day, but it’s not as unbelievable as in the average movie where a human kid does this).



The movie starts with a mystery: it has no hint as to why the aliens arrived. All we know is that they have lost their ability to control most of their vastly superior technology (not quite all: a few of their weapons work but only in contact with alien DNA, and their space craft is able to remain in a fixed position for decades). There appears to be some linkage between their genetic make up and their ability to control their technology, but much of this is left a mystery.

The movie mixes hand-cam shots meant to represent an official record of events, news-style footage, surveillance-camera footage and the occasional conventionally shot scene. It’s hard not to become involved and have a real sense of an actual story unfolding, even though the earlier events are in the past.

Imagine humans in a similar situation. A million humans in a colony ship arrive at a distant planet, and our technology breaks down. Aliens who are obviously less advanced than us “rescue” us from our disabled ship and treat us like dirt. How would we cope? How many of us would have the advanced scientific knowledge to fix our broken space craft? Think of Star Trek episodes, and beaming down to the nearest planet to replace some broken part or find a missing chemical. Totally unlikely. What we saw in District 9 is a much more likely scenario for a space ship breaking down far from home. Possibly this is why (though the last Star Trek movie was passably good, give or take the odd plot hole you could drive the Enterprise through) my favourite SF movies tend to be spoofs like Mars Attacks! and Galaxy Quest.

The real genius of this movie is in its use of role reversals. The aliens have arrived in a disabled space craft under squalid conditions (Australians, think refugees in leaky boats). They are treated with utmost condescension, and things that are obviously not for their own good are done to “improve” conditions for them. Suddenly you are put in a position of seeing this all from their point of view. You need a good understanding of South Africa to get all the references but for a foreign audience it adds to the “alienness” feeling of the movie. To add to the subtle feeling that you are looking at things backwards, one of the aliens is called “Christopher Johnson” (typical of the way colonial overlords renamed the natives when they couldn’t pronounce the native-language name), a name less “alien” to a non-South African audience than Van der Merwe.

Let’s examine some of those South African references. First, the very title harks back to the dark years of apartheid. District 6 in Cape Town (in a different part of South Africa) was a ghetto for Coloured (mixed-race) South Africans, which, despite poverty, had a strong sense of community. The government decided to clear out the residents because having Coloured people too near the city centre was inconvenient. Clearing out District 6 was a big running sore in apartheid history; forced removals were bitterly opposed, and the cleared land was left largely undeveloped until the fall of apartheid, when rights of former residents to return were recognized. Forced removals, in general, were a key feature of the apartheid system. One study of the effects was called the Surplus People Project; a movie depicting the effects was titled Last Grave at Dimbaza.

Second, the attitude towards the aliens is consistent with current attitudes in South Africa to “illegal aliens” of the human kind. There have been riots over the presence of such foreigners from poorer parts of Africa, and the attitudes expressed in the movie are absolutely typical, and a sad reversal from the apartheid past, when the rest of Africa rallied in support of the anti-apartheid cause and accepted South African refugees with open arms. The aliens are accused of all kinds of things like causing crime, when the only evidence we see of criminality is from Nigerian gangs and the MNU company that is desperately trying to make money out of the aliens by any means, no matter how unscrupulous. MNU is a bit of a composite, not reflective of a real company. Americans may relate it to private contractors in Iraq. South Africa does indeed have a large armaments company, the government-owned Denel (a relic of the apartheid era), but it does not do the sort of private security work depicted in the movie – there are other South African companies that do that sort of thing – nor is it as big in the world market as the fictitious MNU.

Third, and this is where the subtleties really accumulate, the attitude of protagonist Wikus van der Merwe to the aliens is exactly the way apartheid officials treated Black South Africans in the darkest apartheid years. Telling one of the aliens not to use so many clicks is a direct reference to South African languages, several of which include click sounds (the San languages are almost entirely composed of clicks, and Xhosa and Zulu have a few click sounds). That Van der Merwe, who obviously gets on well with his Black colleagues, can get away with this treatment of the aliens with no sense of irony or objection from the Black members of the team shows how little the South Africans represented in the story learnt from their apartheid experience.

More sensitive viewers may dislike the violence (especially in the second half, which turns into frenetic action scenes) but it is integral to the story and not gratuitous. The dialogue also includes some of the more serious Zulu cursing I’ve heard in decades but that would go over the heads of most foreign audiences.

No doubt the computer game heritage of the movie gives it some of its mass market appeal, but this is a real classic, as much a game-changer (in the other sense) as the Matrix movies, and a whole lot more intelligent. That there are almost no American (or even UK English) accents in the dialogue, that some African-language dialogue goes unsubtitled, and that the protagonist has a name almost unpronounceable to non-South African English speakers are brave moves, but they add to the movie’s appeal as something different from the usual Hollywood dross.

A little help for the foreigners: “Wikus” is pronounced something like Vee-cuss. “Van der Merwe” is pronounced something like Fun-deh-meh-vuh.

Sequel? Very likely. With the box office this one is generating, the unexplained details and the potential for follow-up developments left open at the end, a sequel is almost 100% on. I hope it’s as good as the first. At last, after a long drought, an SF movie that’s better than a parody. It’s been a long wait.

Sunday, 16 August 2009

Science in the Real World

One of the Big Lies in the campaign to confuse the public about climate science is that true science proceeds from exact information applied to an exact formula, producing an exact result. Because results reported by climate models are inexact, the Lie goes, the theory must therefore be flawed. Members of the public without scientific experience can be excused for getting this wrong; experienced scientists who propagate such views should hang their heads in shame.

Real science is nothing like this. Exact results only apply in artificially constructed situations (and even then, you need to allow for errors in measurement). The real world is noisy: instruments have errors, multiple sources of information interact in ways that can’t always be disentangled with precision, and precise calculations on a real-world scale may be impractical.

Just as with any other branch of science, the theory of anthropogenic greenhouse gas-driven warming is based on a precise formula: increases in CO2 result in increases in temperature logarithmic in the increase in CO2. As with many other physical theories (including gravitation, thermodynamics and electromagnetism), this theory is testable in the lab under idealised conditions. As with any other theory, real-world application involves dealing with noisy data and interactions with other aspects of the total system.

A few days ago, in online comments to a letter of mine in The Australian, someone made the claim that climate science can’t be any good because it is inexact:

Philip. With all due respect, if anyone, it is you who are confused; at least about what science is, and in that, you are certainly not alone. Science is not based on correlation but on causation, and an understanding that if A causes B, then A and B move in absolute lockstep according to a precise mathematical law; like Newton’s F=ma. It is not F~ma. Where there is the most minute variation, science says that there is a causal factor for that variation, and a further law to absolutely and completely describe that variation; eg via Einstein’s E=mc*c. Note too that this does not change the original law; it still applies, but it introduces and accounts for another factor which can affect a factor of the original law. It is also worth noting that this new law actually predicted variations from Newton’s laws that were so minute they had not yet been measured. With both together then, everything remains exact. That is true science.

Contrast that to the greenhouse “science” laws (based on correlation) to which you refer. They predict that for a doubling of CO2, there will be between a 1.5 and 4.5C T rise. To in any way equate that to true science is just wrong; for comparison, it would be like having Newton’s law saying ma<F<3ma. Einstein’s Law would then have to be something like E=ms*s (s=speed of a snail crawling over sand) to account for that sort of variation. A hydrogen bomb would then not create enough energy to lift your hat, and the sun would be so weak that earth T would be about -273K, even when the variation in gravitational force brought our orbit within a few million k of the sun!

I know this sounds silly, but I use it to emphasise that real science is exact and absolute, because there is no correlation in it; only causation. That is what climate science needs to be before it can be taken seriously outside of political and religious circles; based on causation, not correlation.


First, greenhouse gas theory is not based on correlation. It is based on radiative physics. The logarithmic relationship between increasing CO2 levels and increased temperature was discovered by Arrhenius and demonstrated in the lab in 1897. The radiative physics needed to calculate the effect accurately was discovered early in the twentieth century. There is therefore as exact a measure of the effect of increasing CO2 as any of Newton’s Laws. What makes things more complicated is the fact that we are dealing with a real-world application, where exact measurement is not possible, and there are many other confounding factors to take into account in making exact predictions.
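To make “logarithmic in the increase in CO2” concrete, here is the relationship as a small sketch in code. The function name and the 3°C-per-doubling sensitivity are my own illustrative choices, not measured constants; the point is only the shape of the curve.

```python
import math

def warming_from_co2(c_new, c_old=280.0, sensitivity_per_doubling=3.0):
    """Equilibrium warming (in °C) for a rise in CO2 concentration (ppm).

    The warming is logarithmic in the concentration ratio; the
    3 °C-per-doubling sensitivity used here is purely illustrative.
    """
    return sensitivity_per_doubling * math.log(c_new / c_old, 2)

print(warming_from_co2(560.0))            # doubling from 280 ppm → 3.0, by construction
print(round(warming_from_co2(390.0), 2))  # roughly the 2009 level → 1.43
```

Note that raising CO2 from 280 to 390 ppm produces about half the warming of a full doubling, exactly because each extra unit of CO2 contributes a little less than the last.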

Rather than go into all this again, I will illustrate in another area just how far from reality the commenter’s view of “exact” science is. Newton’s law of gravitation is about as exact a formula as you could want. While general relativity corrects it, if we are doing something as mundane as designing a bridge or navigating a space probe, Newton’s law is so close to accurate that we can assume it is exact. In principle, navigating a space probe is not terribly hard. The most efficient way of doing it is to burn a rocket until the probe achieves escape velocity, while making sure it points in the right direction, then leave it to drift. To make things simple, let’s assume we only want to navigate accurately past any one location, and anything else on the way is a bonus (except a collision, but let’s ignore that to keep things simple).

We have a formula for gravitation (thanks to Newton) that says the force between any two bodies in space is a constant times the two objects’ masses divided by the square of the distance between them:

F = G·M1·M2 / r²

where G is the gravitational constant, M1 and M2 are the masses of the two bodies, and r is the distance between their centres of mass.

For our navigation problem therefore, things look very straightforward – until we get to the detail. We don’t just want to know the forces that apply to the space probe at an instant, but how its motion is affected over its entire journey. To make things harder, the space probe isn’t the only thing moving. The entire solar system is in motion under the influence of the gravitational forces of everything else in the solar system (and other more distant objects, but the inverse square law makes that an insignificant correction). In theory we could set up a bunch of differential equations to solve exactly, but there is a practical problem with doing this for so many different bodies. There are over a million asteroids for a start, and even without them, the differential equations would not be practical to solve. So in practice what you need to do is to approximate the parameters at a given time, calculate where everything will be at some later time (soon enough not to lose too much accuracy, but not so soon that the amount of calculation is prohibitive), and keep applying these steps until you arrive at the time when you need to know where your probe will be.
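This stepping process can be sketched in a few lines for the simplest possible case: one body orbiting a fixed Sun, advanced with semi-implicit Euler steps. The step size and starting values are made up for illustration; real ephemeris work uses far more careful integrators, which is exactly the point of the surrounding discussion.

```python
import math

G = 6.674e-11       # gravitational constant (m^3 kg^-1 s^-2)
M_SUN = 1.989e30    # solar mass (kg)

def step(pos, vel, dt):
    """One semi-implicit Euler step for a body around a fixed central mass.

    Deliberately crude: each step is only an approximation, and the
    errors accumulate over the journey.
    """
    x, y = pos
    vx, vy = vel
    r = math.hypot(x, y)
    k = -G * M_SUN / r**3     # acceleration vector = k * position vector
    vx += k * x * dt
    vy += k * y * dt
    x += vx * dt              # position update uses the new velocity
    y += vy * dt
    return (x, y), (vx, vy)

# Earth-like start: 1 AU out, near-circular speed of about 29.8 km/s.
pos, vel = (1.496e11, 0.0), (0.0, 2.98e4)
for _ in range(1000):
    pos, vel = step(pos, vel, dt=3600.0)   # hour-long steps, about 42 days in total
print(math.hypot(*pos))  # stays close to 1.496e11 m, but never exactly on the ideal orbit
```

Shrinking the step size reduces the error of each step but multiplies the number of steps, which is precisely the accuracy-versus-computation trade-off described above.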

Let’s just look at how difficult this is for one body in space. Assume we have a series of measurements of where this body is, culminating in the positions I’ve labelled here as A and B, measured at times tA and tB – with the aim of working out where it will be at a future time, tC:

So how do we work out where the body will be at time tC? Based on previous measurements, we estimate the speed and acceleration of the body as it passes through position B, and apply Newton’s formula to adjust its acceleration, resulting in calculating that the body will be at position C. Unfortunately because our previous measurements were not 100% accurate, the actual position the body ends up in at time tC is position D, a little out from our calculation.

How could this happen? The previous measurements of where the body was hadn’t fully taken gravitational forces into account. At some point, you don’t know where every body is going to move next, and have to make some approximations before you can start exact calculations. Why? Because to calculate the effect of applying a force to an object you need to know three things at the time immediately before you apply the force:

  1. its position
  2. its speed (velocity)
  3. its acceleration

Applying the force alters the body’s acceleration and (assuming no other forces are involved), if you know all four quantities precisely (force, acceleration, velocity, position) you can calculate where it will go next precisely (until the parameters change). However if there is any error in any of the parameters to the calculation (including the force, which for gravitation relies on having the positions and masses of all other objects right), the answer will not be exactly right – despite the use of an exact formula.
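A toy demonstration of that last point, using nothing but the exact constant-acceleration formula (all numbers are arbitrary): tiny errors in the input parameters turn into much larger errors in the answer, even though the formula itself is exact, and errors in velocity and acceleration grow with time rather than staying fixed.

```python
def position(x0, v0, a, t):
    """Exact constant-acceleration kinematics: x = x0 + v0*t + a*t**2 / 2."""
    return x0 + v0 * t + 0.5 * a * t * t

t = 600.0  # ten minutes of coasting
baseline = position(0.0, 1000.0, 0.0, t)

# The formula is exact, yet input errors propagate into the answer:
pos_err = position(0.1, 1000.0, 0.0, t) - baseline    # 0.1 m position error   → about 0.1 m out
vel_err = position(0.0, 1000.1, 0.0, t) - baseline    # 0.1 m/s velocity error → about 60 m out
acc_err = position(0.0, 1000.0, 0.001, t) - baseline  # 1 mm/s^2 accel error   → about 180 m out
print(pos_err, vel_err, acc_err)
```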

Worse still, because time tC is in the future, between time tB and time tC, everything else has moved, making any calculation based on the gravitational effects of the positions of everything else at time tB inaccurate. Point C should be marked as a circle representing the uncertainty in the calculation, or a fuzzy blob if you want to represent the fact that the most likely location is at point C with diminishing probability of the position being at a location further out from point C.

Even assuming you can arrive at a tight enough approximation to the position, velocity and acceleration of all objects in the solar system at a given time, you need to recalculate all the parameters for successive time intervals, applying the gravitational force each time to get fresh parameters. This is where things get really hairy. We have around a dozen objects big enough to call planets or large moons, and over a million asteroids. Even with a large-scale computer to recompute the position in space of each object along with its new velocity and acceleration, we would have to do trillions of calculations just to work out where everything has moved, even if we only do this once. Why? Because for each body, we must calculate the effect of every other body. If there were exactly a million such bodies, that would mean almost a million times a million applications of Newton’s formula. This is clearly impractical, especially if we have to repeat the calculations many times to get an accurate projection of the probe’s trajectory.
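The arithmetic behind “a million times a million” is simple enough to spell out: with naive pairwise summation, each of the N bodies needs a force contribution from each of the other N−1.

```python
def pairwise_force_evaluations(n):
    """Directed force evaluations per timestep for naive N-body gravity:
    each body needs a contribution from every other body."""
    return n * (n - 1)

print(pairwise_force_evaluations(1_000_000))  # → 999999000000: almost a trillion per step
```

And that count is per timestep; an accurate trajectory needs many thousands of steps.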

Fortunately, there’s a better way. If a group of bodies is far enough away, treating them as a single body with a position based on their centre of mass is a reasonably accurate approximation to their contribution to the gravity computation [Barnes and Hut 1986]. This picture illustrates the basic concept (in two dimensions to make the picture easier to understand):

Depending on the sensitivity parameters of the calculation, it may be possible when calculating the forces at A to proceed as if there were a single body at each of locations C and E. The body at D on the other hand may be too close to C to allow this approximation.
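The decision of when a group of bodies may be collapsed to its centre of mass is usually expressed as an opening-angle test. Here is a minimal sketch of that criterion; the function name is mine, and the threshold θ = 0.5 is a typical accuracy knob rather than a fixed constant.

```python
def use_cluster_as_point(cluster_size, distance, theta=0.5):
    """Barnes-Hut style opening test: collapse a group of bodies to a single
    point mass at its centre of mass when the group looks small from where
    the force is being evaluated (size/distance below theta)."""
    return cluster_size / distance < theta

# A distant cluster can be treated as one body; a nearby one must be opened up.
print(use_cluster_as_point(1.0, 10.0))  # → True
print(use_cluster_as_point(1.0, 1.5))   # → False
```

Lowering θ opens more clusters and gives more accurate forces at the cost of more computation: the same accuracy-versus-speed trade-off again.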

The upshot of all this is that although we can in theory calculate gravitational forces extremely precisely, in the real world, any practical calculation has to contain errors. We can limit those errors by taking more measurements, taking more precise measurements, and reducing the approximations in the calculations at the cost of slower computation. Our space probe can be placed to within some reasonably accurate window of its intended destination, but it had better have a little fuel on board for course corrections.

Back now to climate models.

The situation is really not so different. The relationship between CO2 increases and temperature increases can be measured accurately in the lab, but effects on the real world require approximations because measurement is inexact, we have fewer data points than we’d like and accurate computation would take too long. But as long as we have a handle on the scale of the errors, we can work these into the computation, and produce an answer with a central value and a calculation of how much the actual answer could vary from that central value. This is not some strange new principle invented by climate scientists. Any science of the real world is necessarily inexact, for similar reasons to those that apply to the gravitation computation. As with any other area of science that requires complex accounting for the real world, an answer may be inexact, but not so inexact as to be useless for policy makers [Knutti 2008].
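One common way to get that central value plus a measure of spread is to push the measurement uncertainty through the calculation by sampling. This is a toy sketch with a made-up model and made-up uncertainty, purely to show the mechanics:

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

def model(x):
    """A stand-in for an exact calculation applied to an inexact measurement."""
    return 3.0 * x * x

# Suppose we measure x = 2.0 with a standard uncertainty of 0.1.
samples = [model(random.gauss(2.0, 0.1)) for _ in range(100_000)]
centre = statistics.mean(samples)
spread = statistics.stdev(samples)
print(centre, spread)  # a central value near 12 with a spread of roughly 1.2
```

The exactness of the model is untouched; the spread in the answer comes entirely from the spread in the input, which is the situation climate modellers face on a vastly larger scale.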

So what of the “correlation is not causation” mantra, which so often accompanies objections to climate science? Simulating the whole earth’s climate is not done using correlations. To claim this is ignorant or dishonest. The simulations are based on laws of physics and measured data (with the sort of simplification, handling of noisy data and approximation needed to do any real-world science) to predict a trend. Comparing that predicted trend against actual measures certainly can be done using correlation, but that is not the only test of the theory – nor should it be. In any case, if you have a mechanism and then look for a correlation, that correlation can hardly be said to lack causation. To claim blindly that correlation is not causation (as opposed to the more reasonable position that you should be cautious to claim causation if correlation is your sole evidence) is more or less to say that whenever you find a correlation, you must dismiss the possibility that there is a causal connection, which is clearly absurd.


References


[Barnes and Hut 1986] J.E. Barnes and P. Hut. A hierarchical O(N log N) force-calculation algorithm. Nature, 324(4):446–449, December 1986
[Knutti 2008] Reto Knutti. Should we believe model predictions of future climate change? Phil. Trans. R. Soc. A, 366(1885):4647–4664, 28 December 2008 [PDF]