Tuning to the climate signal

This is a mirror of a PLOS blogpost.


This is part 3 of a series of introductory posts about the principles of climate modelling. Others in the series: 1 | 2

My sincere apologies for the delays in posting and moderating. Moving house took much more time and energy than I expected. Normal service resumes.

I’d also like to mark the very recent passing of George Box, the eminent statistician to whom I owe the name of my blog, a maxim that forms a core part of my scientific values. The ripples of his work and philosophy travelled very far. My condolences and very best wishes to his family.

The question I asked at the end of the last post was:

“Can we ever have a perfect reality simulator?”

I showed a model simulation with pixels (called “grid boxes” or “grid cells”) a few kilometres across: in other words, big. Pixel size, also known as resolution, is limited by available computing power. If we had infinite computing power how well could we do? Imagine we could build a climate model representing the entire “earth system” – atmosphere, oceans, ice sheets and glaciers, vegetation and so on – with pixels a metre across, or a centimetre. Pixels the size of an atom. If we could do all those calculations, crunch all those numbers, could we have a perfect simulator of reality?

I’m so certain of the answer to this question, I named my blog after it.

A major difficulty with trying to simulate the earth system is that we can’t take it to pieces to see how it works. Climate modellers are usually physicists by training, and our instinct when trying to understand a thing is to isolate sections of it, or to simplify and abstract it. But we have limited success if we try to look at isolated parts of the planet, because everything interacts with everything else, and difficulties with simplifications, because important things happen at every scale in time and space. We need to know a bit about everything. This is one of my favourite things about the job, and one of the most difficult.

For a perfect simulation of reality, we would need perfect understanding of every physical, chemical and biological process – every interaction and feedback, every cause and effect. We are indeed improving climate models as time goes on. In the 1960s, the first weather and climate models simulated atmospheric circulation, but other important parts of the earth system such as the oceans, clouds, and carbon cycle were either included in very simple ways (for example, staying fixed rather than changing through time) or left out completely. Through the decades we have developed the models, “adding processes”, aiming to make them better simulators of reality.

But there will always be processes we think are important but don’t understand well, and processes that happen on scales smaller than the pixel size, or faster than the model “timestep” (how often calculations are done, like the frame rate of a film). We include these, wherever possible, in simplified form. This is known as parameterisation.
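To make the pixels-and-timestep picture concrete, here is a toy sketch in Python (entirely invented for illustration, nothing like real model code): a single row of grid boxes whose temperatures are updated once per timestep by letting heat flow between neighbouring boxes.

    import numpy as np

    # Toy "model": a single row of grid boxes, each holding one temperature.
    temperature = np.linspace(30.0, -10.0, 10)  # equator to pole, in deg C

    dt = 1800.0  # timestep in seconds: how often the calculations are done
    k = 1e-5     # heat exchange rate between neighbouring boxes (invented)

    for step in range(48):  # 48 half-hour steps = one simulated day
        flow = k * np.diff(temperature)  # heat flow across each box boundary
        temperature[:-1] += flow * dt    # each box warms or cools a little...
        temperature[1:] -= flow * dt     # ...at the expense of its neighbour

    print(temperature)  # the profile has smoothed slightly towards uniform

Anything happening inside a single box, or between two ticks of that loop, is invisible to the model; that is the gap parameterisation has to fill.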

Parameterisation is a key part of climate modelling uncertainty, and the reason for much of the disagreement between predictions. It is the lesser of two evils when it comes to simulating important processes: the other being to ignore them. Parameterisations are designed using observations, theoretical knowledge, and studies using very high resolution models.

For example, clouds are much smaller than the pixels of most climate models. Here is the land map from HadCM3, a climate model with lower resolution than the one in the last post.

Land-sea map for UK Met Office Unified Model HadCM3

If each model pixel could only show “cloud” or “not cloud”, then a simulation of cloud cover would be very unrealistic: a low resolution, blocky map in which each block of cloud is tens or even hundreds of kilometres across. We would rather each model pixel could be covered by any percentage of cloud, not just 0% or 100%. The simplest way to do this is to relate percentage cloud cover to percentage relative humidity: at 100% relative humidity, the pixel is 100% covered in cloud; as relative humidity decreases, so does cloud cover.

Parameterisations are not Laws of Nature. In a sense they are Laws of Models, designed by us wherever we do not know, or cannot use, laws of nature. Instead of “physical constants” that we measure in the real world, like the speed of light, they have “parameters” that we control. In the cloud example, there is a control dial for the lowest relative humidity at which cloud can form. This critical threshold doesn’t exist in real life, because the world is not made of giant boxes. Some parameters are equivalent to things that exist, but for the most part they are “unphysical constants”.
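Here is a minimal sketch of such a scheme in Python. The linear ramp and the parameter name rh_crit are illustrative assumptions, not the actual cloud scheme of HadCM3 or any other particular model:

    import numpy as np

    def cloud_fraction(rh, rh_crit=0.7):
        """Diagnose fractional cloud cover in a grid box from relative humidity.

        rh      : relative humidity of the box, from 0 to 1
        rh_crit : the control dial -- the lowest relative humidity at which
                  cloud can form. An "unphysical constant": it exists in the
                  model, not in the real world.
        """
        ramp = (rh - rh_crit) / (1.0 - rh_crit)  # 0 at rh_crit, 1 at saturation
        return np.clip(ramp, 0.0, 1.0)

    print(cloud_fraction(0.85))  # a box at 85% humidity is half covered in cloud

Change rh_crit and the whole simulated cloud field, and everything that depends on it, changes with it. That sensitivity is exactly what makes such dials worth tuning carefully.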

The developers of a model play god, or at least play a car radio, by twiddling these control dials until they pick up the climate signal: in other words, they test different values of the parameters to find the best possible simulation of the real world. For climate models, the test is usually to reproduce the changes of the last hundred and fifty years or so, but sometimes to reproduce older climates such as the last ice age. For models of Greenland and Antarctica, we only have detailed observations from the last twenty years.
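In code, the dial-twiddling might look something like this caricature (the “observations” and the error measure are invented for illustration): run the model once for each candidate parameter value and keep the value whose simulation best matches the observed record.

    import numpy as np

    def run_model(rh_crit):
        """Stand-in for a full climate simulation: returns a fake 150-year
        series of global cloud cover for one setting of the dial."""
        humidity = np.linspace(0.60, 0.95, 150)
        return np.clip((humidity - rh_crit) / (1.0 - rh_crit), 0.0, 1.0)

    # Invented "observations", as if the real world had rh_crit = 0.72.
    observed = run_model(0.72) + np.random.default_rng(0).normal(0, 0.02, 150)

    # Twiddle the dial: test candidate values against the observed record.
    candidates = np.arange(0.60, 0.90, 0.01)
    errors = [np.mean((run_model(c) - observed) ** 2) for c in candidates]
    print(f"best rh_crit: {candidates[np.argmin(errors)]:.2f}")  # near 0.72

Real tuning is far messier, of course: many dials at once, many observational targets, and expensive simulations that cannot simply be rerun thirty times.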

As our understanding improves and our computing power increases, we replace the parameterisations with physical processes. But we will never have perfect understanding of everything, nor infinite computing power to calculate it all. Parameterisation is a necessary evil. We can never have a perfect reality simulator, and all models are… imperfect.

In case you do lie awake worrying that the entire universe is a simulation: it’s fine, we can probably check that.


Comments

  1. genemachine

    “For climate models, the test is usually to reproduce the changes of the last hundred and fifty years or so, but sometimes to reproduce older climates such as the last ice age.”

    I accept that models can be tuned to give a passable similarity to the simple one-dimensional metric of past global temperatures (though I understand that many models fail if they are initialised to a date outwith their tuning period). That they fit their test data is not surprising. My friend recounts a tale of tank detection software that fitted the test data perfectly but failed in the real world. It turned out that, within the test data, time of day was the salient piece of information for deciding whether there was a tank. For the modellers’ sake I hope that CO2 doesn’t turn out to be their equivalent of “time of day”.

    New data – or holding back part of the data – is a better test, but we don’t have decades to find out if the models work, and I’m sure that few modellers could resist the temptation to stop tuning their simulation at, say, 1960 and use the later decades to test it. Fortunately there are many processes that are not easily tuned (or parameterised) and that should emerge naturally from a good model:

    -Does the model reproduce the precipitation patterns of the planet – deserts in the right place and wet seasons with broadly similar distributions, timings, and intensity? Is the cloud cover in the right place?

    -Does the model reproduce ocean currents such as the Gulf Stream (as shown in the video in your last post) in terms of speed, mass and temperature?

    -Does the model reproduce the great ocean oscillations such as the PDO and ENSO?

    If a model fails the above tests, and thus fails to reproduce the heat moving around the oceans and atmosphere, can it still be considered accurate enough to be useful for any practical purpose?

  2. Evil Denier

    ‘Tuning’ sounds a lot like ‘curve fitting’ to me. Predictive value = 0.
    Done it. Got the T-shirt.

    • Tamsin Edwards

      You’re right, it makes the assumption that the most successful parameter values for the recent past are also most successful for the future. Unfortunately it’s difficult to tune the model using observations of the future ;)

      The two main approaches to this problem are:

      1. Detune the model: use many different values of the parameters to see what effect they have on the predictions, giving an envelope of uncertainty (a rough sketch follows at the end of this reply).
      2. Tune the model using several different past climates: not just the recent past, but also palaeoclimates (pre-instrumental record), with the aim of finding parameter values that are valid under several different conditions.

      Both of these methods have advantages and disadvantages, which I will discuss in more detail in future posts (it’s my research area)…
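      Here is the promised rough sketch of approach 1 (the toy model and the parameter range are invented for illustration): run the model for many plausible values of a parameter and report the spread of the predictions, rather than a single tuned answer.

          import numpy as np

          def run_model(rh_crit, years=50):
              """Toy stand-in for one model run's future cloud-cover prediction."""
              humidity = np.linspace(0.70, 0.95, years)
              return np.clip((humidity - rh_crit) / (1.0 - rh_crit), 0.0, 1.0)

          # "Detuning": an ensemble of runs across plausible parameter values...
          ensemble = np.array([run_model(c) for c in np.arange(0.60, 0.85, 0.01)])

          # ...gives an envelope of uncertainty instead of a single prediction.
          lower, upper = ensemble.min(axis=0), ensemble.max(axis=0)
          print(f"final-year cloud cover: {lower[-1]:.2f} to {upper[-1]:.2f}")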

      • Rob Burton

        This reply to ED and other similar posts you have made imply you are essentially ‘curve fitting’ to the past (or multiple past times) by adjusting multiple parameters, then using the parameters selected here to predict the future. Do you agree with that?

  3. Evil Denier

    They’re (carefully!) called scenarios (© IPCC), not “predictions”.
    Twiddle one knob, what’s the cross-effect? Does anyone know?
    How well does anyone know paleolithic temperatures?
    On these one would have us spend trillions and re-construct society!
    You see why I’m a skeptik.
    The T-shirt is fading in each wash.

    • Tamsin Edwards

      Thank you John. A lovely insight. I already imagined he was a warm person from his photograph, which makes me smile when I include it in my presentations.

      I love the idea of “Monday Night Beer and Statistics”, and I wish I could have heard him sing “There’s No Theorem Like Bayes’ Theorem”! (I talk about Bayes’ Theorem in my latest post, which I will put online today.)

      I’ll repost these details:

      If you want to honor the memory of George, contributions could be made to the UW Foundation-George Box Endowment Fund (for the support of graduate students), US Bank Lock Box 78807, Milwaukee, WI 53278; or to Agrace HospiceCare, 5395 E. Cheryl Parkway, Madison, WI 53711.

  4. Angus Ferraro

    The development team for the MPI model wrote a paper (open access) in which they describe the tuning process. The discussion section is particularly interesting.

    “The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen.”

    It appears their tuning process was based on the model producing output consistent with the real world, but not necessarily reproducing the evolution during the 20th Century.

    It’s a really interesting (and honest) paper in general – highly recommended!