Virtually Reality

This is part 2 of a series of introductory posts about the principles of climate modelling. Others in the series: 1.

The second question I want to discuss is this:

How can we do scientific experiments on our planet?

In other words, how do we even do climate science? Here is the great, charismatic physicist Richard Feynman, describing the scientific method in one minute:

If you can’t watch this charming video, here’s my transcript:

“Now I’m going to discuss how we would look for a new law. In general, we look for a new law by the following process:

First, we guess it.

Then we — no, don’t laugh, that’s the real truth — then we compute the consequences of the guess to see what, if this is right, if this law that we guessed is right, we see what it would imply.

And then we compare the computation result to nature, or we say compare to experiment, or experience, compare it directly with observations to see if it works.

If it disagrees with experiment, it’s wrong. In that simple statement is the key to science. It doesn’t make a difference how beautiful your guess is, it doesn’t make a difference how smart you are, who made the guess, or what his name is — if it disagrees with experiment, it’s wrong. That’s all there is to it.”

What is the “experiment” in climate science? We don’t have a mini-Earth in a laboratory to play with. We are changing things on the Earth, by farming, building, and putting industrial emissions into the atmosphere, but it’s not done in a systematic and rigorous way. It’s not a controlled experiment. So we might justifiably wonder how we even do climate science.

Climate science is not the only science that can’t do controlled experiments of the whole system being studied. Astrophysics is another: we do not explode stars on a lab bench. Feynman said that we can compare with experience and observations. We would prefer to experience and observe things we can control, because it is much easier to draw conclusions from the results. Instead we can only watch as nature acts.

What is the “guess” in climate science? These are the climate models. A model is just a representation of a thing (I wrote more about this here). A climate model is a computer program that represents the whole planet, or part of it.* It’s not very different to a computer game like Civilisation or SimCity, in which you have a world to play with: you can tear up forests and build cities. In a climate model we can do much the same: replace forests with cities, alter the greenhouse gas concentrations, let off volcanoes, change the energy reaching us from the sun, move the continents. The model produces a simulation of how the world responds to those changes: how they affect temperature, rainfall, ocean circulation, the ice in Antarctica, and so on.

How do they work? The general idea is to stuff as much science as possible into them without making them too slow to use. At the heart of them are basic laws of physics, like Newton’s laws of motion and the laws of thermodynamics. Over the past decades we’ve added more to them: not just physics but also chemistry, such as the reactions between gases in the atmosphere; biological processes, like photosynthesis; and geology, like volcanoes. The most complicated climate models are extremely slow. Even on supercomputers it can take many weeks or months to get the results.
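
To give a flavour of what “stuffing the science in” means, here is a toy example in Python. It is not a climate model in any meaningful sense, just the planet’s energy budget boiled down to one equation and stepped forward in time, and every number in it is an illustrative round value of mine, not something taken from a real model.

    # A toy zero-dimensional "climate model": the whole planet as one temperature.
    # Energy balance: C * dT/dt = absorbed sunlight - emitted infrared.
    # All numbers are illustrative round values, not those of any real model.

    SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
    S = 1361.0         # sunlight arriving at the top of the atmosphere, W m^-2
    ALBEDO = 0.3       # fraction of sunlight reflected straight back to space
    EPSILON = 0.61     # effective emissivity, a crude stand-in for the greenhouse effect
    C = 4.0e8          # heat capacity of a ~100 m ocean mixed layer, J m^-2 K^-1

    def step(T, dt):
        """Advance the global mean temperature T (in kelvin) by dt seconds."""
        absorbed = S * (1 - ALBEDO) / 4.0     # averaged over the whole sphere
        emitted = EPSILON * SIGMA * T ** 4    # infrared radiated to space
        return T + dt * (absorbed - emitted) / C

    T = 255.0                                 # start the planet off cold
    dt = 86400.0                              # one-day time steps
    for day in range(50 * 365):               # run for fifty model years
        T = step(T, dt)
    print(f"Settled global mean temperature: {T:.1f} K")

Even this toy follows Feynman’s recipe: guess some numbers, compute the consequences, compare with observation. It settles near 288 K, close to the observed global average of about 15 °C. A real climate model does the same kind of thing on millions of grid cells with far more physics.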

Here is a state-of-the-art simulation of the Earth by NASA.

The video shows the simulated patterns of air circulation, such as the northern hemisphere polar jet stream, then patterns of ocean circulation, such as the Gulf Stream. The atmosphere and ocean models used to make this simulation are high resolution: they have a lot of pixels so, just like in a digital camera, they show a lot of detail.

A horizontal slice through this atmosphere model has 360 x 540 pixels, or 0.2 megapixels. That’s about two thirds as many as a VGA display (introduced by IBM in 1987) or one of the earliest consumer digital cameras (the Apple QuickTake, from 1994). It’s also about the same resolution as my blog banner. The ocean model is a lot higher resolution: 1080 x 2160 pixels, or 2.3 megapixels, which is about the same as high definition TV. The video above has had some extra processing to smooth the pixels out and draw the arrows.
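
If you want to check that arithmetic, here is the sum, taking VGA and high definition at their standard sizes of 640 x 480 and 1920 x 1080 (the model grid sizes are as quoted above):

    # Checking the pixel counts quoted above.
    atmosphere = 360 * 540    # 194,400 points  -> ~0.2 megapixels
    ocean = 1080 * 2160       # 2,332,800 points -> ~2.3 megapixels
    vga = 640 * 480           # 307,200 pixels (IBM VGA, 1987)
    hdtv = 1920 * 1080        # 2,073,600 pixels (1080p high definition)

    print(f"Atmosphere: {atmosphere / 1e6:.2f} megapixels, {atmosphere / vga:.0%} of VGA")
    print(f"Ocean: {ocean / 1e6:.2f} megapixels, {ocean / hdtv:.0%} of HD TV")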

I think it’s quite beautiful. It also seems to be very realistic, a convincing argument that we can simulate the Earth successfully. But the important question is: how successfully? This is the subject of my next post:

Can we ever have a perfect “reality simulator”?

The clue’s in the name of the blog…

See you next time.

 

* I use the term climate model broadly here, covering any models that describe part of the planet. Many have more specific names, such as “ice sheet model” for Antarctica.

 

10 comments

  1. Rob Burton

    “It’s not very different to a computer game like Civilisation or SimCity”

    “I think it’s quite beautiful. It also seems to be very realistic, a convincing argument that we can simulate the Earth successfully.”

    Not really different at all, all 3 of the games and GCMs are beautiful, realistic simulations of the Earth. I probably had more fun with the first one though compared to my time with the Reading SGCM ;-)

  2. ikh

    Hi Tamsin,

    Nice post. And as usual you have an interesting writing style. By this I mean that the post reads well (and easily) and you tell a good story. That’s meant to be a compliment. The problem is with what you leave out. And that means IMHO you lack rigor. And I can’t believe that your physics training lacks rigor.

    Let me explain. The first element, the guess, is a mathematical model. The computer model is a way of testing the mathematical model. But you also leave out the most fundamental element, assumption.

    GCMs assume a water vapour feedback amplifying the warming. And yet a lot of the empirical evidence does not support this.

    There was an assumption used in GCMs that particulates from volcanoes could only reach high up (the stratosphere) if it was a major eruption. Now, recent research suggests that even minor eruptions reach the stratosphere.

    It is the lack of rigor in the models that makes skeptics, and the lack of documentation of the assumptions.

    Regards

    /ikh

    • Alexander Harvey

      Hi ikh,

      I don’t think it is correct to suggest that GCMs assume some value for water vapour feedback. I am not sure that to do so would be practical or even meaningful.

      Water vapour feedback in terms of a GCM is an abstraction. Its notional value can be estimated with some difficulty (see Soden & Held papers).

      Insofar as it exists, it arises as the result of an interplay of processes including evaporation from land and sea surfaces, transpiration, transport, cloud formation and precipitation, and radiative transfer, all expressed as a ratio to the Planck feedback (another abstraction), which in turn results from another interplay of processes.

      There is a whole list of abstractions that, in terms of GCMs, are parameters only insofar as they are parameters of a simpler model, for which values may be inferred from the simulated output. They are not input parameters in any sense, nor are their values assumed in the context of a GCM.

      I think that the most common simple model feedbacks fall into this category, they are not GCM parameters.

      Planck feedback
      Cloud feedback
      Water Vapour feedback
      Albedo feedback
      Lapse Rate feedback

      In terms of simpler models these can be parameters, in terms of GCMs they are abstract notions.

      The GCMs do have both parameterisations and discrete parameters which can be altered to some degree, but they are constrained to some degree by their effect with respect to weather and climate (as opposed to climate change). There are limits to how far they can be altered (in order to match the temperature record) without impacting the GCMs’ ability to reproduce weather and climate.

      Obviously the GCMs are not perfect, and they might or might not track the observed record with some desired level of precision. Some weaknesses must certainly be due to inadequate parameterisations and uncertainty in some parameter choices, but that is not the same thing as suggesting that such weaknesses are due to the assumption of certain values for the feedbacks. Values for those feedbacks may be inferred from a GCM’s outputs but are not available as inputs.
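
      As a rough sketch of what “inferring a feedback from the outputs” can look like in practice, here is one common way of doing it, a Gregory-style regression, run on numbers I have invented rather than diagnostics from any particular GCM: after an abrupt forcing is applied, the simulated top-of-atmosphere imbalance is regressed against the simulated warming, and the slope is read off as the net feedback.

          import numpy as np

          # Invented output from an abrupt-forcing experiment, for illustration only.
          F = 3.7                                    # applied forcing, W m^-2
          dT = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # simulated warming, K
          N = np.array([3.1, 2.5, 1.9, 1.2, 0.7])    # simulated TOA imbalance, W m^-2

          # Fit N = intercept + slope * dT; the slope is the diagnosed net feedback.
          slope, intercept = np.polyfit(dT, N, 1)
          print(f"Diagnosed net feedback: {slope:.2f} W m^-2 K^-1")
          print(f"Implied equilibrium warming: {-F / slope:.1f} K")

      The feedback falls out of the fit; nothing of the sort is typed in as an input.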

      Alex

  3. Rob Burton

    “I think that the most common simple model feedbacks fall into this category, they are not GCM parameters.

    Cloud feedback”
    Surely this is parameterised in a GCM, i.e. looking at the tweaked parameters in the climateprediction.net experiment
    http://climateprediction.net/content/parameters
    wouldn’t the rhcrit, entcoef and eacf parameters, for example, affect cloud feedback here?

    “parameterisations and discrete parameters … without impacting the GCMs’ ability to reproduce weather”
    In the same vein, what GCM parameters that may be altered for fit in an experiment would actually significantly change a weather forecast for, say, 2 days’ time? Or did you mean parameters are constrained so that they produce ‘realistic’ weather in the model?

    • Alexander Harvey

      Hi Rob,

      “counters” below gives you most of your answer and I suspect he knows more about these matters than I do.

      If you are interested, and have the time (~60 minutes), there is a video presentation here:

      http://www.newton.ac.uk/programmes/CLP/seminars/091614002.html

      given by Stephen Belcher (now head of the Hadley Centre) back in 2010 on the development of a parameterisation for Langmuir turbulence in the ocean mixed layer, as a work in progress (more than 10 man-years). He presents well (it is technical but it is explained in English plus equations). I think it is a good example, but it is the only such presentation that I know of.

      I think it gives some insight into the roles of the basic physics (e.g. Navier-Stokes), fine-grained simulation of the sub-grid processes, choosing a candidate parameterisation that marries grid-scale processes to those resolved sub-grid processes, the role of observations, and making a convincing case for including the parameterisation into a GCM.

      There is also their paper (Grant and Belcher 2009) that further illustrates this:

      http://journals.ametsoc.org/doi/full/10.1175/2009JPO4119.1

      I do not pretend to have any deep understanding of any of this, but I note that it is pitched at a much deeper level than what is commonly presented in many climate papers. From what I can judge, there is not a lot of wiggle room in this particular parameterisation. There will be discrete parameters and I dare say some of them are open to some adjustment, but there will be constraints based on the sub-grid observation of turbulence (the mechanism), as opposed to the overall effect (more realistic ocean surface mixing, and hence temperatures, at the grid scale).

      That said, climatenet do run a lot of “What if” experiments, explorations of parameter space. So that seemingly contradicts what I have said.

      One of the GCMs they are currently using is HadCM3, which was cutting edge (also the first GCM that was stable without flux adjustments) and contributed to AR3. So we are talking about the same class of AOGCMs, so what I have said should apply.

      To some degree, disturbing the parameters, and hence the physics, gives the experts a much better understanding of the uncertainty inherent in the projections, above and beyond that due to changes in initial conditions (an ensemble of runs with fixed physics). It does allow tuning to fit the observations (picking a sweet spot), but as a counter-balance it allows all the less sweet but still plausible physics to contribute to our uncertainty in the output. Perhaps, in an ideal world, only the best physics as supported by direct observations of the underlying processes would be used in long range climate GCMs, and there would be no tuning to the long term record. Even were it the case, that possibility should I think be discounted lest an unintended Darwinian evolution of the models to match the historical record has occurred. Once discounted, we do have the “What if” exploration of the parameter space to keep us on the straight and narrow, to make sure we are not fooling ourselves into becoming overly confident in the projections.
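
      A toy version of such an exploration of parameter space, using nothing but a one-line energy balance (the parameter ranges are invented for illustration and have nothing to do with HadCM3’s actual parameters), might look like:

          import random

          # Toy "What if" ensemble: perturb two uncertain numbers in a one-line
          # energy balance and look at the spread of outcomes.  The ranges are
          # invented; real experiments perturb dozens of parameters in a full GCM.

          SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
          S = 1361.0         # solar constant, W m^-2

          def equilibrium_temp(albedo, emissivity):
              """Temperature at which absorbed sunlight balances emitted infrared."""
              return (S * (1 - albedo) / (4 * emissivity * SIGMA)) ** 0.25

          random.seed(1)
          temps = [equilibrium_temp(random.uniform(0.28, 0.32),
                                    random.uniform(0.58, 0.64))
                   for _ in range(1000)]
          print(f"Ensemble spread: {min(temps):.1f} K to {max(temps):.1f} K")

      It is the spread of outcomes across the ensemble, rather than any single run, that informs our confidence in the projections.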

      If the work on uncertainty now underway could have been done prior to 2000, which it couldn’t (climatenet plans 10,000 runs of HadCM3 for just one experiment), I believe we would have coped with the post-2000 variation in the trend more elegantly.

      Getting back to your point (at last):

      I shied away from describing the feedbacks as emergent properties of the GCMs; it is true that they are neither contained in any of the physics nor are they prescribed as parameters, but they may still have some intuitive relationship to actual model parameters. They are not quite as surprising as emergent properties commonly are.

      An expert might be able to adjust a parameter that is observationally uncertain and whose effects seem intuitive (perhaps the effective deep ocean diffusivity) to tune a GCM in a predictable manner, but the relationship between many of the actual parameters (and their interactions) and concepts like the feedbacks is deeply obscured. Were it otherwise there would be no need for climatenet.

      Alex

  4. counters

    Rob Burton –

    “wouldn’t the rhcrit, entcoef and eacf parameters, for example, affect cloud feedback here?”

    Yes, they would affect the feedback. But they don’t prescribe it, which is the point that Alex was making.

    “In the same vein, what GCM parameters that may be altered for fit in an experiment would actually significantly change a weather forecast for, say, 2 days’ time? Or did you mean parameters are constrained so that they produce ‘realistic’ weather in the model?”

    Models are “tuned” in a manner of sorts, usually in order for the model’s equilibrium spun-up state to match climatology from pre-1870. This can sometimes involve adjusting certain factors in model parameterizations. But lest this be considered “cheating”, it’s important to mention that no parameterization for convection, or aerosol activation, or optical effects would ever be used in a production GCM without being validated independently. Often, the simple parameterizations are derived straightforwardly from empirical observation or from high-fidelity physics and chemistry models which would just be too complex to incorporate into a GCM.

    A good example – which is very relevant for CMIP5 models – is the “activation” scheme used to connect the model’s aerosol scheme to cloud physics. Several modern climate models cull from the ambient aerosol population to feed a source of CCN to the microphysics package. This allows the simulation of aerosol indirect effects – how changing the aerosol burden can affect clouds, either their climatology or their optical properties. You can easily derive a very detailed microphysics and chemistry model – and validate it in the laboratory – which adequately describes how aerosols activate into CCN. But such a model would be impractical for carrying out long-term climate simulations.
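
    To make that concrete, here is a toy stand-in for an activation scheme, a Twomey-style power law with made-up coefficients rather than the scheme used in any actual CMIP5 model:

        # Toy aerosol "activation": how many particles become cloud condensation
        # nuclei (CCN) at a given supersaturation.  A Twomey-style power law,
        # CCN = c * s**k, standing in for the detailed, independently validated
        # schemes used in real GCMs.  Coefficients are made up for illustration.

        def ccn_activated(supersaturation_pct, c=500.0, k=0.5):
            """Activated CCN per cm^3 for a supersaturation given in percent."""
            return c * supersaturation_pct ** k

        for s in (0.1, 0.2, 0.5, 1.0):
            print(f"s = {s:.1f}%  ->  ~{ccn_activated(s):.0f} CCN per cm^3")

    The real schemes are much more involved, but the division of labour is the same: validate the detailed physics offline, then give the GCM something cheap enough to call at every timestep in every grid cell.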

  5. Rog Tallbloke

    Hi Tamsin, you said:
    “At the heart of them [Climate models] are basic laws of physics, like Newton’s laws of motion and the laws of thermodynamics”.

    I have spoken to many climate scientists who seem to think that at equilibrium, the troposphere would be isothermal, in accordance with the Maxwell-Boltzmann distribution. They argue that there would be no lapse rate without radiation and convection. Do the models make this assumption too?

    If they do, it seems like a grave error to me. The true situation isn’t like Maxwell’s ‘isolated column of gas’ for several reasons. Although it is bounded by Earth and space, its volume is not constrained. NASA has confirmed observationally that since the Sun went quiet in 2004, the thermosphere has shrunk by 30% and the average altitude of the cloud deck has dropped by 30m.

    Gravity acting on the mass of the atmosphere produces a pressure gradient by pulling the gas against the solid Earth. Because air is compressible, that pressure gradient acts to produce a density gradient. Because the molecules nearer the surface are pushed closer together, there are more of them in a given volume. That makes it more likely that water vapour and CO2 molecules in the volume will intercept and absorb photons of incoming solar energy and photons of outgoing long wave radiation.

    Because the path length is short, these energised molecules soon share their extra energy with surrounding molecules of nitrogen and oxygen which make up the bulk of the atmosphere. The ensemble has a heat capacity. Therefore, the denser near surface air will get hotter than the less dense air at altitude.

    Naturally, in accordance with the gas laws, the warmer air nearer the surface will have been expanded by the higher temperature and its density thus lowered again, but this won’t fully offset the effect due to gravity acting on mass to raise pressure and density to form a gradient. (We should calculate the value).

    So even before we consider convection and radiation, there will be a ‘pre-existing’ lapse rate due to these simple thermodynamic-gravitational considerations.

    Is it included in the models?

    [Sorry this comment got lost - was swamped by spam - think this is the same as your comment at PLOS but will add here anyway -- Tamsin]

  6. Frank

    The video from Feynman illustrates one aspect of why climate science done with ensembles of models is not the kind of science Feynman taught, and from some philosophical points of view is not science at all. “If it disagrees with experiment” says it all. You can’t conduct any experiments that determine whether the output from an ensemble of models is right or wrong. The large range of uncertainty in the predictions from ensembles where both parameters and starting conditions are varied suggests that all comparisons with observations will appear correct, but none will effectively test the ensemble. In other videos, you can find Feynman bragging about the agreement between QED and experimental observations to about 10 significant digits. The fact that QED survived such stringent experimental tests tells us QED is a very useful theory.

    Your other posts that discuss how to present this uncertainty (to other scientists and policymakers) hint that you may have an agenda about the conclusions you want others to derive from these presentations. And that leads to a politicization of science.