Limitless possibilities

Mark Maslin and Patrick Austin at University College London have just had a comment published in Nature called “Climate models at their limit?”. This builds on the emerging evidence that the latest, greatest climate predictions, which will be summarised in the next assessment report of the Intergovernmental Panel on Climate Change (IPCC AR5, 2013), are not going to tell us anything too different from the last report (AR4, 2007), and in fact may have larger uncertainty ranges.

I’d like to discuss some of the climate modelling issues they cover. I agree with much of what they say, but not all…

1. Models are always wrong

Why do models have a limited capability to predict the future? First of all, they are not reality….models cannot capture all the factors involved in a natural system, and those that they do capture are often incompletely understood.

A beginning after my own heart! This is the most important starting point for discussing uncertainty about the future.

Climate modellers, like any other modellers, are usually well aware of the limits of their simulators*. The George Box quote from which this blog is named is frequently quoted in climate talks and lectures. But sometimes simulators are implicitly treated as if they were reality: this happens when a climate modeller has made no attempt to quantify how wrong their simulator is, or does not know how to, or does not have the computing power to try out different possibilities and so throws their hands up in the air. Or perhaps their scientific interest is really in testing how the simulator behaves, not in making predictions.

For whatever reason, this important distinction might be temporarily set aside. The danger of this is memorably described by Jonty Rougier and Michel Crucifix**:

One hears “assuming that the simulator is correct” quite frequently in verbal presentations, or perceives the presenter sliding into this mindset. This is so obviously a fallacy that he might as well have said “assuming that the currency of the US is the jam doughnut.”

Models are always wrong, but what is more important is to know how wrong they are: to have a good estimate of the uncertainty about the prediction. Mark and Patrick explain that our uncertainties are so large because climate prediction is a chain of very many links. The results of global simulators are fed into regional simulators (for example, covering only Europe), and the results of these are fed into another set of simulators to predict the impacts of climate change on sea level, or crops, or humans. At each stage in the chain the range of possibilities branches out like a tree: there are many global and regional climate simulators, and several different simulators of impacts, and each simulator may be used to make multiple predictions if they have parameters (which can be thought of as “control dials”) for which the best settings are not known. And all of this is repeated for several different “possible futures” of greenhouse gas emissions, in the hope of distinguishing the effect of different actions.
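It is worth seeing in numbers how quickly that tree branches out. Here is a toy count in Python; every figure below is invented purely for illustration, not a real ensemble size:

```python
# Toy count of the branches in a climate prediction chain.
# Every number here is made up for illustration only.
emission_scenarios = 4    # "possible futures" of greenhouse gas emissions
global_simulators = 20    # GCMs
regional_simulators = 5   # e.g. Europe-only simulators fed by each GCM
impact_simulators = 3     # sea level, crops, human impacts
parameter_settings = 10   # "control dial" settings tried per simulator

chains = (emission_scenarios * global_simulators * regional_simulators
          * impact_simulators * parameter_settings)
print(chains)  # 12000 leaves on the tree
```

Even with these modest made-up numbers the chain has 12,000 distinct ends, which is why the range of possibilities fans out so quickly at each link.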

2. Models are improving

“The climate models…being used in the IPCC’s fifth assessment make fewer assumptions than those from the last assessment…. Many of them contain interactive carbon cycles, better representations of aerosols and atmospheric chemistry and a small improvement in spatial resolution.”

Computers are getting faster. Climate scientists are getting a better understanding of the different physical, chemical and biological processes that govern our climate and the impacts of climate change, like the carbon cycle or the response of ice in Greenland and Antarctica to changes in the atmosphere and oceans. So there has been a fairly steady increase in resolution***, in how many processes are included, and in how well those processes are represented. In many ways this is closing the gap between simulators and reality. This is illustrated well in weather forecasting: if only they had a resolution of 1km instead of 12km, the UK Met Office might have predicted the Boscastle flood in 2004 (page 2 of this presentation).

But the other side of the coin is, of course, the “unknown unknowns” that become “known unknowns”. The things we hadn’t thought of. New understanding that leads to an increase in uncertainty because the earlier estimates were too small.

Climate simulators are slow: it can take a day to simulate two or three model years, and several months to complete a long simulation. So modellers and their funders must decide where to spend their money: higher resolution, more processes, or more replications (such as different parameter settings). Many of those of us who spend our working hours, and other hours, thinking about uncertainty strongly believe the climate modelling community must not put resolution and processes (to improve the simulator) above generating multiple predictions (to improve our estimates of how wrong the simulator is). Jonty and Michel again make this case**:

Imagine being summoned back in the year 2020, to re-assess your uncertainties in the light of eight years of climate science progress. Would you be saying to yourself, “Yes, what I really need is an ad hoc ensemble of about 30 high-resolution simulator runs, slightly higher than today’s resolution.” Let’s hope so, because right now, that’s what you are going to get.

But we think you’d be saying, “What I need is a designed ensemble, constructed to explore the range of possible climate outcomes, through systematically varying those features of the climate simulator that are currently ill-constrained, such as the simulator parameters, and by trying out alternative modules with qualitatively different characteristics.”
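A minimal sketch of what a “designed ensemble” could look like: a Latin hypercube design spreads a fixed budget of runs evenly across the ill-constrained parameters, so no part of any “control dial” range is left unexplored. The run and parameter counts below are invented, and real ensemble designs use more sophisticated tools:

```python
import random

def latin_hypercube(n_runs, n_params, seed=0):
    """Pick one point in each of n_runs equal strata for every parameter,
    so the whole range of each 'control dial' is covered by the ensemble."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_params):
        strata = list(range(n_runs))
        rng.shuffle(strata)               # pair strata differently per parameter
        columns.append([(s + rng.random()) / n_runs for s in strata])
    return list(zip(*columns))            # one row of settings per simulator run

# e.g. a budget of 30 runs, varying 3 ill-constrained parameters on [0, 1)
design = latin_hypercube(30, 3)
```

Each of the 30 rows is one simulator run; each parameter takes exactly one value in each of the 30 equal slices of its range.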

Higher resolution and better processes might close the gap between the simulator and reality, but if it means you can only afford the computing power to run one simulation then you are blind as to how small or large that gap may be. Two examples of projects that do place great importance on multiple replications and uncertainty are the UK Climate Projections and

3. Models agree with each other

None of this means that climate models are useless….Their vision of the future has in some ways been incredibly stable. For example, the predicted rise in global temperature for a doubling of CO2 in the atmosphere hasn’t changed much in more than 20 years.

This is the part of the modelling section I disagree with. Mark and Patrick argue that consistency in predictions through the history of climate science (such as the estimates of climate sensitivity in the figure below) is an argument for greater confidence in the models. Of course inconsistency would be a pointer to potential problems. If changing the resolution or adding processes to a GCM wildly changed the results in unexpected ways, we might worry about whether they were reliable.

But consistency is only necessary, not sufficient, to give us confidence. Does agreement imply correctness? I think instinctively most of us would say no. The majority of my friends might have thought the Manic Street Preachers were a good band, but it doesn’t mean they were right.

In my work with Jonty and Mat Collins, we try to quantify how similar a collection of simulators are to reality. This is represented by a number we call ‘kappa’, which we estimate by comparing simulations of past climate to reconstructions based on proxies like pollen. If kappa equals one, then reality is essentially indistinguishable from the simulators. If kappa is greater than one, then it means the simulators are more like each other than they are like reality. And our estimates of kappa so far? Are all greater than one. Sometimes substantially.
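A loud caveat before the sketch below: the real kappa comes from a proper statistical framework, and this is not the calculation in our papers. But a toy ratio, on invented data, captures the flavour of “simulators more like each other than like reality”:

```python
import math

def rms_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Invented "past climate" values at three sites: three simulators that
# agree closely with each other, and a proxy-based reconstruction.
simulators = [[1.0, 2.0, 3.0], [1.1, 2.1, 3.1], [0.9, 1.9, 2.9]]
reconstruction = [2.0, 3.5, 5.0]

pairs = [(i, j) for i in range(len(simulators))
         for j in range(i + 1, len(simulators))]
spread = sum(rms_distance(simulators[i], simulators[j])
             for i, j in pairs) / len(pairs)       # simulator-to-simulator
misfit = sum(rms_distance(s, reconstruction)
             for s in simulators) / len(simulators)  # simulator-to-"reality"

kappa_like = misfit / spread  # > 1: closer to each other than to reality
```

With these invented numbers the simulators huddle together far from the reconstruction, so the ratio comes out well above one.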

The authors do make a related point earlier in the article:

Paul Valdes of Bristol University, UK, argues that climate models are too stable, built to ‘not fail’ rather than to simulate abrupt climate change.

Many of the palaeoclimate studies by BRIDGE (one of my research groups) and others show that simulators do not respond much to change when compared with reconstructions of the past. They are sluggish, and stable, and not easily moved from the present-day climate. This could mean that they are underestimating future climate change.

In any case, neither sense of the word ‘stability’ – consistency of model predictions, or the degree to which a simulator reacts to being prodded – is a good indicator of model reliability.

Apart from all this, the climate sensitivity estimates (as shown in their Figure) mostly have large ranges, so I would argue that in this case consistency does not mean much…

Figure 1 from Maslin and Austin (2012), Nature.

Warning: here be opinions

Despite the uncertainty, the weight of scientific evidence is enough to tell us what we need to know. We need governments to go ahead and act…We do not need to demand impossible levels of certainty from models to work towards a better, safer future.

This being a science and not a policy blog, I’m not keen to discuss this last part of the article and would prefer your comments below not to be dominated by this either. I would only like to point out, to those who have not heard of them, the existence (or concept) of “no-regrets” and “low-regrets” options. Chapter 6 of the IPCC Special Report on ‘Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation’ (SREX) describes them:

Options that are known as ‘no regrets’ and ‘low regrets’ provide benefits under any range of climate change scenarios…and are recommended when uncertainties over future climate change directions and impacts are high.

Many of these low-regrets strategies produce co-benefits; help address other development goals, such as improvements in livelihoods, human well-being, and biodiversity conservation; and help minimize the scope for maladaptation.

No-one could argue against the aim of a better, safer future. Only (and endlessly) about the way we get there. Again I ask: please try to stay on-topic and discuss science below the line.

Update 14/6/12: The book editors are happy for Jonty to make their draft chapter public:


*I try to use ‘simulator’, because it is a more specific word than ‘model’. I will also refer to climate simulators by their most commonly-used name: GCMs, for General Circulation Models.

**”Uncertainty in climate science and climate policy”, chapter contributed to “Conceptual Issues in Climate Modeling”, Chicago University Press, E. Winsberg and L. Lloyd eds, forthcoming 2013. See link above.

***Just like the number of pixels of a digital camera, the resolution of a simulator is how much detail it can ‘see’. In the climate simulator I use, HadCM3, the pixels are about 300km across, so the UK is made of just a few. In weather simulators, the pixels are approaching 1km in size.




  1. Ed Hawkins

    Hi Tamsin,
    I think I agree with most of what you say. I’m glad they have put together that figure (but there must be more estimates than that?!). I have wondered what such a figure would look like for climate sensitivity as there are other classic examples, such as estimates of the speed of light, where consistency can mean very little. Specifically, this one for an astronomical quantity – the Hubble constant – which measures the expansion rate of the Universe:
    Having said that, I don’t think our estimates of climate sensitivity are a factor of 10 wrong!

    But I think there is plenty more discussion to be had over experimental ensemble design. For example, why do we run our complex climate models with detailed scenarios all the way out to 2300? Surely we would learn more of relevance by running more ensemble members out to 2100 (or even 2050).


    • Tamsin Edwards

      Ed, thanks for breaking the silence! I’ve just added a link at the end of the post to Jonty and Michel’s chapter (which I love).

      About climate sensitivity – I failed to define it, or to link to our 2007 review paper – darn! But for those who missed it, I do both in my previous post. The review paper has other estimates in it (separated into those based on modern climate and those based on palaeo).

      That’s a great post you link to. I went to a talk last month by Jim Berger (Duke University) on “Reproducibility of Science: P-values and Multiplicity” and he showed several examples like this from particle physics. Estimates of particle masses and so on. Of course the answers are already “known” – you can look them up in a little book – and my friend Emily and I were always talking about how people looked for mistakes in their analysis when their latest estimate disagreed with the Particle Data Group book, and stopped looking when they didn’t… Throw in the fact that particle physics (and astrophysics) is often done by the same group of people for decades, due to the small number of experiments (of course there is throughput, but they are trained by the same leaders) and you find that they could be quite vulnerable to the kind of thing your graph shows.

      I agree with your point about ensemble design.

      • Oliver K. Manuel

        Hi Tamsin,

        The title of your blog, “All Models Are Wrong”, describes well my conclusion as a not-very-bright – but darn stubborn – experimentalist in nuclear and space studies since 1960.

        Fear of also being destroyed by the “nuclear fires” that consumed Hiroshima and Nagasaki on 6 Aug 1945 and 9 Aug 1945, respectively, compelled world leaders to:

        a.) Establish the United Nations on 24 Oct 1945, and to

        b.) Guide government science to confirm approved models of energy stored in the cores of atoms [1] and stars [2] after 1945 through anonymous reviews of research proposals and manuscripts.

        Here’s the rest of the story:

        With kind regards,
        Oliver K. Manuel
        Former NASA Principal
        Investigator for Apollo

        1. Hideki Yukawa, Introduction to Quantum Mechanics (1946); Introduction to the Theory of Elementary Particles (1948)

        2. Fred Hoyle, “The chemical composition of the stars,” Monthly Notices Royal Astronomical Society 106, 255-59 (1946); “The synthesis of the elements from hydrogen,” Monthly Notices Royal Astronomical Society 106, 343-83 (1946)

  2. Chris Vernon

    Just like the number of pixels of a digital camera, the resolution of a simulator is how much detail it can ‘see’. In the climate simulator I use, HadCM3, the pixels are about 300km across, so the UK is made of just a few. In weather simulators, the pixels are approaching 1km in size.

    Nice analogy. One could also say that resolution is a poor metric for assessing the quality of a camera (my 8MP SLR is way better than my 8MP phone). Just as resolution is a marketing tool to sell cameras to non-experts, increasing GCM resolution is a tool to ask policy makers and funding bodies for more resources. As Jonty suggests, I don’t think incrementally increased resolution is very useful.

    …closing the gap between simulators and reality.

    Do we have any good reason to assume that as the representation of the simulator approaches the real world, the accuracy of the output improves proportionally? That’s been the case with relatively simple systems/models, but does it still hold for a GCM?

    • Tamsin Edwards

      Thanks. I agree – a few pages on in that presentation it states in big letters “It is not just about resolution”.

      I don’t know about proportionally. I know there are cases where improved resolution and processes have made a model worse in some respects – you probably already know that the UK Met Office model “lost” El Nino-Southern Oscillation when they made major upgrades (HadCM3 to HadGEM1 – it’s back now). Weather forecasting skill has improved a lot, but much of that is better assimilation of observations. Someone more involved in GCM development might like to comment.

    • mrsean2k

      (@tamsin – great post incidentally)

      Chris, I’ve wondered the same thing. After all, we can construct ever more sophisticated tools for performing a task, marvel at the engineering, skill and dedication involved in constructing it, learn valuable lessons from the construction process itself, and yet still be better off with a simpler approach.

      On a slightly more apposite note, look at the example of the North Carolina Senate who are eschewing modelled projections of sea-level rise in favour of much simpler projections based on past empirical data.

      This gives fodder to people claiming they are “anti-science”, but in truth they have adopted a predictive model – albeit a very simple one. And it’s one that’s capable of being tested for relative accuracy against the method they rejected.

      • Tamsin Edwards

        Hi mrsean2k,

        Thanks 🙂

        I think it’s important to have models across the spectrum of complexity. Michel and Jonty work with an extremely simple representation of ice age cycles (e.g. here). Andy Ridgwell, Neil Edwards and many others use the intermediate complexity model “GENIE” (e.g. here).

        Choosing a particular model, such as linear extrapolation of past data, is OK but legislating it is not…

  3. Maurizio Morabito

    A great post, and I keep wondering whether, if all of this had been said five or ten years ago, we would not now be staring at a complete impasse in policymaking. That said, the Nature authors’ foray into policymaking is indeed less than helpful and extremely naïve. If they can determine what amount of uncertainty is acceptable for drawing up policies, then I am a climate simulator expert too. Horses for courses: simulator experts are not for policies.

    And talking of simulators, here’s an interesting thing that appears to be happening in Civilisation 2, and might be further indication that simulations on very long time spans are meaningless.

  4. Roger Longstaff

    For some time I have been unable to understand how the UK Met Office models can predict climatic conditions decades in the future, when they are admitted to be of “low skill” (by Richard Betts) in predicting the weather / climate over periods only weeks or months in the future. I asked Richard for references to the modelling procedures and he kindly provided several on various threads at Bishop Hill. This raised more questions and I finally emailed the Met Office with the following:

    “I would be grateful if you could let me know if you think that it is reasonable practice to use numerical models for multi-decadal forecasts that:

    A. Use low pass filters “chosen to preserve only the decadal and longer components of the variability” (quote from page 21), and,

    B. Accommodate errors that cause instabilities “by making periodic corrections, rather than by fixing the underlying routines” (quote from page 7).”

    The Met Office kindly replied with the following answers:

    “(A) The models themselves do not use low pass filters. Indeed they simulate weather on timescales of minutes. Low pass filters are used to analyse the model output in order to focus on timescales of interest.

    (B) The mathematical equations describing the evolution of the atmosphere and oceans cannot be solved analytically. Instead they must be solved numerically using computers. This process frequently involves a compromise between accuracy and stability. During model development much research is undertaken to find the numerical schemes that provide the best accuracy whilst minimising instabilities. At the Met Office the same numerical schemes are employed for weather and climate predictions, and the performance of the forecasts are continuously assessed across a range of timescales from days to decades.”

    I am still struggling to find an answer to my original question, and perhaps my questions to the Met Office were the wrong ones. So here is the question again – how can numerical models be “low skill” over short timescales but accurate enough to justify massive intervention over multi-decadal timescales?

    • Tamsin Edwards

      Hi Roger,

      The difference is that the same model is used to do different things.

      Weather forecasts are trying to predict a trajectory, a particular “path” of the atmosphere through time. What will be the temperature on Tuesday? then Wednesday? and so on. Chaos limits the skill of this to a couple of weeks.

      Climate projections are trying to predict a distribution, a set of frequencies: an analogy could be the range of the wiggles in the path, and which are the locations most often visited. What is the most likely temperature of a day in June? How often do temperatures go above 30degC? And these predictions are made for several different “possible futures” of greenhouse gas concentrations, which is why if we are being pedantic we call each of these projections (“what would happen if…”) rather than predictions (“what will happen…”).

      “Seasonal” predictions are somewhere between the two. They predict distributions of weather (such as hurricanes) that are related to other parts of the climate that change slowly (from weeks to years), like ocean temperatures or the El Nino Southern Oscillation.
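      A toy chaotic system shows how both things can be true at once. The sketch below uses the logistic map as a stand-in for the atmosphere (it is nothing like a real GCM): two trajectories from almost identical starting points soon disagree completely, yet their long-run statistics barely differ.

```python
def step(x, r=3.9):            # logistic map in its chaotic regime
    return r * x * (1 - x)

def trajectory(x0, n):
    xs, x = [], x0
    for _ in range(n):
        x = step(x)
        xs.append(x)
    return xs

a = trajectory(0.200000, 100_000)
b = trajectory(0.200001, 100_000)   # "weather": a tiny initial difference...

# ...means the two paths soon disagree completely, yet the "climate"
# (how often the system visits a "hot" state, here x > 0.9) barely moves:
frac_a = sum(x > 0.9 for x in a) / len(a)
frac_b = sum(x > 0.9 for x in b) / len(b)
```

      Comparing the two runs step by step (the trajectory) is hopeless after a few dozen steps, but the two exceedance frequencies (the distribution) agree closely.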

      Is that any help? Others may like to add to this.


      • Roger Longstaff

        Hi Tamsin, thanks for the reply, but it does not answer my question.

        My problem is that any filtering (between calculation steps), or “re-setting” of variables to preserve stability, inevitably leads to loss of information about the system.

        Filtering is used to increase signal to noise ratio when there is “a priori” knowledge of the signal (for example in a radio receiver that uses a narrowband filter tuned to a specific frequency). With climate models there is only an assumption that there will be a GHG signal. Furthermore, any low pass filtering will remove information generated by the model itself, as signals must be sampled at twice their highest frequency component (Nyquist theory) in order to preserve information.

        If one were to equate accurate information to Shannon entropy, it seems inevitable that climate models will deviate from reality exponentially with respect to time, as a consequence of the logarithmic nature of information theory.

        Is there a flaw in my argument?

        Cheers, Roger

        • Paul S


          On filtering, you were given a clear answer in the Met Office’s reply: the low-pass filter described has nothing to do with the workings of the model as it is running. It’s a technique used to aid analysis of a specific climatic phenomenon after the model run has completed.

          • Paul S


            Well, you could get into a discussion about what filters are appropriate to use for any specific task, but that’s not really relevant here. The important point is that this has nothing to do with the models themselves.

          • Roger Longstaff


            The stated aim of the Met Office models is to study the effects of GHGs on the climate. The MO will not release the code, or the methodology (even though it was paid for by UK taxpayers). We only ever see the “filtered” results of the models’ output. I am already certain that these models cannot produce useful results because they need to be “corrected” when they violate conservation of mass (see point B from my original post). Do you have a reference for the methodology of using low pass filters to “aid analysis”, bearing in mind that there is no a priori knowledge of the signal?

          • Paul S


            We’re veering off-topic here but, as an attempt to bring this to a close, this is the more complete quote from the HadGEM-2ES paper:

            ‘North Atlantic and Pacific patterns of the decadal–centennial variability in the HadGEM2-ES control simulation were derived from a principal component analysis of low-pass filtered simulated annual mean SST data in each basin. The filter half-power timescales were chosen to preserve only the decadal and longer components of the variability. The patterns derived bear a strong resemblance to those seen in observations (Parker et al., 2007).’

            If you follow this to Parker et al. 2007 you’ll find it’s a paper describing certain semi-discrete components of climate variability, PDO (Pacific Decadal Oscillation) being one, derived from SST observations of the real world. These climate components are derived by means of processes involving low-pass filters. The Jones et al. HadGEM-2ES paper uses a low-pass filter in this specific case because they are comparing with observational data derived in a similar manner.
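            For concreteness, here is a minimal sketch of that kind of post-hoc filtering: a centred running mean applied to finished annual-mean output to keep only decadal-and-longer variability. The series is invented, and this is not Met Office code.

```python
import math

def low_pass(series, window=11):
    """Centred running mean: a crude low-pass filter that keeps roughly
    decadal-and-longer variability in annual-mean model output.
    It is applied to the finished output, not inside the running model."""
    half = window // 2
    return [sum(series[i - half:i + half + 1]) / window
            for i in range(half, len(series) - half)]

# Invented annual-mean SSTs: a slow trend plus fast year-to-year wiggles
sst = [0.01 * t + 0.5 * math.sin(2 * math.pi * t / 3.6) for t in range(200)]
smooth = low_pass(sst)   # the trend survives; interannual noise is removed
```

            Nothing about the model run changes; the filter only decides which timescales of the already-computed output you look at.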

          • Roger Longstaff

            Further to our discussion about filtering – it seems you were wrong. The reference provided by Tamsin (p37) states: “Fourier filtering of model fields at each timestep is available as an option. The normal use of this is in global models to reduce noise and instability…”

            Please see my earlier comments.

    • Andrew Richards


      You seem to be asking how it is possible for a model that is low-skill in the short term to be more accurate in the long-term. If your question is more technical than that, I apologise.

      Imagine there is an unknown correct model (CM) of reality (sorry, Tamsin). Our current models could be considered as approximations to the CM, with the difference between a current model and the CM itself modelled as a realisation of some stochastic process.

      Then the question boils down to asking whether it is possible to know little about the trajectory of the realisation of a stochastic process in the short term, but a lot about it in the long run. And this, under suitable assumptions, is what the (functional) law of large numbers gives us. If I toss a coin which has probability of 1/3 of coming down heads and 2/3 tails, I can’t predict what the outcome of the next ten tosses will be, but, with appropriate scaling, I can tell you to a high degree of accuracy what proportion of the tosses will be heads after 1000 tosses.
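      The coin example in a few lines of Python, for concreteness (the seed is arbitrary, chosen only so the sketch is repeatable):

```python
import random

rng = random.Random(42)                 # fixed seed for repeatability
tosses = [rng.random() < 1 / 3 for _ in range(1000)]   # True = heads

next_ten = tosses[:10]                  # no useful skill on any short stretch...
frequency = sum(tosses) / len(tosses)   # ...but the long-run proportion of
                                        # heads settles near 1/3
```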

      This doesn’t mean current climate models have this convergence property. But your question seemed to be about whether it was possible for such a phenomenon to exist, rather than whether it did exist for climate models.

      Apologies if I’m teaching you to suck eggs!


      • G Moran

        Yes, it’s the old initial-value problem vs the boundary-value problem. Except the climate probably doesn’t work like that (Swanson and Tsonis).

        Climate models have yet to demonstrate skill (accurate future regional predictions); yet to hear the MetO’s official proclamations (Vicky Pope has said this a number of times) you would think weather forecasts and future climate predictions operate at approximately the same skill level, and it’s only the seasonal forecasts that elude us; a line which is patently untrue.

      • Sashka

        I don’t think that the law of large numbers helps here. Conceivably, the imperfect model trajectory hangs around a wrong attractor that has different statistics than the one that attracts the trajectories in CM.

  5. Liz

    Imagine being summoned back in the year 2020, to re-assess your uncertainties in the light of eight years of climate science progress. Would you be saying to yourself, “Yes, what I really need is an ad hoc ensemble of about 30 high-resolution simulator runs, slightly higher than today’s resolution.” Let’s hope so, because right now, that’s what you are going to get.

    Perhaps more importantly, for the development of the science, what we’re also going to get is multiple nations with these 30 simulator runs, slightly higher than today’s resolution, using models where no-one really has any idea what the important differences between them are.

    Let’s say that by 2020 the international community had pooled resources for a global supercomputer that ran a single (weather / climate) model that had modular components. We would then have 30 * x simulator runs, at a potentially useful model resolution, as well as a better framework for making improvements and assessing uncertainty.

    Of course there are plenty of political reasons why this would never happen, but surely if it did, we would end up with the ability to predict Boscastle-type events globally, saving tens of thousands of lives, as well as developing a science that is currently held back by computational power?

    (At least this was what Jonny and I decided over a bottle of wine last week)

  6. hunter

    Will the models finally start producing anything of value, or continue churning out GIGO in support of rent-seeking?
    So far not one policy based on the political demands of those who strongly believe in the work product of climate science has worked. CO2 has not been reduced, countries like Japan and Germany are more committed to higher CO2 production, cap and trade is an utter failure, etc. etc. etc.
    Also not one metric of extreme weather has credibly been shown to have changed for the worse. None of the predictions about islands sinking has come true. No field evidence of pH changes in the oceans has been produced, much less damage from pH changes. So one can dance around what models, which have led to useless and worse policies at great cost, might do in 8 short years, but I believe that is avoiding the real issue.

    • Tamsin Edwards

      Hi hunter,

      Whether policies have worked is a separate issue from whether the climate models produce something of value.

      We don’t have enough observations to know very well whether extreme weather is increasing. We are most confident about high temperatures: “It is very likely that there has been…an overall increase in the number of warm days and nights” (p6, SREX Summary for Policymakers). Of course, this also means a reduction in extreme cold. For rainfall and storms, it’s harder to know whether they have become more frequent or stronger.

      I know less about ocean pH, but there are observations showing a decrease in Hawaii. That’s getting off topic really.

      • tonyb

        Tamsin said;

        “We don’t have enough observations to know very well whether extreme weather is increasing.”

        I’m sorry, but I disagree. Our severe weather events are very well documented. Hubert Lamb itemised many back in 1991 in his book “Historic Storms of the North Sea, British Isles and Northwest Europe”, which chronicles storms back to 1509. Numerous other extreme weather events are chronicled in books in such places as the Met Office archive and library. I have read about 100 of them. In addition, events are mentioned in such things as church and manorial records. There is a vast amount of information out there, but historical climatology – based on historic observed records – has taken a back seat to computer modelling and tree rings over the last twenty years.

        • Tamsin Edwards

          Hi Tony,

          I’m talking about direct observations. Trying to estimate return periods, and whether they are changing, for rare events from a ~150 year (aside from the CET) record is tricky, no?

          I agree the kind of documentary evidence you are interested in is useful, of course. For those who would like to know more, this looks like a useful overview: Brázdil et al. (2005) Climatic Change. I think someone talked about it at the SUPRAnet second meeting, but the talk is not listed on the website and the URL for the participants and interests has broken.

          By the way, sorry for not replying to your email from ages ago… But I guess this answers your question of whether this kind of evidence is useful to supplement other sources of information – yes it is, and people already do use it.

  7. Joe's World


    Scientists will be facing a MASSIVE shake-up of people’s confidence in them. The models are based on data in which they hope to find a trend, and try to recreate it to project the future.
    They ignore ALL factors of planetary mechanics. The sun is less active at spewing out mass, which also affects the thickness of our atmosphere. Our atmosphere is constantly losing mass to space, but that is lessened with an active sun. This is the insulation for our planet’s ecosystem.
    Media bias has failed to report events that have been occurring, such as: fruit blossoms being frozen by damaging frost, heavy damaging frost and snow in some areas in the growing season, animal species starving from lack of spring melt, and glacial run-off declining (which some societies need for their water).

    Where in the climate data stream do these events get reported?
    Would not these events show cooling, and could our food system not be in dire straits from frost and snow?

  8. manacker

    A great new post, Tamsin. Thanks for the “heads up” on Judith’s site.

    The Maslin/Austin comment in Nature is unfortunately behind a paywall, but your synopsis tells the story of why the “uncertainties” are so great that the “models are always wrong”.

    Regrettably they are taken as “oracles” or “prophets” by many in the political/scientific climate community.

    But, as you write, they are getting “better” (i.e. “faster”). But does this help if the input data are still unknown?

    The transition of “unknown unknowns” to “known unknowns” may increase the statistical awareness of “uncertainty” – but it does nothing to resolve them.

    I agree fully with you that the fact that “models agree with each other” (i.e. “consistency between models”) means nothing at all. Remember that Ralph Waldo Emerson wrote:

    A foolish consistency is the hobgoblin of little minds…

    An overlapping prediction spread for 2xCO2 ECS between 1 degC and 5 degC doesn’t tell us much, as you write, especially since the source of the input data is either purely theoretical or highly dicey.

    All in all, your post tells us that, even if AR5 is filled with statements of “improved confidence”, etc., it will not have resolved any of the major uncertainties on what makes our planet’s climate behave as it does and how (if at all) human GHG emissions will impact future climate.


  9. A fan of *MORE* discourse

    Here is a brief comment that relates equally to your weblog’s post “All Models are Wrong: Limitless Possibilities” and to Anthony Watts’s present headline post on WUWT titled “WUWT: Climate Models Outperformed by Random Walks.”

    As a warm-up, let’s consider a non-controversial subject: models of turbulent flow over airframes. As we improve the spatial and temporal resolution of airflow simulations, we find that our ability to predict microscopic details of the flow does *not* improve. The reason is simple: the microscopic dynamics are chaotic, such that no dynamical model (however sophisticated) can predict their future evolution with microscopic accuracy.

    Nonetheless, experience shows that fluid dynamical simulations DO successfully predict (typically within errors of order one percent) the flight characteristics that we mainly care about, including (for example) fuel efficiency, stall speeds, and g-limits.

    How does this happen? It happens because the microscopic dynamics are governed by global conservation laws and thermodynamic constraints, chief among them being strict conservation of mass and energy and global increase in entropy. So instant-by-instant, we don’t know whether a Karman vortex will spin off an extended wing-flap, and yet minute-by-minute, we can predict the lift, drag, and glide-path of an airliner with considerable accuracy and confidence.

    As with fluid dynamics, so with climate dynamics. Chaotic fluctuations on continental spatial scales and decadal time scales are difficult to predict with confidence. Yet global climate changes are constrained by strict conservation of mass and energy and global increase in entropy, and thus *CAN* be predicted. So year-by-year, we don’t know whether the local weather will be hot or cold, and yet decade-by-decade, we can predict the warming of the earth, and the rise of the sea, with considerable accuracy and confidence.

    Appreciating this, James Hansen and his colleagues have focussed their predictions on the global energy balance, and in particular, upon sea-level rise as an integrative proxy for that global energy balance. In 2011 they confidently predicted an acceleration in sea-level rise for the coming decade. Hansen’s prediction required a certain measure of scientific boldness, since at the time satellites were showing a pronounced decrease in the sea-level rise-rate.

    In coming years we will see whether Hansen’s global prediction is correct. Supposing that the global prediction *is* proved correct, then the concerns of rational climate-change skeptics will be largely addressed.

    More broadly, it is global conservation of mass and energy and global increase in entropy that explain why simulations of both airflow and climate can be grossly inaccurate on individual space-time gridpoints, yet highly accurate globally.
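    This weather-versus-climate point can be demonstrated with a toy system. Below is a minimal sketch (my own illustration, using the standard Lorenz-63 equations and parameters; it is of course neither an airflow nor a climate simulation): two trajectories started a hair apart diverge pointwise, yet their long-run statistics, pinned down by the attractor, agree closely.

```python
import numpy as np

# Lorenz-63 with the standard parameter values.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def deriv(x, y, z):
    return (SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z)

def rk4_trajectory(x, y, z, dt=0.01, steps=60_000):
    """Integrate with classical RK4; return the z-record after the transient."""
    zs = []
    for _ in range(steps):
        k1 = deriv(x, y, z)
        k2 = deriv(x + dt/2 * k1[0], y + dt/2 * k1[1], z + dt/2 * k1[2])
        k3 = deriv(x + dt/2 * k2[0], y + dt/2 * k2[1], z + dt/2 * k2[2])
        k4 = deriv(x + dt * k3[0], y + dt * k3[1], z + dt * k3[2])
        x += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        z += dt/6 * (k1[2] + 2*k2[2] + 2*k3[2] + k4[2])
        zs.append(z)
    return np.array(zs[10_000:])  # discard the spin-up transient

z1 = rk4_trajectory(1.0, 1.0, 1.0)
z2 = rk4_trajectory(1.0 + 1e-8, 1.0, 1.0)  # tiny initial perturbation

print("max pointwise difference:", np.abs(z1 - z2).max())      # order of the attractor size
print("difference of time means:", abs(z1.mean() - z2.mean())) # small by comparison
```

    The pointwise states decorrelate completely, while the time-averaged statistics of the two runs remain close: the instantaneous detail is unpredictable, the “climate” of the system is not.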


    LOL … for fun, I’ll post this essay also to WUWT. It will be fun to see which forums are mainly interested in rational arguments accompanied by scientific links, versus mainly interested in compiling an ideology-first “enemy list.”   🙂

  10. tonyb

    Tamsin said;

    ‘This is illustrated well in weather forecasting: if only they had a resolution of 1km instead of 12km, the UK Met Office might have predicted the Boscastle flood in 2004 (page 2 of this presentation.)’

    I was very closely involved in this event from the perspective of the Environment Agency. The steep sided nature of the Combes in this part of the world make prediction just about impossible according to the reports I saw. How would a 1km resolution have helped at all? The situation was made worse by the amount of debris that had built up along the river course and a variety of other factors.

    By the way, the Met Office has predicted warmer, wetter winters and hotter, drier summers, and has been doing so for some years. The winters have become drier, with several very cold examples, and the summers miserable. This has hardly been hot and dry, has it? Why were the models so wrong? Is anyone looking at them again?

    I think a very large part of what happens with the weather is concerned with the jet stream: its position, and how long it ‘sticks’ in one place, thereby blocking better weather. If those patterns repeat themselves over a number of years, that becomes ‘climate.’ We have a good example of that with westerly winds, which seem to have phases when they become predominant and others when winds from other directions predominate. That has happened throughout history, as I noted here in my examination of CET back to 1538.

    “Due to its geographical location British weather is often quite mobile and periods of hot, cold, dry or wet weather tend to be relatively short lived. If such events are longer lasting than normal, or interrupted and resumed, that can easily shape the character of a month or a season. Reading the numerous references there is clear evidence of ‘blocking patterns,’ perhaps as the jet stream shifts, or a high pressure takes up residence, feeding in winds from a certain direction which generally shape British weather.”

    The above is collected from many thousands of contemporary observations back to 1538 that I examined at the Met Office archives on the very day you paid a visit there. It’s very difficult to see anything much going on that’s materially different with our climate when you look at historical observations, and the more I read them (and I have now read tens of thousands) the more I believe that AGW is hugely exaggerated and we need to look elsewhere (such as the jet stream) for answers to the constantly changing climate evidenced throughout history. Less modelling and more historical observation would, I feel, be a better use of the climate budgets.

    • Paul S


      How would a 1km resolution have helped at all [for Boscastle flood prediction]?

      Open the pdf link next to Tamsin’s text on the subject and go to slide 2. Note the difference between the 12km and 1km forecasts compared to observations. Of course other local environmental factors are involved but Boscastle doesn’t flood like that every time it rains, so clearly the extreme nature of the precipitation was required for such an event to take place.

      By the way the Met office predicted warmer wetter winters and hotter drier summers

      Could you cite these predictions? Were they seasonal, or multi-decadal climate predictions?

    • Liz

      Tony, I think Tamsin’s statement should have read that a 1km resolution model might have predicted the Boscastle rainfall accumulation, rather than the flood itself. This kind of step change in resolution would no doubt improve the discharge predictions, but especially in small catchments where the hydrological rather than hydraulic processes are more important, there might not be the same magnitude of improvement in the discharge predictions as there are in the rainfall accumulations. I don’t know whether the Environment Agency have run the 1km rainfall through their hydrological models to see how much better their discharge estimates would have been, I probably should do…

  11. hunter

    Thank you for your reply.
    I would suggest that your position on extreme weather is more of a sour-grapes style response.
    The record on extreme weather is pretty well documented and extends back much farther than the last 2 or 3 decades.
    It is flat.
    As to warming, I think it is clear that there is not good data to support the claims of the AGW proponents.
    As to policies, consider this: models permit us to build everything from buildings and roads to spaceships. They permit useful decisions and actions to be taken.
    Climate models have led to many technological and political policies. None of them work well or accomplish the stated goal of reducing CO2.
    Yet we are pressured to implement yet more policies based on these same models.
    I would offer a bit of food for thought: the problem with AGW is not uncertainty at all.
    The problem is unwarranted certainty.
    Just a few areas to consider: the carbon cycle is not well understood. Soil, freshwater systems, and Arctic Ocean phytoplankton are all significant areas I am aware of that I have not seen reconciled with the movement of carbon.
    Are all of the dynamics of the climate well described?
    Climate science admits clouds are not. What else? Why is the failure of the troposphere hot-spot prediction ignored, from what I have read?

  12. Judith Curry

    Hi Tamsin,

    Very nice post, and thanks for making the Rougier and Crucifix paper available.

    In case you missed my post a few months ago, with my talk “What can we learn from climate models?”, I address some of these same issues.

    I am also concerned at looking back from 2020, and seeing that we spent all our chips on increasing model resolution and adding new physics modules (e.g. carbon cycle). I fully agree with the need for a larger and better designed ensemble.

    But I think the problem is really more fundamental, which you are pointing to with your kappa analysis of the consistency of models. I interpret the sluggishness/stiffness of the models in a different way: they do not simulate well the multi-decadal and longer time scales of natural internal variability. This implies that the models are oversensitive to external forcing, with the inference that the models may be OVERestimating future climate change.

    So, how do we investigate the possible problems of undersimulation of natural internal variability and oversensitivity to external forcing? These issues will most likely not be solved by increasing resolution (although increasing ocean resolution might help with the natural internal variability) or by adding/improving the physics modules. No one seems to be asking the question (well, other than myself) as to whether there are fundamental structural errors in the dynamical core. There are two broad issues of concern here as I see it:
    i) The inference seems to be that if the atmospheric model works well for weather prediction, it should be fine for climate simulations. But in climate simulations, the name of the game is water vapor feedback, which is a non-issue in weather models. There are many approximations that have been made in atmospheric models re the inclusion of the water phase. I have argued that we need to go to a multi-fluid formulation to get this right, or at least to investigate the impact of the simplifying assumptions that we have made.
    ii) The other issue of concern is the dynamics of the coupled ocean/atmosphere system. These do not seem to be working correctly, as evidenced by the strongly damped natural internal oscillations at time scales beyond about 30 years. Investigations using lower-order coupled models are needed to get a better handle on this issue.

    With so much invested in the current (expensive) models, I unfortunately see virtually no sign of research into alternative structural forms or into understanding model structural error (not just parameter errors and different physics modules).

    Judy Curry

    • Tamsin Edwards

      Hi Judy,

      I interpret the sluggishness/stiffness of the models in a different way: they do not simulate well multi-decadal and longer time scales of natural internal variability.

      What do you base this interpretation on? (genuine question – I don’t have a power spectrum comparison plot to hand…). I’m no expert on variability, but I understand the models can get at least some 50-100 year modes right (e.g. Atlantic multi-decadal variability, albeit for different reasons). Of course it’s difficult to test long-term modes when the palaeoclimate reconstruction uncertainties are quite large.

      Also – I’m interested in what you mean by exploring structural error but not just with different subcomponents – do you mean starting from scratch?

      • Judith Curry

        Hi Tamsin,

        Based upon what I have seen (and see particularly Fig 9.7 in IPCC AR4), most of the models have wimpy spectra (too little power) beyond about 40 years. There is also a recent (or forthcoming) paper by Lovejoy and Schertzer on this. Relatively little attention has been paid to this aspect of the models, since almost all of the focus has been on the response to external forcing.

        Re model structural error: yes, that would not necessarily imply starting over, but rather building some A/O models (could even be aquaplanet, with minimum physics) to explore the basic model structural form, and the impact of various approximations to the equations, plus new types of formulations such as stochastic ones. We would learn a lot from such an exercise, and this would be a start in terms of characterizing model structural error. I suspect that such an exercise would result in some changes to climate model structural form.

          • David Young

            As to the second paper cited by Judith, it is an observation that has been well known in fluid dynamics for at least 30 years and possibly longer. Generally it is recognized that discrete conservation of mass, momentum, and energy will result in much more accurate results. If one can discretely conserve angular momentum or pitching moment, that is better still. That’s why I was glad to hear Andy Lacis say that the GISS model does conserve these things discretely, i.e., to machine precision, i.e., to 14 digits. If other climate models don’t, they need to get busy and fix it.

            Another example that is even older (40 years) is Chorin’s projection method for incompressible flow, in which at each time step the discrete solution is projected onto the space of discretely divergence-free vector fields. This ensures exact conservation of mass, and it turns out Chorin’s method is indeed more accurate than other methods. You know, many people do treat the atmosphere as incompressible, and the ocean is quite incompressible.

        • BillC

          Tamsin, Judith and Lucia,

          I find Judith’s comment immediately upthread to be compelling, but I have no detailed knowledge of the model dynamical cores to counter a claim that anyone might make that “this has been settled, and it doesn’t matter”. I am, however, very interested in the implications of Judith’s criticism with respect to a statement Isaac Held made on his blog post #25 about relative humidity feedback:

          We want to use a reference response that is physically meaningful in itself — ie, that doesn’t require “feedbacks” to be present to ensure that it remains physically meaningful as climate changes. But specific humidity can’t remain fixed as we cool the climate — the atmosphere would become supersaturated in a lot of places. And this would happen pretty quickly; the amount of cooling at the peak of the last glacial would be more than enough. Why should fixing specific humidity be a useful starting point as we warm but not as we cool the atmosphere? We would have to argue that there is something special about the position of the present climate in the space of climates with different temperatures.

          I don’t know the magnitude of the supersaturation effects, but what if the answer is that fixing specific humidity is more appropriate for both warming and cooling because the quasi-stable humidity levels are dependent to a greater extent on mass and momentum versus temperature than is incorporated into the model cores?

          I am hoping this question can start some kind of discussion though I realize it may be hard to penetrate the noise levels in this discourse.

          P.S. I am going to post this to Isaac Held as well.

          Respectfully, Bill

      • Philip Richens

        Hi Tamsin,

        You mentioned above that you didn’t have a power spectrum comparison plot to hand … Are you able to lay your hands on such a plot for the GCMs used in your work and/or by the Met Office? And if you are, would you mind sharing it here please?

      • Philip Richens

        Hi Tamsin,

        Do you have access to plots of the spectra for the GCMs you use?

        Some GCMs do have spectra that are too weak at the low-frequency end compared with observation. The magnitude of temperature fluctuations simulated by these GCMs decrease with increasing time scale, when according to observations they should increase, at least over climate relevant time scales (30+ years). For my money, this is a significant discrepancy that causes me to question warming-attribution arguments based on comparison between GCMs and observation. However, I don’t know if the criticism is also fair w.r.t. the models used by the Met Office. I’ve asked them this question via email (and have also asked Richard Betts at BH) and here is a summary of the response to date:

        The question is a very reasonable and relevant technical question. I know you don’t work for the Met Office, but nonetheless I imagine that some of the scientists working there are colleagues. Can you help me please by asking them this question on my behalf?
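        For anyone wanting to try the diagnostic itself, here is a sketch of the kind of scaling check in question, run on a synthetic series rather than real GCM output (the series and its exponent are made up purely for illustration): synthesize a record with a known spectral slope beta and recover it from a log-log fit to the periodogram. The same fit could in principle be applied to a control-run temperature series.

```python
import numpy as np

# Sketch of the scaling diagnostic: build a series with a known spectral
# slope beta (S(f) ~ f^-beta), then recover beta from its periodogram.
rng = np.random.default_rng(42)
n = 2**14
beta_true = 0.8

freqs = np.fft.rfftfreq(n, d=1.0)
amplitude = np.zeros_like(freqs)
amplitude[1:] = freqs[1:] ** (-beta_true / 2)   # |F(f)| ~ f^(-beta/2)
phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
series = np.fft.irfft(amplitude * np.exp(1j * phases), n)

# Log-log least-squares fit to the periodogram (Nyquist bin excluded).
power = np.abs(np.fft.rfft(series)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:-1]), np.log(power[1:-1]), 1)
beta_est = -slope
print(f"true beta = {beta_true}, estimated beta = {beta_est:.2f}")
```

        A real series would of course show scatter about the fitted line; the question about the Met Office models amounts to asking what beta their control runs exhibit at the low-frequency end.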

        • Alexander Harvey

          Hi Philip,

          Here are temperature spectra from AR4:

          All of the plots (GCMs and Observations) show higher values at longer periods.

          You wrote:

          “The magnitude of temperature fluctuations simulated by these GCMs decrease with increasing time scale, when according to observations they should increase, at least over climate relevant time scales (30+ years). ”

          Do you recall where you got that impression from?


          • David Young

            Alex, This is a point made by Judith Curry on several of her blog posts. Perhaps she will join in and show the data supporting the claim. You could probably find it on her blog.

        • Philip Richens

          Hi Alex,

          I appreciate your response, thank you very much. Judith Curry mentioned earlier in this thread (@June 14, 2012 – 7:29 pm) a paper by Lovejoy and Schertzer that discusses this. As I understand it, the issue with the comparison in figure 9.7 is that it uses simulations of 20th C climate including all of the estimated forcings. What I am asking for is the behaviour of long control runs of the Met Office models without the forcings or with estimated natural forcings only – the object being to see whether the scaling behaviour of the Met Office models is similar to that described in the paper.

          • Alexander Harvey

            Hi Philip,

            I suspect that I can answer the core issue:

            The variation of amplitude (in the way they have defined it) with period for GCMs performing control runs is unlikely to reproduce their plot for the variation in the observed temperatures or the proxies. E.g. the divergence at multidecadal periods is likely to be correct.

            The point is: why should this not be the case? One has to ask what the original question was to which these GCMs would be the answer. E.g. is a control run likely to reproduce the 20th-century type of variation, or a multi-millennial run to reproduce ice-core records? FWIW there is an approach to this where the question asked is “If these were the local temperatures, what would the proxies look like?”, which can lead to quite a different analysis. Further to that last point, on reading that paper, I didn’t find a statistical model or framework in place, and no structured reasoning about what one should expect or how often the observations or proxies would diverge by chance.

            The paper as a whole is largely (but not completely) an error- and uncertainty-free zone. I must wonder about the uncertainty range for the amplitude-period plot for the multi-proxies, and for the GCMs for that matter.

            The paper doesn’t seem to have graphics for the original GCM and multi-proxy series, which makes it hard to judge how they varied in time and hence what the underlying issue is. For instance, if the GCMs are failing to show a decline in temperatures from 1500-1900, the infamous HS shaft, then I would not be in the least surprised, as I doubt we would know the boundary conditions prevalent during that period.


          • Philip Richens

            Hi Alex,

            Thanks very much for your reply, and I’m pleased you think the divergence is likely to be correct. I also accept that GCMs should not be expected to reproduce the details of observed variations. However, it does seem reasonable to me to expect them to simulate the observed decadal scaling properties, especially if they are to be used for detailed comparison with the instrumental record.

            You may also like to look at this earlier paper,


            It contains references to the data they analyse, as well as to other research in the same area, and a lot more general background.

            Regarding the statistical model and the uncertainty and error analysis, I don’t think I can usefully respond, but I have emailed to Shaun Lovejoy inviting his comments.

          • Alexander Harvey

            Hi Philip,

            Thanks for the link.

            I think that there is a real problem when switching from the standard spectral argument to one based on “fluctuation exponents”. I take the latter to refer to the Hurst exponent H (which is the symbol they use). I think that there has to be an assumption that the series is self-similar, which to me implies that the series as a whole can have a value for H. The 2012 paper states that the series as a whole does not have a value for H. In fact it is sometimes negative (H<0 at intermediate periods), which doesn’t make any sense in terms of a Hurst exponent, which is non-negative. It would, I think, be true that values of beta < 1 might imply a Hurst value < 0 given a belief that beta = 2H + 1, but when that implies H<0 one might query how appropriate that belief is.

            Where H<0 they use a “Haar fluctuation”, which may be novel. It is described as averaged over the ensemble, but there is no further mention of an ensemble. I think that normally (non-Haar) it would be averaged over the series, and would be both positive and considered to be an estimate of H.

            An alternative assumption would be for the series as far back as 10,000 yr BP to be the result of a low-pass filtering, in which case one would expect beta to vary between 0 and 2 and H to be undefined, on the basis that the series is not self-similar. On that basis, except for the effects of the gentle Holocene temperature decline, the GCMs will have the correct type of frequency response: one where the response for periods of 1-10 millennia would be mainly flat, and the band from 10,000 years down to annual (or monthly anomalies) could be seen as one coherent whole and not two differing regimes, as they choose to see it. Viewed as a frequency response, the function never slopes the wrong way, e.g. gets smaller with increasing period.

            I believe it is known that analysis by fluctuations can misinterpret slopes in the data, and from what I see the Haar method actively promotes this issue. If I have it right, it will interpret a slope as a fluctuation that increases with period. Without seeing a plot of the multiproxies, or preferably the data, I cannot say that this is not the case.
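            The slope-versus-fluctuation point is easy to check numerically. Below is a sketch using a simple two-half-means Haar fluctuation (my own minimal definition, which may differ in detail from the paper’s): a pure linear trend registers as a “fluctuation” that grows with scale, while white noise gives fluctuations that shrink with scale.

```python
import numpy as np

def haar_fluctuation(x, scale):
    """Mean absolute Haar fluctuation at the given scale (in samples):
    the difference between the means of the second and first halves of
    each window of length `scale`, averaged over the series."""
    half = scale // 2
    fluctuations = []
    for start in range(0, len(x) - scale + 1, half):
        window = x[start:start + scale]
        fluctuations.append(abs(window[half:].mean() - window[:half].mean()))
    return float(np.mean(fluctuations))

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)

trend = 0.001 * t               # pure linear drift: fluctuation grows with scale
noise = rng.standard_normal(n)  # white noise: fluctuation shrinks with scale

for scale in (8, 64, 512):
    print(scale, haar_fluctuation(trend, scale), haar_fluctuation(noise, scale))
```

            For the linear trend the Haar fluctuation at scale Δt is exactly (slope × Δt/2), so any slope in the data is read as a fluctuation increasing with period, which is the concern raised above.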


          • Philip Richens

            Hi Alex,

            Thanks again for your comments – I’m very pleased to see such thoughtful criticism of this research.

            Let me try to respond to your point about H. In these papers, H doesn’t represent the traditional Hurst exponent, although it is related. Neither the standard Hurst exponent nor simple differencing discloses a decrease as time scale increases, because intuitively it is “hidden” by the larger small-scale fluctuations. If you like, you can find more detail about their use of Haar (including comparison with DFA and spectral techniques) in yet another recent paper,


            BTW, I must apologise, I didn’t intend to drip feed these links: you can access nearly all of this group’s papers going back to 1979 via

            Regarding your point about the possible non-scaling character of the data, scaling has also been noticed using other techniques. Table 3 in the 2011 paper – Low frequency weather and the emergence of the Climate – contains some references and quotes values for beta obtained by other research groups, also in the (1,2) range. As an example, the paper by Ashkenazy and co-authors uses DFA to estimate the value (for times > 1000 yrs).

            Regarding the reconstructions used, they are again referenced in the 2011 paper. Basically I think GRIP, Vostok, and post-2003 multi-proxies, e.g. Ljungqvist.

            My overall impression from all this, is that there is a need to convincingly assess the magnitude of anthropogenic change, but to do this it still remains to separate out the natural effects at the 30+ yr timescale.

          • Shaun Lovejoy

            Thanks Philip for advising me of this exchange. Perhaps I can clarify the main points that we tried to raise about the low frequency variability.

            The first is that a weather/climate dichotomy is empirically untenable; there is an intermediate low-frequency “macroweather” regime (see the new submission: Lovejoy, S., D. Schertzer, 2012: The climate is not what you expect, Bull. Amer. Meteor. Soc. (submitted 6/12)). Without new forcings or couplings, both stochastic (cascade-based) and deterministic models (GCMs) reproduce only weather and macroweather statistics (they do this quite well), not climate statistics. Whereas the fluctuations in the weather regime grow with scale, in the macroweather regime they decrease (a seemingly stable rather than unstable behavior). This is not surprising; there is here a tendency towards the classical “the climate is what you get” idea, except that what you really expect is macroweather and, due to forcings and slow dynamics over scales of 10-30 years and longer, what you get instead is the climate.

            The second point is that all our evidence points to an essentially continuous “background” spectrum up to scales of 50-100 kyr. At this glacial/interglacial scale, fluctuations in temperature are of the order of ±5 K. This doesn’t mean that there are no periodicities (including those due to orbital forcings), but these account for at best a small fraction of the variance. Consequently, at some scale the fluctuations must stop decreasing with scale and start increasing again. All the evidence points to this happening, on average but with large geographical variations, at around 10-30 years.

            Third point. With the exception of sunspot-based (but not 10Be-based) reconstructed solar forcings, the proposed natural forcings (up to about 10 kyr) have fluctuations decreasing in amplitude with scale rather than increasing, so that it is not obvious how they could account for the (increasing) climate-regime fluctuations (see Lovejoy, S., D. Schertzer, 2012: Stochastic and scaling climate sensitivities: solar, volcanic and orbital forcings, Geophys. Res. Lett. 39, L11702, doi:10.1029/2012GL051871).

            Fourth point. We and others have confirmed that several of the forced last-millennium simulations have low-frequency variability that is too low (this includes some new results from the GISS-E model; see the updated version of Lovejoy, S., D. Schertzer, D. Varon, 2012: Do GCMs predict the climate… or low frequency weather?, Geophys. Res. Lett. (submitted, 6/12)).

            These are the reasons why it seems likely that new, uniquely climate, slow dynamical processes will be needed to reproduce the climate regime. Indeed it seems naive to think that at long enough time scales such processes would not eventually become dominant.

    • lucia

      Could you clarify this,

      I have argued that we need to go to a multi-fluid formulation to get this right, or to at least investigate the impact of the simplifying assumptions that we have made.

      What do you mean by “multi-fluid”? You do conserve water and air separately…. don’t you? You do let rain fall down relative to air, right? I’m trying to understand some distinction you are making, since “two-fluid” models are used in multiphase flow a lot. The term “mixture models” is also used and connotes a different method of approximating something. I can never be confident similar terms are used similarly in all sub-fields involving transport– so if you could clarify, that would be nice!

      • Judith Curry

        Hi Lucia, the atmospheric mass continuity equation used in weather and climate models only includes dry air; changes in mass associated with evaporation and precipitation are not included. IMO that is a really egregious approximation for a long-term simulation where the most important effect is water vapor feedback.

        • lucia

          Ahh! Interesting.

          The scaling is so different for engineering and climate problems that I would often have to sit down and do an order of magnitude calculation to figure out if I agree or disagree that failing to account for the effect of humidity or rain drops on the mass in a cell ‘matters’ in the long term. There are quite a few other details I don’t know so I can’t rank the relative impact of different things.

          For example: the ocean/atmosphere is a free surface. Do models let the ocean surface rise and fall due to things like tides or hurricanes? (I’m not so worried about ripples which seem suitable for subgrid parameterizations. But do prevailing winds affect the average height of the surface? What if they fail? Assuming the answer to the latter is “yes”, and the height can change– do models capture that? This is all very hypothetical and I have no idea whether these would matter. They probably wouldn’t matter short term, but might medium or long term.)
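          For what it’s worth, a first-pass order-of-magnitude calculation for the water-mass term might look like this (round textbook numbers, purely illustrative; it says nothing about what any particular model does):

```python
# Back-of-envelope scaling of the atmospheric water-mass cycle,
# using standard round numbers.
precip_per_year_m = 1.0           # global-mean precipitation ~ 1 m of water per year
rho_water = 1000.0                # kg/m^3
column_air_mass = 1.013e5 / 9.81  # surface pressure / g ~ 1.03e4 kg/m^2
precipitable_water = 25.0         # global-mean column water vapour, kg/m^2

daily_flux = precip_per_year_m * rho_water / 365.25  # kg/m^2/day, evap ~ precip
frac_of_column_per_day = daily_flux / column_air_mass
residence_days = precipitable_water / daily_flux

print(f"daily water mass flux ~ {daily_flux:.2f} kg/m^2/day")
print(f"fraction of column mass exchanged per day ~ {frac_of_column_per_day:.1e}")
print(f"water vapour residence time ~ {residence_days:.1f} days")
```

          On these numbers the water cycle moves only a few parts in ten thousand of the column mass per day, which is why the question of whether the neglect “matters” over a century needs exactly this kind of scaling argument before one can judge.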

          • David L. Hagen

            Judy & Tamsin
            See: Sun and Clouds are Sufficient Posted on June 4, 2012 by Willis Eschenbach

            my calculations show that the value of the net sun (solar radiation minus albedo reflections) is quite sufficient to explain both the annual and decadal temperature variations, in both the Northern and Southern Hemispheres, from 1984 to 1997.

            Willis’ findings further suggest that there may be major variations in clouds and thus in atmospheric water. That further emphasizes the need to focus on the failure to control the mass balance of atmospheric water!

          • jim2

            Wind also creates droplets and spray. This will vastly increase the ocean water/air interface.

          • Sashka

            Hi Lucia,

            Q: Do models let the ocean surface rise and fall due to things like tides or hurricanes?
            A: Climate models do not account for tides. The equations are simplified to filter tidal mode out. It’s called rigid lid approximation, I believe.

            Q: But do prevailing winds affect the average height of the surface?
            A: Yes. Winds excite Rossby and Kelvin waves that eventually set the position of free surface.

            Sorry, didn’t understand your other questions.

        • David L. Hagen

          Re:”changes in mass associated with evaporation and precipitation are not included. IMO that is a really egregious approx for a long term simulation where the most important effect is water vapor feedback.”

I’m dumbfounded. In thermo, we had the importance of mass and energy conservation drummed into us.
I agree that NOT including evaporation and precipitation is likely to cause a huge systemic error.

Ferenc Miskolczi took the TIGR radiosonde data and found major discrepancies with the 1976 US Standard Atmosphere. Furthermore, when evaluating the global optical depth, he found effectively NO change, compared to the strong increase expected when applying a feedback amplifier to CO2.

Now part of that could be systemic errors from changing technology. However, it is a major data point to address. Miskolczi’s results suggest that this failure to account for the effect of evaporation and precipitation on atmospheric water content could be the cause of the roughly 2 sigma difference Lucia shows between the IPCC model mean trend (0.2 C/decade) and the actual 32-year temperature trend (0.138 C/decade).

          • Paul Vaughan

            David L. Hagen (June 14, 2012 – 11:41 pm) wrote:
“I’m dumbfounded. […] failure to account for evaporation and precipitation on atmospheric water content […]”

            If what Dr. Curry has written is true, this is terminally grave.

            “Apart from all other reasons, the parameters of the geoid depend on the distribution of water over the planetary surface.” — N.S. Sidorenkov

          • David L. Hagen

            I should clarify that I meant “NOT including mass conservation of atmospheric water with variations in evaporation and precipitation”.
Lack of water mass conservation could be at the root of the variations in clouds/albedo that are not modeled, and are ignored, in GCMs, resulting in an over-sensitivity to CO2 introduced when parameters are adjusted to compensate and fit the data.

        • Paul Vaughan

          Judith Curry (June 14, 2012 – 7:02 pm) wrote:
          “[…] changes in mass associated with evaporation and precipitation are not included.”

          If this statement is false, someone please indicate so right away.
          Otherwise, here is my reaction:

          Profoundly remarkable.

I’ve never paid any attention to the details of climate modeling because the output is so hopelessly far from being consistent with observation… but certainly this detail is worth knowing.

          I don’t think I ever would have imagined such a severe omission.

          I again sternly advise climate scientists to study up (QUICKLY) on Earth Orientation Parameters. With such glaring omissions, there are no acceptable excuses for further delay.

        • Sashka

If that’s the case, how come nobody has looked into it yet? Is it too hard, or are folks too lazy?

        • Alexander Harvey

One centimetre of precipitable water is equivalent to one thousandth of the mass of the column, and similarly one thousandth of its weight, which is equivalent to 1 millibar.

As a global average there is about 100 centimetres of precipitation per annum and the same amount of evaporation. One hour of rain at a moderate to heavy rate, say 1 cm/hr, could only occur on about 100 occasions per year to use up that budget, and each would cause a loss of one thousandth of the mass, or a weight equivalent to 1 millibar of pressure. Obviously prolonged or torrential rain would cause a larger deviation from the mean and have a greater significance, but would occur less frequently.

          Ignoring the mass of precipitable water is a very different thing from ignoring the effects of precipitation and evaporation. On average the atmospheric reservoir of latent heat is ~100 times greater than the reservoir of mechanical energy, e.g. the condensation of 1cm of precipitable water would release an energy equivalent to the mean mechanical energy. I don’t think anyone is suggesting that the GCMs ignore this factor.

I do not know how significant ignoring the loss and gain of mass is in the scheme of things, but it is not equivalent to ignoring the effects of precipitation. It seems obvious that the mass effect would be more important where the amounts and rates of precipitation are greatest, e.g. monsoon rain, and much less significant on the majority of occasions.
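The mass-budget arithmetic above can be checked with a quick back-of-envelope sketch (assumed round numbers for surface pressure and g, not figures from the thread):

```python
# Quick check of the precipitable-water mass budget (assumed round numbers).
g = 9.81                           # m/s^2
p0 = 1.013e5                       # Pa, mean surface pressure
col_mass = p0 / g                  # mass of the air column, ~1.03e4 kg/m^2

water = 10.0                       # kg/m^2 in 1 cm of precipitable water
mass_fraction = water / col_mass   # ~1/1000 of the column mass
dp_mbar = water * g / 100.0        # pressure equivalent: ~1 millibar
```

Both of the quoted figures (one thousandth of the column, about 1 millibar) come out as stated.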


          • Alexander Harvey


Not even the right order of magnitude:

            “e.g. the condensation of 1cm of precipitable water would release an energy equivalent to the mean mechanical energy.”

            should have read:

            “e.g. the condensation of 1cm of precipitable water would release an energy equivalent to ~40 times the mean mechanical energy.”

            I think that is about right.

            It is clear to me that it is the energy and not the mass budget that dominates.
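A rough check of the corrected ~40x figure, assuming a latent heat of condensation of 2.5e6 J/kg and a mean specific kinetic energy of the atmosphere of ~60 J/kg (an often-quoted order of magnitude, not a number from the thread):

```python
# Latent heat of 1 cm precipitable water vs the atmosphere's mean
# kinetic ("mechanical") energy, per unit area (assumed round numbers).
g = 9.81
col_mass = 1.013e5 / g       # kg/m^2 in the air column (~1.03e4)
water = 10.0                 # kg/m^2 in 1 cm of precipitable water
latent = water * 2.5e6       # J/m^2 released on condensation (~2.5e7)
mech = 60.0 * col_mass       # J/m^2 of mean specific kinetic energy (~6e5)
ratio = latent / mech        # ~40
```

On those assumed numbers the corrected factor of ~40 is about right, supporting the point that the energy budget dominates the mass budget.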


          • David L. Hagen

It’s not that precipitation and evaporation are ignored, but that the systemic change in precipitable water has not been adequately accounted for, by failing to close the mass balance on liquid water and/or clouds in the atmosphere. Energy closure is closely related but different.
Nigel Fox of NPL observed that clouds account for 97% of the uncertainty. See his TRUTHS project talk:
TRUTHS: Traceable Radiometry Underpinning Terrestrial- and Helio-Studies: A benchmark mission for Climate and GMES, Dr Nigel Fox, 9 Dec 2010 presentation.
Nigel Fox et al., Accurate radiometry from space: an essential tool for climate studies, 2011, Royal Society paper, etc.
            The cloud uncertainty is so great that even the sign of cloud feedback is not known despite confident assertions of “climate change” (an equivocation for catastrophic anthropogenic global warming). Thus this issue of mass conservation on liquid water is a key component underlying that cloud uncertainty.
Mass conservation of water may be difficult to quantify because the quantities involved are relatively small and hard to measure. However, it can be a critical component driving the cloud models etc.
Note too the different trends in the tropics vs temperate regions, and that the width of the tropics may be changing, etc.

        • David L. Hagen

          Judy & Lucia
Roy Spencer has posted briefly on his recent 1D ocean diffusion model, which conserves energy. He notes that at least 3 IPCC models do NOT conserve energy and consequently produce ocean cooling despite positive radiative forcing!
          JGR Paper Submitted: Modeling Ocean Warming Since 1955 July 18th, 2012

          The 1D model has the advantage that it conserves energy, which apparently is still a problem with the IPCC 3D models which exhibit spurious temperature trends (peer reviewed paper here). Our own analysis has shown that at least 3 of the IPCC models actually produce net (full-depth) ocean cooling despite positive radiative forcing over the 2nd half of the 20th Century.

After all, if a climate model can’t even satisfy the 1st Law of Thermodynamics, and global warming is fundamentally a conservation of energy process (net accumulation of energy leads to warming), how then can 3D models be used to explain or predict climate change?

Links to: Climate Drift in the CMIP3 Models, Alexander Sen Gupta et al., 2011.

  13. David L. Hagen

    Thanks for exploring the “usefulness” of climate models by better understanding their uncertainties or how “wrong” they are. Some thoughts on further uncertainties to explore:

    Empirical climate sensitivity
The ranges shown under “Prediction Stability” underestimate the uncertainty by only showing the high climate sensitivities derived from climate models. Please add the low climate sensitivities evaluated by “climate realists” based on empirical evidence, which range from about 0.4 to 1.2 °C/doubling. See: Climate Sensitivity NIPCC

    Idso, S.B. 1998. CO2-induced global warming: a skeptic’s view of potential climate change. Climate Research 10: 69-82.
    Lindzen, R.S. and Choi, Y.-S. 2009. On the determination of climate feedbacks from ERBE data. Geophysical Research Letters 36: 10.1029/2009GL039628.
    Lindzen, R.S. and Choi, Y.-S. 2011. On the observational determination of climate sensitivity and its implications. Asia-Pacific Journal of Atmospheric Sciences 47: 377-390.
    Scafetta, N. 2012. Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models. Journal of Atmospheric and Solar-Terrestrial Physics: 10.1016/j.jastp.2011.12.005.
    Ludecke, H.-J., Link, R. and Ewert, F.-K. 2011. How natural is the recent centennial warming? An analysis of 2249 surface temperature records. International Journal of Modern Physics C 22: 10.1142/S0129183111016798.

    Random Walks
Note reports finding computer models outperformed by random walks:
Fildes, Robert and Nikolaos Kourentzes (2011), “Validation and Forecasting Accuracy in Models of Climate Change”, International Journal of Forecasting 27: 968-995.

    The climate models, by contrast, got scores ranging from 2.4 to 3.7, indicating a total failure to provide valid forecast information at the regional level, even on long time scales. The authors commented: “This implies that the current [climate] models are ill-suited to localized decadal predictions, even though they are used as inputs for policymaking.”

    Socioeconometric models:
Ross McKitrick observes:

    I keep finding the socioeconomic patterns do a very good job of explaining the patterns of temperature trends over land. In our 2010 paper we showed that the climate models, averaged together, do very poorly, while the socioeconomic data does quite well.

    Ross McKitrick Climate Models versus Reality: Part I
    Note McKitrick’s upcoming papers developing socioeconomic methods.

    Type B “Systemic Bias”
    Almost all climate papers I have read do NOT address systemic bias or Type B errors. See
    NIST Technical Note 1297 (1994 Edition) Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results

    Stochastic variations
    Few models provide enough runs to overcome chaotic variations and obtain statistically significant means. See S. Fred Singer (2011) NIPCC v IPCC Addressing the Disparity between Climate Models and Observations: Testing the Hypothesis of Anthropogenic Global Warming (AGW), at the Majorana Conference in Erice, Sicily

    A synthetic experiment with an unforced 1000-yr control run shows that at least 10 runs are necessary to form a stable asymptotic cumulative ensemble-mean (for a run-length of 40 years) and at least 20 runs for a run-length of 20 years. [But there are no IPCC climate models with more than five runs.]
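The run-count point follows from the roughly sqrt(n) shrinkage of an ensemble mean's spread; a minimal sketch (all numbers invented for illustration):

```python
import random
random.seed(1)

# Spread of an ensemble-mean trend as a function of the number of runs.
sigma = 0.15  # assumed run-to-run spread of a multi-decade trend, C/decade

def mean_spread(n_runs, trials=2000):
    """Standard deviation of the n_runs-member ensemble mean."""
    means = []
    for _ in range(trials):
        runs = [random.gauss(0.2, sigma) for _ in range(n_runs)]
        means.append(sum(runs) / n_runs)
    mu = sum(means) / trials
    return (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

# A 5-run ensemble mean is about twice as noisy as a 20-run one.
s5, s20 = mean_spread(5), mean_spread(20)
```

This is only the statistical side of the argument; chaotic model variability need not be Gaussian, but the sqrt(n) scaling of the mean's spread is the reason 5 runs can be far too few.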

    Stochastic v Deterministic
D. Koutsoyiannis et al. at ITIA show that deterministic models underestimate natural stochastic variations (Hurst-Kolmogorov dynamics):

In particular, we hope to have contributed in showing that current modelling approaches can be dangerous, because, as they are unable to reproduce climatic variability, naturally they hide or underestimate future uncertainty (cf. Koutsoyiannis et al. 2007, Koutsoyiannis 2010).

D. Koutsoyiannis, A. Christofides, A. Efstratiadis, G. G. Anagnostopoulos & N. Mamassis (2011): Scientific dialogue on climate: is it giving black eyes or opening closed eyes? Reply to “A black eye for the Hydrological Sciences Journal” by D. Huard, Hydrological Sciences Journal, 56:7, 1334-1339.

    Solar leading climate
David Stockwell at Niche Modeling: solar accumulation theory.

    Natural vs anthropogenic forcing:
    Compare Nicola Scafetta’s natural cycles vs IPCC anthropogenic forcing.
    Scafetta N., 2012. Testing an astronomically based decadal-scale empirical harmonic climate model versus the IPCC (2007) general circulation climate models. Journal of Atmospheric and Solar-Terrestrial Physics 80, 124-137. DOI: 10.1016/j.jastp.2011.12.005.

    Best wishes on your uncertainty explorations

  14. David L. Hagen

Speaking of errors, see Ross McKitrick’s correction of the error in Weitzman’s Dismal Theorem, which resolves the fat-tail problem underlying “precautionary principle” arguments.


The Weitzman Dismal Theorem (DT) suggests agents today should be willing to pay an unbounded amount to insure against fat-tailed risks of catastrophes such as climate change. The DT has been criticized for its assumption that marginal utility (MU) goes to negative infinity faster than the rate at which the probability of catastrophe goes to zero, and for the absence of learning and optimal policy. Also, it has been pointed out that if transfers to future generations are non-infinitesimal, the insurance pricing kernel must be bounded from above, making the DT rather irrelevant in practice. Herein I present a more basic criticism of the DT having to do with its mathematical derivation. The structure of the model requires use of ln(C) as an approximate measure of the change in consumption in order to introduce an e^x term and thereby put the pricing kernel into the form of a moment generating function. But ln(C) is an inaccurate approximation in the model’s own context. Use of the exact measure completely changes the pricing model such that the resulting insurance contract is plausibly small, and cannot be unbounded regardless of the distribution of the assumed climate sensitivity.

    • Steven Mosher

      David, this is OT. This is a technical discussion of uncertainty in GCMs. You destroy the conversation by littering. Is that your intention?

      • David L. Hagen

No. I was summarizing the major uncertainties involved. I see GCM uncertainty as including the consequences, and thus the DT amplifying the fat tail. With McKitrick’s correction, uncertainties have a much smaller impact.

  15. Tamsin Edwards

    Roger Longstaff @ 3:56 pm

    The MO will not release the code

    Hi Roger,

    This version of the Met Office model (v4.5, HadCM3, 1999) was used for the UK Climate Projections: UM 4.5 code.

[Edit 17:12 – This is also the version we are using in our estimate of climate sensitivity.] Similar versions (such as lower resolution) have also been used. It is available for academic use, subject to signing a licence agreement. This version is much faster than the current generation, so it is still used a lot for large groups (ensembles) of simulations, palaeoclimate studies, and other areas where you need many, or long, simulations. See for example the Met Office and model pages.

    The Unified Model is now up to about version 8.2, I think, for operational weather forecasting. IPCC runs for AR5 were done with v6.6 (HadGEM2-ES).

  16. Paul Matthews

    Judith Curry’s recent post CMIP5 decadal hindcasts and the associated paper by Kim, Webster and Curry (2012) was quite an eye-opener to me as to just how bad the models are. When initialised to some past state, most of the models seemed to quickly drop by about 1 degree to their own preferred state, while others weren’t initialised to the past temperature at all (presumably because of this effect). When this was compensated for, the models on average overestimated 20th century warming by about 50%.

    “Climate modellers, like any other modellers, are usually well aware of the limits of their simulators.”
    I don’t think this is true. In my experience, most people working with complex computer models over-estimate the accuracy of their models and under-appreciate the significance of the simplifications and assumptions and numerical approximations that have been made.

    Finally, when people say the models are good because they agree with each other, or because the predictions today are much the same as what they were 10 years ago, all one can do is laugh, sorry.

  17. Pekka Pirilä

    the IPCC Special Report on ‘Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX)’ describes them:

    Options that are known as ‘no regrets’ and ‘low regrets’ provide benefits under any range of climate change scenarios…and are recommended when uncertainties over future climate change directions and impacts are high.

    Many of these low-regrets strategies produce co-benefits; help address other development goals, such as improvements in livelihoods, human well-being, and biodiversity conservation; and help minimize the scope for maladaptation.

    No-one could argue against the aim of a better, safer future. Only (and endlessly) about the way we get there. Again I ask, please try to stick to on-topic and discuss science below the line.

    Thanks for a very nice posting.

While the above quote is not from the climate science part of your post, it is still from the post. I agree that solutions that can be classified as ‘low regrets’ are a really important possibility. Unfortunately there are not many alternatives that are both low-regrets and efficient. Furthermore, I have my doubts about the co-benefits. I think they are most commonly proposed by people who wish to advance their preferred policies by claiming that those policies belong among the ones with multiple benefits. In most cases they cannot give good evidence to support these claims.

  18. Herman A (Alex) Poope

    Options that are known as ‘no regrets’ and ‘low regrets’ provide benefits under any range of climate change scenarios.
    High CO2 makes green things grow better while using less water.
    Low CO2 makes green things grow less well while using more water.
    Anything that limits CO2 or reduces CO2 is worse for life on earth.
Anything that increases CO2 is better for life on earth.

  19. Ron Manley

It seems to me that by expressing the results of models as anomalies relative to a defined period, the very real differences between models are minimised. When temperatures are expressed in degrees Celsius (relative to zero), the differences between models are of the order of 1.5 °C.

    For simulation of precipitation there is also a large difference, of the order of 100 mm/year.

    In both cases the difference in forcing represented by the difference between the models is large relative to the projected changes in forcing.
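A minimal sketch of the anomaly point (invented numbers): two model series offset by 1.5 °C in absolute terms become indistinguishable once each is expressed as an anomaly from its own baseline:

```python
# Two toy "model" series sharing a warming trend but offset by 1.5 C
# in absolute temperature (all numbers invented for illustration).
years = list(range(2000, 2010))
trend = [0.02 * (y - 2000) for y in years]
model_a = [13.2 + t for t in trend]    # runs cold in absolute terms
model_b = [14.7 + t for t in trend]    # runs warm in absolute terms

# Express each as an anomaly from its OWN baseline (first 5 years).
base_a = sum(model_a[:5]) / 5
base_b = sum(model_b[:5]) / 5
anom_a = [x - base_a for x in model_a]
anom_b = [x - base_b for x in model_b]

abs_gap = max(abs(a - b) for a, b in zip(model_a, model_b))   # 1.5 C
anom_gap = max(abs(a - b) for a, b in zip(anom_a, anom_b))    # ~0
```

The 1.5 °C disagreement in absolute temperature is removed entirely by the anomaly transformation, which is exactly why anomaly plots can make disparate models look alike.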

  20. Peter Mott

I have always wondered what the little blob of rain just north of Harrogate means on the BBC weather maps. I wondered if it was artistic licence, but with ~5 km resolution it could be a prediction. AFAIK you cannot actually get to view these maps except for 30 seconds while the presenter talks, which is a shame. But do you know how the simulator is primed with its data? Is there a metric for how good a weather prediction was?

  21. Joe's World


    Judith happens to be one of the few scientists who actually has a better understanding of our failings with the models.
Many areas have been ignored in generating a viable model; it is a matter of understanding the different processes in effect, from velocity differences to planetary tilting to the angles of the sun's energy. Different factors again are the differences between atmosphere and water, with different time frames due to the density differences.
    What also is making models difficult is our inaccuracy of measuring the different atmospheric pressures as they are based on pressure on water and NOT on the atmospheric gases themselves.
    These add to the mentality of a chaotic system when in actual fact it is a highly complex system.
    Temperature data collecting and averaging was the fools path.

  22. Alexander Harvey

    I ponder the sense in which a simulator could be said to reproduce climate and hence also the nature of climate.

It is written that climate is what you expect and weather is what you get (Mark Twain, Robert Heinlein, et al.).

Alternatively, climate is the average of weather. But the average (expectation) is only the first statistical moment. What about variance, the second moment?

    I would argue that one must consider at least these two. If one wishes to encompass extreme events one needs at least two more.

A knowledge of climate gained by observation gives rise to a goodly amount of predictive power. In many cases a climate-based prediction could outperform a weather prediction produced by a single simulation, providing one tries to predict sufficiently far into the future. But more interesting to know would be how often it would outperform an ensemble weather prediction, on all time scales.

The ensemble does give us both an expectation and a variance. If it works well we might anticipate that the long-run ensemble expectation reproduces climate, in the sense of expected (average) weather, and the long-run ensemble variance the variation from that expectation. E.g. the ensemble mean will reproduce climate once past the threshold where the individual simulations decorrelate from the revealed weather, which is typically after a couple of weeks.

From two weeks out one might anticipate that any single simulation will likely be outperformed by a more naive prediction based on previously observed climate averages. The single simulation is hampered by the now uncorrelated variance it carries. It has the wrong weather and is competing with the climatic average, which carries no variance due to weather. This would be the case even if the simulator were perfectly capable of predicting the weather precisely, if only the initial and boundary conditions were precisely known. However, an ensemble of such simulations with fortuitous variation of initial and boundary conditions might yet outperform a naive prediction based on observed averages.
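The "beaten by climatology once decorrelated" point can be illustrated with a toy AR(1) "weather" process (a minimal sketch; the process, parameters and seed are all invented for illustration):

```python
import random
random.seed(0)

# Toy "weather": an AR(1) process around a fixed climate mean of zero.
def run(n, phi=0.9, sigma=1.0):
    x, xs = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, sigma)
        xs.append(x)
    return xs

n, trials = 200, 400
se_single = se_clim = 0.0
for _ in range(trials):
    truth = run(n)
    forecast = run(n)                  # one simulation from the "same model"
    se_single += (truth[-1] - forecast[-1]) ** 2
    se_clim += (truth[-1] - 0.0) ** 2  # naive forecast: the climate mean

# Once the forecast has decorrelated from the truth, the variance it
# carries counts against it: MSE(single run) ~ 2x MSE(climatology).
ratio = se_single / se_clim
```

The factor of ~2 is the generic penalty for carrying "the wrong weather": the single run adds its own (now uncorrelated) variance on top of the truth's, while the climatological mean adds none.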

If an ensemble of simulations could reproduce weather in the short run and climate in the long run, and slide gracefully between the two, with the ensemble mean moving towards the climatic average whilst its variance expands to the observed climatic variance, we might have about everything achievable.

Perfecting the weather models and running them in large ensembles is an obvious approach to the problem of reproducing climate (but not necessarily climatic changes). I believe that there are considerable practical and theoretical limitations to such an approach. The historical purpose of predicting weather a few days out did not require the simulators to be able to reproduce a realistic climate; the time scales were short. This is changing, helped by the revolution in data collection and assimilation that has taken place which, with improvements to the simulators, has pushed the time scales out to around two weeks for a weather prediction and a little further towards seasonal forecasting. I believe that the distinction between a simulator suitable for medium-range forecasting and one suitable for long-term climate is becoming increasingly blurred, with an exchange of ideas, software and, importantly, people between the two disciplines.

Climate simulation seems, to me at least, to have suffered from the opposite problem, in that it cannot reproduce weather. Weather is a local phenomenon, and even if the amount of rainfall on a particular grid square could be reproduced, its pattern might not be. This could mean representing the rainfall in some average way, which can, I believe, give rise not to rain and shine but to a seasonal drizzle. There is a case study relating to Australia where this was highlighted as an issue when one wishes to predict future hydrology in a hot, dry land. A result is the development of general regional simulations and locally specific rainfall simulations using local knowledge bases.

    We have started to pose the climate simulators a tricky question. Can they make annual to decadal climate forecasts? Can they assimilate initial conditions, projected boundary conditions, with their inherent uncertainties, and produce skillful forecasts?

I think it is likely that they will suffer similar issues to those that burden weather forecasting: that a single simulation, and perhaps small ensembles, once past some time threshold will be outperformed by naive predictions based on observed climate, perhaps including some estimate of persistent trend. Beyond that threshold the synthetic variation will become uncorrelated and count against the simulation, giving an advantage to the naive prediction due to its lack of variance. Put simply, not attempting to reproduce natural variability will trump any attempt to do so beyond the threshold for skillful forecasting by simulation. A related way of looking at this is that, once past that threshold, an individual simulation is likely to be outperformed by its own smooth.

It may be the case that simulators used in large ensembles with perturbations in initial conditions, boundary conditions, and importantly physics, backed up by statistical analysis, can produce forecasts that meet similar idealised criteria to those I mentioned for weather simulators: that the forecasts move gracefully from skillful prediction of foreseeable annual-to-decadal climate, complete with variations that correlate with the revealed truth, to some long-term underlying tendency, whilst the predicted variance expands to match long-term climatic variance.

It might help if I try to describe what such a forecast would look like. Put simply, the forecast would begin with a wiggly signal with little additional variance (small error bars) and progress to a smooth tendency (in the absence of volcanoes) with a lot of variance (large error bars). The revealed reality would be, on average and proportionately, within the error bars. E.g. the temporal development of the predicted mean and variance reflects the accuracy of the prediction at every time scale.

In the above there are assumptions that simulator experiments can be constructed that reproduce climate. My understanding is that we cannot currently construct simulators that are sufficiently ideal to meet our perceived needs, but we may be able to construct simulation experiments that come a lot closer. Perturbed ensembles can give us synthetic climatic means, variances and higher moments, but these are removed from those of our Earthly climate. A hoped-for missing link is an ability to map between the statistics of each, and to do so in a framework that extends to climatic conditions significantly different from those of the well-instrumented period. This seems to be a focus of this blog, my interest in the work of Tamsin et al., and a response to the question: how do we best deploy such resources as we have in an effort to render their totality meaningful, once it is accepted that the problem has proven resistant to ingenious simulation allied with brute force?


      • Alexander Harvey

        Hi Tamsin,

        I am glad that you liked my posting. There is really no need to hurry over a response it is my privileged disposition to enjoy time. Could one imagine Twitter Chess? (I wish I had the sense not to tempt people 🙂 and the wit to suspect it’s with us already 🙁 )

        For what it is worth, I still have a response to “A Sensitive Subject” on the stocks.


  23. clazy8

    Tamsin, in your point 3, you ask, “Does agreement imply precision?” If I may indulge in nitpicking, don’t you mean “accuracy”? Indeed, although agreement says nothing about the precision of any single measurement, it does represent the “precision” of an average of measurements, no? And in the same way that statistical agreement is analogous to measurement precision, statistical validity is analogous to measurement accuracy.
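A toy numerical illustration of the distinction being discussed (invented numbers): a set of estimates can agree closely with each other (high precision) while sharing a common bias (poor accuracy):

```python
# Five estimates cluster tightly near 4.0 (they "agree"), but the true
# value is 3.0, so they are precise without being accurate.
truth = 3.0
estimates = [4.01, 3.98, 4.02, 3.99, 4.00]

mean = sum(estimates) / len(estimates)
spread = max(estimates) - min(estimates)  # 0.04: high agreement / precision
bias = mean - truth                       # ~1.0: shared error / poor accuracy
```

Agreement among models constrains the spread, not the bias, which is why agreement alone cannot certify either precision of the ensemble mean as an estimate of reality, or accuracy.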

  24. Anteros

    An interesting post.

I’m very conscious of your wish to avoid policy discussions. Having said that, I hope there is some scope to examine where the edges of ‘scientific evidence’ lie, and the demarcation between fact and value.

    It seems to me that the problem you sum up very elegantly with

    Warning: here be opinions

    is either that opinions have crept into scientific discourse, or that scientific discourse has crept out to opinions.

    Either way, the issues still exist of a) the limits of science and b) the demarcation problem.

I’m not sure that scientists can ever be wholly divorced from their opinions – or values, beliefs, or morals for that matter. However, I think it does science itself a great disservice if the two are ladled up in one big soup. Surely, as far as possible, science should be value-free. I think there is quite a misunderstanding of the nature of scientific processes when people can assert that the “science has spoken” – in the direction of the speaker's values and prejudices.

    So, without commenting on the particular policy preferences [prejudices?] of the authors of the article, I think it would have been vastly improved if the dividing line between science and opinion had been somewhat reinforced.

  25. Anteros

    Tamsin –
    You make this superficially uncontentious statement-

    No-one could argue against the aim of a better, safer future. Only (and endlessly) about the way we get there

I disagree that we only argue about how to get somewhere we have agreed upon as a good destination. I happen to think that making many things ‘safer’ actually makes them worse. If it becomes impossible for us to lose our jobs (i.e. they are ‘safer’), our incentives and rewards are diminished. If a business is assured of never going bankrupt, its vitality is compromised.

    We could make skiing safer by introducing a speed limit, but who would bother to ski?
We could attempt to eradicate heartbreak by making it illegal to end relationships, but it is the risk of our loved ones leaving that makes their presence so magical.
In Gulliver’s Travels, Swift took this idea of ‘safety’ to its limit and showed what it would be like if we could never die: those subjected to this torture wished only for an end to their misery. They wished only for death.

    It is not true that we all agree where we should go – our values and world-views can vary enormously, so in fact almost everything is up for discussion.

I understand and share your view about keeping science scientific, and this little example shows how, beyond the boundaries of the testable and falsifiable, almost anything goes. This is another reason why we should strive to keep science itself value-free, and actually why talking about where science ends and values begin is on-topic for your thread. I know it might not seem it…

    To me there is a connection between this demarcation and the belief – which I think you still hold – that ‘science’ and ‘sceptics’ are in some way opposed. As a scientist and a sceptic I find this very hard to take!

    Judith Curry addresses this in a comment about the recent Adam Corner post, making this point –

    Much of the climate community continues to view AGW skeptics as anti-science……..But none of the academics seem to acknowledge reasoned skepticism (such as described by Geoff Chambers) by knowledgeable and well educated people as having an actual scientific basis; as such, they are “missing the point.”

    And I would add that, after chewing over this ‘disconnect’ for a long time, I believe the problem lies solely with an undemocratically narrow use of ‘sceptic’. I think if somebody has at least one objection to any of the canonical IPCC positions they are likely to be called (or to self-label as) a sceptic. That definitely includes Judith Curry herself (attribution), Richard Tol (cost-benefits of mitigation), Roger Pielke Jr (purported increases in extreme event costs), John Christy and a thousand others (climate sensitivity), and so on.

    Adam Corner seems to hold the narrow view mentioned above, believing that to be a ‘sceptic’ means to disbelieve that GHGs change the radiative properties of the atmosphere. For 99% of sceptics that is simply not true! We are sceptical of one or more, but not necessarily more than one, of the consensus views, many of which are themselves value-laden and non-scientific.

    If the consensus is necessarily narrow, scepticism towards it takes myriad forms. As Judy Curry says, some of those are science-based. Very few of them are ‘anti-science’.

    • BillC


      Your examples are not shining:

      We could make skiing safer (end) by introducing a speed limit (means), but who would bother to ski? (objecting to the means, not the end)

      We could attempt to eradicate heartbreak (end) by making it illegal to end relationships (means), but it is the risk of our loved ones leaving that makes their presence so magical (no friggin clue if I agree or not. you’re the polypsych, carry on!).

      In Gulliver’s Travels, Swift took this idea of ‘safety’ to its limit and showed what it might be like if we could never die – the lives of those subjected to this torture only wished for an end to their misery. They only wished for death.

      If we include “do nothing at all” in the many options Tamsin has stated we can discuss endlessly, would you be happy? And why was July 2004 so cold?

      • Anteros

        BillC –
        I take your point about ends and means, but with safety I think the two are inextricably linked. It is actually the danger of many things that gives them their worth – that makes people feel alive and conscious of the miracle of existence. Therefore the very attempt to make things safer is in opposition to the value of the things we do.

        The future of the planet’s climate is perhaps a different ball-game altogether, although I would say that a huge part of current hysteria about climate change is generated by an emotional disposition (for worry, and imagination of imminent calamities) that I don’t share.

        In a broad sense ‘doing nothing’ may well be what happens; if Kyoto was our best shot and it [see the Hartwell paper] produced ‘no discernible effect’, then I think expectations about dramatic changes in fossil fuel usage over the next century or so are misplaced.

        I may be displaying some obtuseness, but your reference to why July 2004 was so cold leaves me totally flummoxed! Is it code? 🙂

        • BillC


          To your first paragraph – again I plead ignorance.

          To your second paragraph, I mostly agree.

          To your question – no, not code, a lame attempt at comic relief.

          • Anteros

            BillC –

            I did contemplate that it might be, but made the assumption that I was being dense…

            In retrospect, my examples were maybe a poor attempt to make the distinction between science – including Tamsin’s interesting concerns with the whole nature of uncertainty – and things where our values impinge, and our world-views lead us to have different kinds of conversations to scientific ones.

            Tamsin very succinctly said

            Warning: here be opinions

            where there was an obvious (and inappropriate, to my mind) conflation of the two.

            I agree that is an ill-defined area – hence my interest – and scientists have very different takes on it. I’m much more comfortable with, say, Richard Betts making a healthy distinction between science and value, than I am with James Hansen wrapping them up in a dubious mess. I also confess that this is partly because I disagree with Hansen that the world will go to hell in a handbasket unless drastic and immediate action is taken.

    • Steve Bloom

      In my experience most are to one degree or another anti-science. There’s rather a lot of “I don’t like the policy implications of scientific result X, so rather than address those I, not an expert in the relevant field, will dive into and purport to refute the details of the science without first obtaining the background knowledge that will allow me to do so competently.” In any sufficiently complex field of study, smart people with agendas have no trouble finding things to argue about. Tamsin has prohibited the “D” word (while continuing to allow all sorts of similar-grade insults heading the other direction, go figure), but the “S” word is far less appropriate. Maybe another “D” word, delusional, is more apt.

      Your examples of some of the more qualified delusionists are interesting to consider, although I think we can bring them into the big tent by including those who promote delusion.

      Richard Tol: An economist, not a scientist, so I’m not sure how much we need to say about his views. There’s obviously a great diversity of views about the timing of the necessity for adaptation and mitigation, and the balancing thereof, but perhaps it suffices to note that the economists Tol seems to be mainly opposing, Nordhaus and Stern, are themselves criticized by numerous scientists for low-balling the problem.

      RP Jr.: A political scientist, a qualification that speaks for itself. The point that weather extremes and the damage they do are distinct things is hardly original with him, is it? He seems to have passed his sell-by date, though, as we now do seem to be getting a climate-related increase in some metrics of extreme weather (with the models seeming to be a bit behind the curve, an interesting topic that Tamsin could perhaps take up).

      John Christy: OK, an actual climate scientist. But could we perhaps find an example of someone with other than what is arguably the worst research record in the field? I’m not just talking about the egregious series of forced satellite temperature record corrections, BTW, but his other research as well, in particular the papers on the local temperature effect of California central valley irrigation and on Sierra snow trends.

      Judy Curry: Another actual climate scientist. The basic problem seems to be that she’s operating way outside her own expertise, and sails on unperturbed despite the remonstrances of people who have the relevant expertise. Would it be unfair of me to also point to her less-than-stellar publication record (noting in particular the relative lack of sole- and first-author papers)? My relevant first-hand experience with Judy came a few years ago in the comment section of a (IIRC) Collide-a-scape thread, wherein I pointed out to her that low sensitivity is in conflict with the deep-time paleo record and she pointed to Knutti and Hegerl (2008) as being in support of her position. Oops, wrong, and not just a little wrong. At that point I kind of lost patience with her, and her more recent output has given me no reason to attempt to regain it.

  26. Brian H

    Your disinclination to engage in or host a discussion of policy driving climate science is perhaps either naive or disingenuous. If the most robust and predictive hypothesis for explaining the content, prominence, and conclusions of scientific papers in the field is that they are apologia for strong policy action to “mitigate” projected climate disasters, then it is unproductive to discuss and debate anything and everything but that hypothesis.

    In simple terms, don’t kid yourself that the science is central to the dispute about CAGW. No matter if that’s where you’re most “comfortable”.

    As far as your second point, that the models are improving, I think that is just a matter of putting brighter lipstick on the pig. Only other pigs will become aroused.

  27. Dave Springer

    Once the nature of the models is properly understood many of the uncertainties become clear. Only two points need be grasped.

    1. The models are toys as the system being studied is far too complex and far too little is known about initial conditions.

    2. Toys are for children.

    [Hi Dave, given that we do want to study the earth system, it’s more helpful to try and come up with constructive ideas than just throw our hands in the air… – Tamsin]

  28. Brian H

    As far as I can gather, the “increased variance” of the long-term projections will be so great that they are meaningless or useless. That possibility should certainly be allowed for, or else “validation” goes out the window.

    GrammarNasty comment: “if it work [works] well … once passed [past] the threshold”

    • Alexander Harvey


      Regarding “if it works” being correct:

      would that it were.

      I insist that the verb indicate the mood.

      I am not precise with my grammar but I think that “passed” may be a verb in that phrase and short for “it has passed” when I desire it. If it “be” a preposition, I fear the verb “be” missing. Yet it probably is so, when you insist it be so.

      My usage may be archaic, wistful, sometimes amusing, even inappropriate, but there is some method in its madness. It is also past caring.


      “As far as I can gather, the “increased variance” of the long-term projections will be so great that they are meaningless or useless.”

      When will that be? How does unhappened future determine present status?


  29. Tamsin Edwards

    Roger Longstaff @ June 14, 11:06PM

    Roger, you might be interested in this paper by Paul Williams at Reading:

    The Effects of the RAW Filter on the Climatology and Forecast Skill of the SPEEDY Model

    This is about the effect of filtering the calculations to avoid numerical instability (which you are interested in, even though we got distracted by post-processing types of filtering).

    Paul has introduced an improvement to the old type of filter known as RA. He says the RA filter “weakly damps the physical mode, especially at high frequencies. This damping may become important for long integrations”. You can see there is a smoothing parameter. I haven’t looked at the papers he cites but I would imagine they choose this so as to maximise the skill of weather forecasts (rather than obtaining a particular response to GHGs, as you stated).

    With Paul’s new RAW filter “one can minimize the spurious, numerical impacts on the physical solution and obtain the closest match to the exact solution”. He tests this on a simple linear system in a 2009 paper, and on an atmospheric GCM in this paper. The GCM short and medium weather forecasts do improve.
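    For anyone curious about the mechanics, here is a minimal sketch of leapfrog time-stepping with an RA filter (my own toy illustration, not Paul’s code or anything from a GCM). On a simple decay equation, unfiltered leapfrog has a spurious “computational mode” that grows until it swamps the solution; a small amount of filtering suppresses it, at the cost of the weak damping of the physical mode that Paul describes.

```python
import numpy as np

def leapfrog_ra(f, x0, dt, nsteps, nu=0.0):
    """Leapfrog integration of dx/dt = f(x) with an optional
    Robert-Asselin (RA) filter of strength nu (nu=0 disables it)."""
    xs = [float(x0)]
    xs.append(xs[0] + dt * f(xs[0]))              # bootstrap: one Euler step
    for n in range(1, nsteps):
        x_new = xs[n - 1] + 2.0 * dt * f(xs[n])   # raw leapfrog step
        # RA filter: nudge level n towards the mean of its neighbours,
        # damping the spurious computational mode
        xs[n] += nu * (xs[n - 1] - 2.0 * xs[n] + x_new)
        xs.append(x_new)
    return np.array(xs)

# dx/dt = -x, exact solution exp(-t); integrate to t = 20
unfiltered = leapfrog_ra(lambda x: -x, 1.0, 0.01, 2000)[-1]
filtered = leapfrog_ra(lambda x: -x, 1.0, 0.01, 2000, nu=0.05)[-1]
# unfiltered ends up swamped by the growing computational mode;
# filtered stays close to exp(-20)
```

    Running this, the unfiltered end value is dominated by the spurious mode, while the filtered one stays close to the exact solution. Paul’s RAW filter adds a further correction to recover some of the accuracy the RA filter throws away; I haven’t sketched that here.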

    I don’t know much more about this than what I have written here. Paul is quite a busy person (because he’s very smart!) but I expect he would be happy to give you a quick opinion on how much the problems of the RA filter might affect climate simulations (I don’t know how many of the current generation of models use it). Tell him I sent you 🙂

    Hope that helps,


    • Alexander Harvey


      Paul Williams gave this recorded presentation:

      “The importance of numerical time-stepping errors”


      This covers the problem in general and the Robert-Asselin-Williams (RAW) filter in particular. I have watched it twice but forget so many things; the blurb, though, agrees with my recollection.

      Some of the issues highlighted aren’t pretty! There is a definite case to be made for “nice try but could do better”, and he has a proposal to that end, but it seems it ain’t easy to implement.


  30. Sashka

    I am confused as to what you (or they) are referring to as consistency. When the IPCC says that the models predict a sensitivity of 1 to 6 degrees C, is this an example of consistency?

    • Tamsin Edwards

      No, it’s the fact that they’ve estimated more or less the same range for the past few decades. But as this range is quite wide, I’d argue it’s only a weak argument in support of the models.

      • Sashka

        Hm. Will you allow me to extend this argument ad absurdum? Suppose they said the sensitivity is from -10 to +20. Would that also provide a degree of support? How wide, in your opinion, would the “consistent” range need to be for the results to provide a measure of skepticism instead of support?

        • Tamsin Edwards

          Hi Sashka,

          I wasn’t very clear in my comment. Their argument is that climate models are useful because they are stable in their predictions. I say (a) stability is not a guarantee of usefulness and (b) a stable but wide prediction range is not very useful.

          Are you asking how wide the range needs to be before I say that the models are categorically useless? I might say an interval that included both negative and positive values, or an upper bound that was ten times greater than temperature changes we’ve seen in palaeoclimate records for similar CO2 changes (though these are uncertain too).

          But of course the interval has an estimated probability with it. So it would depend how much of the probability they estimated was in the tails. You could have a useless bound of 99.9999999% probability of being between absolute zero and 100degC, but it could be useful if the 67% interval was much narrower.

          Not sure if this comment is any more helpful 🙂

          • Sashka


            Thanks for sharing your views. Qualitatively, I agree with your (a) and (b). Quantitatively, in my not so humble opinion, the ensemble of models is definitely useful if (1) the distribution of model predictions resembles normal; (2) the standard deviation of model predictions is an order of magnitude below its mean.

            If the distribution resembles uniform and the standard deviation is just a little below mean then I consider the models borderline useless.

            Would you agree with this?

  31. Roger Longstaff

    Tamsin & Paul,

    Thank you for your replies and references. It seems that we have established the widespread use of filtering (between computational time steps) in GCMs, and this leads me back to one of my original points. The scalar and vector fields resulting from a calculation time step represent the state of the system generated by the model itself, and any filtering of the data between time steps, without a priori knowledge of the “signal”, inevitably leads to loss of accuracy (where I define accuracy as physical reality, and errors as loss of accurate information).

    Furthermore, errors are cumulative in numerical integration. If we equate accuracy to Shannon entropy, the loss of accuracy (or increase in error) is then exponential with respect to time (as a consequence of the logarithmic nature of information theory) – this seems to be the case with Met Office models that massively deviate from reality after a few weeks. Logically, therefore, it seems inevitable that integrations over decades can produce no useful information.

    Is there a flaw in my argument?

    • Paul Williams


      There are a number of problems with your argument. First, the evidence is that although the RA filter perturbations may affect the evolution of individual weather systems, they do not affect the climate (Amezcua et al.). Relatedly, climate prediction is a boundary-value problem, whereas weather prediction is an initial-value problem. These are very different mathematical problems, where the origin of the predictability has a different source. Finally, climate models that do not employ the RA filter in the atmosphere (e.g. HadCM3) give climate sensitivity ranges that are not systematically different from those employing the filter.


      • Sashka

        Any climate simulation inevitably begins from some initial condition of the model. In this sense there is no difference between climate and weather models.

        But we define climate as a suitable average of weather, so they are averaging over time and over an ensemble of initial states. Thus the source of unpredictability is the same. The averaging could remove the unpredictability, but nobody has ever proved it, AFAIK. I’m not sure whether there is solid science behind generating the ensemble. In theory, it is supposed to remove the dependence on initial value. How well it works in reality I don’t know.
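        Whether averaging really removes the initial-condition dependence can at least be checked in a toy chaotic system. A sketch of my own (the logistic map, nothing from a GCM): two ensembles of near-identical starting points, centred in completely different places, whose members scatter unpredictably but whose ensemble statistics converge to the same invariant distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def ensemble_after(x0, n_members=2000, n_iters=100, spread=1e-8):
    """Iterate the chaotic logistic map x -> 4x(1-x) for an ensemble
    of near-identical initial conditions centred on x0."""
    x = np.clip(x0 + spread * rng.standard_normal(n_members), 0.0, 1.0)
    for _ in range(n_iters):
        x = 4.0 * x * (1.0 - x)
    return x

# two ensembles started in completely different places
a = ensemble_after(0.3)
b = ensemble_after(0.6)
# individual members are unpredictable after a few dozen iterations,
# but both ensembles now sample the map's invariant distribution,
# whose mean is 0.5
```

        Both ensemble means land near 0.5 despite the different starting points. Whether the real atmosphere-ocean system behaves this benignly is, of course, exactly the open question.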

  32. Alexander Harvey


    Even a perfect model – e.g. another Universe – would diverge from this one if it did not share precisely the same state.

    If that other Earth were to differ by one small atmospheric happenstance the weather patterns would likely diverge. Would you wish to imply that nothing about this Earth’s climate could be learned from that other Earth once its weather had diverged?

    I think you need to make a much stronger argument than noting that simulated weather forecasts diverge in order to justify a phrase as strong as “… can produce no useful information”.

    To paraphrase: “… cannot produce one bit of information about anything that could be put to some use”. I hope that is not what you mean.

    I might have some interest in knowing the noon temperatures in Leh during August next year. An ensemble of weather simulations could be produced that gave estimates for the noon temperatures and estimates of the variance from those temperatures.

    Would those estimates be useless?

    That must depend on what is meant by useful. You might begin by characterising “useful information” in some context.


  33. Roger Longstaff


    If we are talking about decadal timescales: “…cannot produce one bit of information about anything that could be put to some use” is EXACTLY what I mean! We are talking about logic and pure mathematics here, it is not a question of semantics.

    The main point that concerns me is that, in the absence of any empirical evidence for anthropogenic global warming, numerical models are being used as the sole justification for policies that are costing us billions and wrecking our economy. What I am trying to show is that climate modelling over decadal timescales is mathematically impossible.

    I must repeat – is there a flaw in my argument, and if so, what is it?

    • Steve Bloom

      “in the absence of any empirical evidence for anthropogenic global warming”

      So what term should we be using for this, Tamsin?

    • Alexander Harvey


      You seem to start from weather model divergence and then assert that climate prediction decades into the future will produce no useful information. You haven’t shown that the first part implies the second part.

      Do you believe that the Earth system has a climate? Some people seem to think that it doesn’t.

      There is a view that the Earth system’s weather does not have a statistical representation – e.g. that it has no long-term mean values, that the expectations do not converge, that the weather is not predictable even in a statistical sense. That may or may not be the case, but it has never been proven to be so, at least not necessarily so on centennial timescales.

      If you believe it does have statistics, you have not shown that mathematical errors in the simulators imply that the statistics produced by the model have no useful relationship whatsoever with the Earth’s climate.

      You haven’t shown that the imperfection results in the simulator being totally useless for estimating the climate, e.g. that it could get nothing useful right about the statistics of the weather some decades out.

      I think that it is trivially true that the simulators get some aspects of the climate broadly correct but perhaps not things that you find useful.

      The assertion you make (no useful information) is a very strong one: it requires that a prediction will have no statistical relationship with the truth or any proxy for the truth. Not that it is a bit wrong or a lot wrong but that it has no relationship with the truth whatsoever.

      You can’t show that based on observation but claim that it is logically provable.

      The basis of that proof stems from computational error but that would be common to all climate predictions twenty years out, which would include things that are trivially true such as the simulators reliably predicting that it is hot in the tropics and cold at the poles.


  34. Roger Longstaff


    “…climate prediction is a boundary-value problem, whereas weather prediction is an initial-value problem. These are very different mathematical problems..” However, the Met Office state that the same models are used in both applications. ANY use of filtering leads to loss of information (and this will be cumulative), and any “resetting” following a “restart” automatically invalidates ALL of the data generated up to that point in the integration.

    Is this not correct?

    • Paul Williams


      The atmospheric and oceanic equations are the same for weather and climate, and hence the models are the same. The difference is the relative importance of initialisation and forcing. I stress that no RA filtering is done by the Met Office in its atmospheric model, either in its weather integrations or climate integrations. I also stress that it is possible (and indeed found) that filtering can affect individual trajectories without affecting the climate attractor.

      I hope that helps to clarify things,

  35. David Young

    I’m glad to see Paul Williams showing up. However, I’m a little disappointed with the gloss concerning initial value problem vs. boundary value problem. Both climate and weather are initial value problems. Climate has variable boundary conditions. Both are subject to the problem of chaos and nonlinear behaviour such as bifurcations. The usual doctrine which Paul invokes in some measure is the doctrine of the attractor, viz., the chaotic details don’t matter because the attractor will “take over.” Paul should correct me if I’m wrong, but I see no theoretical justification for this. It seems to be based on the observation that “whenever I run the model, I get essentially the same statistics.” This is of course circular reasoning. There is no guarantee whatsoever that the attractor will even be correct, given the discretization, subgrid models, etc., etc., etc.

    Don’t get me started on subgrid models, which are another source of error that can have big effects on the resolved scales. This is well known in fluid dynamics where the problem is dramatically simpler. The usual turbulence models for example are known to be badly wrong even in simple separated flow situations. This is documented for example in the NASA drag prediction workshops over the last decade.

    Further, Paul, I’m assuming climate is not a stationary problem, so that the whole argument of Reynolds’ averaging has a term that is not modeled. Certainly it’s usually unmodeled in fluid dynamics.

    Anyway, I’m glad to see Paul posting here as his work on time stepping is about the only attempt I know of to actually deal with these issues. That is a huge surprise to me given the huge investment in building and running models. Perhaps what we have here is “positive results bias” as documented recently in Nature or in the New York Times. You get a result that agrees with data and you publish, ignoring the fact that with different parameters and numerical details you get a result that disagrees with data.

    • Paul Matthews

      Yes, I was amazed to see Paul Williams repeating the nonsense about climate prediction being a boundary value problem. This is particularly odd coming from someone who has written a paper on time-stepping methods!

    • Paul Williams


      Thanks for your comments. I think the important point is that the influence and importance of the initial conditions wanes as the timescale increases. Accurate knowledge of today’s atmospheric frontal systems is critical for correctly predicting tomorrow’s weather, but unimportant for predicting next century’s climate. Of course, climate models still need to be initialised with something, it’s just that whether or not you get the weather systems in the right place doesn’t particularly matter. The initialisation of the ocean matters much more, and is particularly important for decadal climate predictions.


      • David Young

        Paul, I am a great admirer of your work and hope you get to do more of it in the future (funders pay attention). My question is whether your statement that the initial conditions have less and less influence on the climate as time goes by is an empirical statement about the models or whether there is other evidence for it. It seems plausible (at least we hope the climate is relatively stable over time) but I’m not sure historical evidence or experience in fluid dynamics supports it. In fluid dynamics there are often multiple essentially steady state solutions and which one you end up at is critically dependent on initial conditions. For a simple wing, there are often attached flow solutions and massively separated solutions at the same “forcing.” We don’t know if there are only two or tens or even hundreds. In time dependent calculations, there might be multiple attractors lurking out there and possibly some stable points and stable orbits too. Or the attractor might be multi-modal. My suspicion is that there are lots of these “features” of the climate system that might depend on chaotic details such as the details of convection in the tropics, but I’m not sure of course.

        • Paul Williams

          David – thanks for your comments. There is much we can learn from fluid dynamics. I suppose evidence comes, for example, from the Lorenz butterfly attractor. The occupation statistics of the two “wings” (e.g., residence times, transition probabilities) do not depend upon the initial conditions, even though individual trajectories certainly do. If there were multiple steady states (d/dt = 0 for each variable everywhere), then the selected state might well depend upon the initial conditions, but the climate attractor and Lorenz attractor do not have steady states in this sense (only in the statistical sense).
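As a rough numerical sketch of this point (my own toy calculation with arbitrary run lengths, not a result from any paper): integrate Lorenz-63 from two starting points differing by one part in a million. The trajectories separate completely, yet the occupation fraction of each “wing” comes out nearly the same.

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One 4th-order Runge-Kutta step of the Lorenz-63 system."""
    def f(v):
        x, y, z = v
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(s); k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def wing_stats(x0a, x0b, nsteps=50_000, spinup=1000):
    """Run two trajectories side by side; return each one's occupation
    fraction of the x>0 wing, plus the largest separation reached."""
    sa, sb = np.array(x0a, float), np.array(x0b, float)
    hits_a = hits_b = 0
    max_sep = 0.0
    for n in range(nsteps):
        sa, sb = lorenz_step(sa), lorenz_step(sb)
        max_sep = max(max_sep, float(np.linalg.norm(sa - sb)))
        if n >= spinup:
            hits_a += sa[0] > 0.0
            hits_b += sb[0] > 0.0
    m = nsteps - spinup
    return hits_a / m, hits_b / m, max_sep

# identical except for a one-part-in-a-million perturbation in z
fa, fb, sep = wing_stats([1.0, 1.0, 1.0], [1.0, 1.0, 1.0 + 1e-6])
```

Here `sep` grows to the size of the attractor (the trajectories become unrelated), while `fa` and `fb` agree to within sampling noise: trajectory prediction fails while the statistical prediction survives.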


          • David Young

            Paul, When I mentioned the two “essentially” steady states for a wing at the same forcing, I chose my words carefully. There is of course in turbulent flow never a steady state for which d/dt = 0. However, these states are two things that one might characterize as local attractors and which one you get sucked into depends on initial conditions. However, we don’t know if there are only two of these things or maybe more. Computational evidence using Navier Stokes with turbulence modeling suggests there may be more. These things seem to me to be analogous to interglacial states and ice age states.

            To NJ,

            I agree that the kind of sensitivity study you discuss and which I believe Tamsin is trying to do is a very good idea. My one concern with it is that if the model is too dissipative, then the sensitivity to perturbations will likely be too small.

            I’m not quite sure I agree with your characterization of the system. The attractor may be quite complex and have both long time scale and short time scale approximately periodic behaviour. There is an excellent review article on this just a few months ago in SIAM Review. The ice ages and interglacials seem to be an example of this. But it’s very complex, and one question might be: can climate models simulate these cycles? I’ve asked this question many times and haven’t gotten a definite answer. The problem may be that the ice ages were almost certainly caused by a change in the distribution of the forcing and not much of a change in total forcing. This does tell me that details of dynamics do matter, and I have seen some pretty alarming statements about the fidelity of climate models to these details, such as convection. If anyone can enlighten me on this, that would be great. As I understand it vertical resolution in the models is very coarse and I guess convection must be essentially a “subgrid” model.

            Using simpler models to try to simulate things like the ice ages might tell us a lot about the interactions and possibly even the nature of the attractor.

            In the mean time, I do agree somewhat with the implication of Tamsin’s post here that perhaps we should reexamine the level of investment in the GCM models and try to do some more fundamental work.

            I do understand the reticence of Tamsin and Paul Williams to express controversial opinions in this blog forum. Things can be taken out of context and careers are always at stake in a field like climate science where the danger of being smeared at Real Climate or at some skeptical blogs is always real. The difference is that being smeared at RC can probably end a career or at least stunt it, particularly for young people. But some of us do appreciate and like directness and not shying away from controversy with regard to the science and do appreciate this blog and the effort it takes to do it. Please, keep up the good work.

      • Sashka

        I agree with David Young’s comments.

        In addition, if the initial condition is quickly forgotten by the atmosphere why would they do ensemble simulations?

        • billc

          sashka – are you asking assuming oceans are already taken care of per Paul’s last sentence?

          • Sashka

            billc – I’m sure the initial state of the ocean can make a difference. I’m not sure whether, for a given state of the ocean, the initial state of the atmosphere is irrelevant.

        • Paul Williams

          I don’t know who “they” are, but long-term ensemble climate simulations are typically multi-model ensembles or multi-parameter ensembles, not initial-condition ensembles.

          • billc


            I think that is worth questioning. I read Isaac Held’s blog and have played with some of the CMIP3-era GFDL CM2.1 results online. They seem to have run many ensembles with different initial conditions but the same parameters for that model. I thought it had something to do with initializing the oceans since it’s an AOGCM.

          • Sashka

            This is what Mark & Patrick are talking about, in the quote that I repeat:

            Imagine being summoned back in the year 2020, to re-assess your uncertainties in the light of eight years of climate science progress. Would you be saying to yourself, “Yes, what I really need is an ad hoc ensemble of about 30 high-resolution simulator runs, slightly higher than today’s resolution.” Let’s hope so, because right now, that’s what you are going to get.

            But we think you’d be saying, “What I need is a designed ensemble, constructed to explore the range of possible climate outcomes, through systematically varying those features of the climate simulator that are currently ill-constrained, such as the simulator parameters, and by trying out alternative modules with qualitatively different characteristics.”

  36. David Young

    One other thing. Most subgrid models are over dissipative, in exactly the same sense as the leapfrog filter analyzed by Paul. What that means is that disturbances will be damped with time and the attractor will appear more “attractive” than it in fact is.

    And these things matter critically. Feedbacks are a function of the dynamics and the subgrid models, and the subgrid action and its chaotic details do matter, for clouds for example. Just look at Isaac Held’s post on local modeling of convection. Do current subgrid models accurately predict this sensitivity without excessive dissipation? I am skeptical.

    • Tamsin Edwards

      David, thanks for your thoughtful posts. Our immediate response to questions about predicting climate vs weather is usually to quote Lorenz on problems of the first and second kind*, because the majority of people out there aren’t familiar with the differences between questions of trajectories and attractors.

      However, as you rightly point out there are important questions about the extent to which climate has attractors and the degree to which a climate model can represent them. I’ve invited a couple of other colleagues to this discussion in case they’d like to contribute.

      * For those not familiar with this, I’ll recap all the terminology together. Edward Lorenz distinguished between:
      (a) problems of the first kind, where we try to predict the exact path (trajectory) of a thing: e.g. weather, where we care about the chronological order of atmospheric states (“what happens tomorrow, then the next day,…”).

      (b) problems of the second kind, where we try to predict the statistics of a thing: e.g. climate, where we care about whether the mean, mode, max (etc) of many atmospheric states is changing through time.

      For the first kind, it’s very important to get the starting point (the ‘initial conditions’, such as today’s weather) as accurate as possible, because small errors lead to a big change in the predictions (i.e. chaos). For the second kind, it’s very important to get the driving forces (the ‘boundary conditions’, such as CO2) right, because these control where the atmosphere is ‘attracted’ to, such as warm or cool temperatures. We don’t know the future driving forces of the climate so instead we make predictions for several “possible futures” – e.g. with different concentrations of CO2.
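
      Lorenz’s distinction can be made concrete in a few lines of code. The sketch below is my own illustration, not anything taken from a climate model: plain forward Euler on the Lorenz-63 system, with made-up step sizes. It shows both kinds of problem at once: two trajectories from almost identical starting points diverge completely, yet their long-run statistics barely differ, because both sample the same attractor.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system (crude but adequate here)."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return np.array([x + dt * dxdt, y + dt * dydt, z + dt * dzdt])

def trajectory(start, n_steps=10000):
    """Integrate from `start` and return the whole path."""
    states = [np.array(start, dtype=float)]
    for _ in range(n_steps):
        states.append(lorenz_step(states[-1]))
    return np.array(states)

# Problem of the first kind: two almost identical initial conditions...
a = trajectory([1.0, 1.0, 1.0])
b = trajectory([1.0, 1.0, 1.0 + 1e-6])

# ...soon give completely different trajectories (sensitivity to initial conditions):
print(np.abs(a[-1] - b[-1]).max())

# Problem of the second kind: the long-run statistics are nevertheless very close,
# because both trajectories sample the same attractor:
print(a[:, 2].mean(), b[:, 2].mean())
```

      The final separation is typically of the order of the attractor size despite the tiny initial difference, while the two time-means of z stay close: hopeless as a weather forecast, still useful as a climate statistic.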

      • Roger Longstaff


        I do not think that the Lorenz Attractor analogy works in this case. The LA shows chaotic outcomes resulting from DETERMINISTIC equations – which is clearly not the case for the climate system.

          • Roger Longstaff

            Sashka – it is basic mathematics – google “Lorenz attractor”, then google “deterministic equation”, then verify that climate modelling includes non-deterministic equations. I am sorry, but I do not have the time to teach you mathematics on the internet.

      • N. J. McCullen

        Hi Tamsin et al.

        For what it’s worth here are my professional thoughts.

        1. The System.

        Firstly we can reasonably assume that the real climate is deterministic, since we live in a causal universe. Also I hope that it’s safe to say that there are some dominant mechanisms governing the system, such as solar forcing, thermal absorption and so on. Anything perturbing the system without strong feedbacks can be put in as a stochastic element (random kicks, if you like), giving a non-deterministic (noisy) element.

        2. Qualitative Dynamics

        If these are represented in the correct form in a model, then it should contain at least qualitatively similar dynamics to the real system. This means that measurements of a variable (such as temperature T) should, at appropriate values of our parameters (e.g. solar energy and gas concentrations c), show the same functional dependencies on average.

        An attractor in a high dimensional system is a little different from that of low dimensional chaos, but it still attracts nonetheless. What this means is that starting points outside the range of this region (T too high or low, for example) do not stay that way or simply wander off, but are returned (attracted) to the region of “stability”. Noise may kick it around, but in practice this can just make otherwise temporary (transient) dynamics be visited more often.

        The point is that these should exist in both the real system and also the models. To find them you need to start from as many and varied initial conditions (IC) as possible, and look for different outcomes. (In technical speak, look for different solutions with different basins of attraction). This includes IC that are not reasonable, in order to find modes of the model/system that are not previously observed.

        The main thing is to look at the model’s behaviour as we vary the parameter(s), and look at how the response changes (the solutions bifurcate). This is the whole point of the modelling here, i.e. will the observable temperatures rise as humans increase the parameter of atmospheric absorption?
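
        As a toy illustration of this “vary the parameter, watch the solutions bifurcate” idea (my own sketch, using the logistic map rather than anything climate-related):

```python
import numpy as np

def logistic_orbit(r, x0, n_transient=1000, n_keep=64):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    and return the settled orbit."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n_keep):
        x = r * x * (1 - x)
        orbit.append(x)
    return np.array(orbit)

def attractor_size(r, x0):
    """Number of distinct values visited after transients
    (1 = fixed point, 2 = period-2 cycle, many = chaos)."""
    return len(np.unique(np.round(logistic_orbit(r, x0), 6)))

# Varying the control parameter r changes the qualitative behaviour
# (the solutions bifurcate):
print(attractor_size(2.8, 0.2))   # a stable fixed point
print(attractor_size(3.2, 0.2))   # a period-2 cycle
print(attractor_size(3.9, 0.2))   # chaos: many distinct values

# Here varied initial conditions find the same attractor (a single basin);
# in a system with several basins, different ICs would reveal different ones.
same = np.allclose(sorted(set(np.round(logistic_orbit(3.2, 0.2), 6))),
                   sorted(set(np.round(logistic_orbit(3.2, 0.7), 6))))
print(same)
```

        The climate system is vastly higher-dimensional than this, of course; the point is only the method: sweep the parameter, sample many initial conditions, and characterise how the settled behaviour changes.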

        3. Quantification and Validation

        To get more accurate figures and validate the model against reality, one needs to compare with data where an obvious external input (e.g. CO_2 from a past volcanic eruption) was the clear driver, rather than a feedback effect, in order to test the model’s behaviour against the resulting outcome.

        Which is one of the things being done by Tamsin and colleagues, I believe.

        • Roger Longstaff

          Thank you for this clear explanation.

          The assumption that a “climate attractor” exists, based upon the assumption that the real climate is qualitatively deterministic, seems reasonable. Is it therefore correct that, given these assumptions, we are searching for a negative feedback mechanism in the climate system?

          • N. J. McCullen

            As a general principle I think the search should be for a (set of) model(s) that include(s) all physically relevant factors and produce(s) the dependencies in a way to allow us to answer the open questions. If reality is observed to be doing something different from the results, then we have to ask what may be missing from the underlying model and look at the effect of including it.

            I would expect that the current models contain the main known feedbacks and that people have tried including others. What matters is that, even with negative feedbacks creating a “stable attracting region”, varying some parameter can result in shifts of the average behaviour and even jumps to new states (bifurcations), as in many high dimensional mathematical models.

          • N. J. McCullen


            WARNING: What follows contains speculation and is provided only as food for thought!

            The modelling process could work in two ways: one being to try to construct a model that does what we intuitively expect (e.g. from back-of-envelope calculations), then ask if this seems a reasonable representation of real physical processes; the other by building in the processes from the bottom up and seeing if what “emerges” looks like reality (I think this is the current approach).

            An abstraction of the first case (which I don’t believe is the current approach) is that we could produce a purely mathematical model with a series of nonlinear terms, some noise, and respective scaling parameters, “tune it” to do what reality does in all known “training” examples, then interpret the dominant parameters by looking for natural analogues, then look at the particular cases we want to address.

          • Roger Longstaff

            Thanks again for your further thoughts, and I agree with your speculation.

            My problem with current modelling (on which so much capital is being staked) is that it is searching for a positive (anthropogenic) driver (ppCO2) in a system that we assume to be governed by a negative feedback mechanism that has not been identified (there is no known saddle point for the climate attractor). Mathematically, therefore, the models are searching for a signal within a superposition of two unknown processes. Furthermore, the current practices of filtering (which I equate to low-pass filtering) and the pause/reset/restart techniques that seem to be employed reduce the numerical fidelity of the data fields as a function of time – thus rendering the investigation mathematically impossible, in my opinion.

            Very few share my opinion, but would you agree with the rest?

          • N. J. McCullen

            So I take it you’re assuming there’s no strong interaction between the variables then, as you talk of superposition?

            Would you even speak of the climate as a system in that case?

        • Roger Longstaff

          You are quite right – superposition was the wrong description. How about “..searching for a signal within a system containing at least two unknown processes.” ?

      • Roger Longstaff

        Paul, can you define “climate attractor”, and what assumptions do you make about its nature?

  37. Roger Longstaff

    Firstly, I agree with the comments of David Young (above). I think he is making essentially the same points that I have raised, but from a more mechanical rather than theoretical standpoint. I will make some further comments based on the dialogue above (with Alex and Paul).

    The climate is a complex, multivariate system, with a large number of variables with non-linear, chaotic and sometimes unknown dependencies. It is inevitable that any numerical, time-step integration climate prediction model will violate either boundary conditions or physical laws in short order unless either filtering or re-setting of the data fields is implemented – can we all please agree with that statement?

    Filtering or re-setting of data fields leads to a loss of accuracy (deviation from reality) which is cumulative. While techniques may be implemented to show the “doctrine of the attractor”, all these can do is verify an a priori assumption – there is no real information being generated. Also, models are “tuned” to reproduce their training data, which is mistakenly cited as validation. This is circular reasoning, as models can be tuned to give “the right answer”, but for all of the wrong reasons. As others have pointed out – it would be a grossly incompetent programmer who could not reproduce the training data.

    Finally, what we are discussing is the predictive capability of climate models. We have seen that seasonal models lose accuracy, or numerical fidelity, after a few weeks of integration at best. The same models did not predict the last decade of flatlining temperatures. For all of the reasons given it is still my opinion that numerical climate models have no predictive capability AT ALL over decadal timescales.

    • Sashka

      It is inevitable that any numerical, time-step integration climate prediction model will violate either boundary conditions or physical laws in short order

      Could you state specifically what boundary conditions or physical laws would be violated and why/how?

    • David Young

      I don’t think this is a problem. Andy Lacis says mass, momentum, energy, and stuff like angular momentum are discretely conserved. Usually this is done without “filtering”. Chorin’s projection method for incompressible flows has a little of this flavor, but it’s not by itself a problem.

        • Sashka

          An interesting case of selective quoting. What it actually says is:

          might be accommodated by making periodic corrections

          I have no information that it actually happens in every model. Do you?

          • Roger Longstaff


            If you follow the references in this thread (and I am not going to go through them all again – this is not my “day job”), there are many references to the “resetting” and “restarting” of models. It is blindingly obvious that this invalidates all information generated up to that point in the integration.

        • David Young

          I just read your earlier post and you may be taking things out of context. There are lots of problems with models but the filtering is not one of them. The stability thing may have more substance. Need to think about it.

  38. Roger Longstaff

    Sashka, please read my post of June 14th @ 10.55am. This contains references to filtering and “resetting”, as explained by the people that use these techniques.

    • Sashka

      Roger, I just read that comment of yours. What is it supposed to explain to me? What question does that answer?

  39. Roger Longstaff


    Physical laws violated – conservation of mass (see reference)

    Boundary conditions (eg. sensible temperature, pressure, etc.) violated – the inevitable consequence of the numerical, time step integration modelling of a complex, multivariate system, with a large number of variables with non-linear, chaotic and sometimes unknown dependencies.

    • Sashka

      Thank you Roger. So, pressure and sensible temperature are boundary conditions. You should have said so right away. Chaotic dependencies are great, too.

      Have a nice rest of your life.

  40. Roger Longstaff

    “Have a nice rest of your life”

    Sounds like you’re annoyed with me. Shall we try to keep this civil?

  41. Roger Longstaff

    Tamsin, I find the layout of your site a bit confusing, with posts and replies getting lost in the thread, so if you don’t mind I would like to repeat a question to Paul Williams, and to add a new one:

    Paul, I have looked at your paper (referenced by Tamsin above) on the RAW filter and its differences from the RA filter. You demonstrate the greatly increased accuracy of the new filter in a number of tests. I understand that most current modelling results are derived using RA filters – does your work therefore not invalidate them?

    Also, you say that research is underway to understand the influence of numerical schemes on the “climate attractor”. Please could you define this term, and is it a purely mathematical construction? Can such an attractor even exist in a complex system defined by non-deterministic equations (noting that the Lorenz attractor results from deterministic equations)?

    • Paul Williams

      I’m not sure it is useful to talk about validating or invalidating models, or about whether models are “right” or “wrong”. What matters is whether they are useful for a specific, well-defined purpose. The evidence is that climate models are useful for climate prediction, with or without the RA or RAW filters. By attractor I just mean the subset of phase space to which all trajectories eventually converge in forced, dissipative, nonlinear systems.

      • Roger Longstaff

        Thank you Paul.

        From an exchange above (with Dr. McCullen) I had come to the conclusion: “…The assumption that a “climate attractor” exists, based upon the assumption that the real climate is qualitatively deterministic, seems reasonable.” However, your description “…by attractor I just mean the subset of phase space to which all trajectories eventually converge in forced, dissipative, nonlinear systems” raises a further question – where does the dissipation come from, and how do we know it is forced? Also, would I be correct in assuming that RA and RAW filters are dissipative, and therefore equivalent to low pass filters?
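
        For readers wondering what the RA and RAW filters actually do, here is a minimal sketch of my own, on a toy oscillator rather than a climate model; the parameter values are made up for illustration. The RAW filter of Williams (2009) splits the classic Robert–Asselin displacement between two time levels, controlled by a parameter alpha: alpha = 1 recovers the RA filter, while alpha near 0.53 largely removes the artificial damping of the physical oscillation.

```python
import numpy as np

def leapfrog_osc(nu=0.1, alpha=1.0, omega=1.0, dt=0.2, n_steps=1000):
    """Leapfrog integration of dz/dt = i*omega*z, whose exact solution keeps
    |z| = 1 forever, with a Robert-Asselin-type time filter applied each step.

    alpha = 1.0 gives the classic RA filter; alpha ~ 0.53 gives the RAW filter,
    which hands part of the filter displacement to the new time level."""
    z_prev = 1.0 + 0.0j                   # z at step n-1
    z_curr = np.exp(1j * omega * dt)      # z at step n, started from the exact value
    for _ in range(n_steps):
        z_next = z_prev + 2j * omega * dt * z_curr       # raw leapfrog step
        d = 0.5 * nu * (z_prev - 2.0 * z_curr + z_next)  # filter displacement
        z_prev = z_curr + alpha * d              # RA filtering of the middle level
        z_curr = z_next + (alpha - 1.0) * d      # extra RAW correction (zero if alpha = 1)
    return abs(z_curr)

print(leapfrog_osc(alpha=1.0))    # RA: amplitude artificially damped well below 1
print(leapfrog_osc(alpha=0.53))   # RAW: amplitude stays close to 1
```

        In this toy, both filters damp the rapidly sign-flipping computational mode of the leapfrog scheme far harder than the physical oscillation, which is the sense in which they behave like low-pass filters in time; the RAW variant simply does so with far less collateral damping of the signal you want to keep.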

  42. Alexander Harvey

    I am getting to be a bit puzzled by some of the conflicting viewpoints and I shall try to explain why. Much seems to depend on what we mean by climate, weather, determinism, sealing wax and whether pigs have wings. The following may be tedious but hopefully not wholly without merit.

    I will define a constant climate as a system determined solely by constant laws and constant boundary conditions that produces weather trajectories distinguished by initial conditions, with all trajectories sharing the same well defined statistics.

    I think my definition is a restricted example of what may be meant by the climate being a boundary condition problem whereas weather is an initial value problem.

    I have changed things around a bit, in that it is the climate that determines the weather statistics rather than the weather determining the climate statistics. The statistics are inherent in the climate and a weather trajectory is an instantiation of climate.

    I will emphasise that the weather is defined as being deterministic and that initial conditions are never forgotten. The system never becomes oblivious to the initial conditions.

    I will define tipping points with respect to their absence from such a system. The existence of tipping points is determined by a failure to produce a single set of well defined statistics. This I think reflects what people may mean by an irreversible change leading to a new, permanent and observationally distinct climate regime. I have defined the system as being free of tipping points.

    Now I will cheat a little by perturbing the boundary conditions for a while before returning them to their original values and holding them constant for the rest of time.

    This is the point of my interest. If that perturbation causes the climate to shift to a new, permanent and observationally distinct climate regime we would have a contradiction. The climate system would be the same as before but with a different climate regime. The laws are the same, the boundary conditions are the same, but the weather trajectory belongs to a different regime. I think that this implies that the climate regime is dependent on the system state at the moment that the boundary conditions returned to their original values. Which is to say that the weather regime, and hence the weather statistics, are dependent on initial conditions.

    The weather statistics were defined as inherent in the climate system (laws and boundary conditions) and not dependent on initial conditions, hence the contradiction.

    In this highly simplified scheme, tipping points (as defined) cannot be reached by an excursion in the boundary conditions. In order for such tipping points to be reachable and hence for different climate regimes with distinct weather statistics to exist we would have to define climate to be dependent on initial conditions as well as laws and boundary conditions.

    I defined tipping points in terms of their irreversibility, and I defined that as resulting in a permanent change even if the boundary conditions are restored. I think this is the cause of the problem. If climate is a boundary condition problem, irreversible changes of that sort will not occur.

    I will give an example in terms of the loss of the Greenland ice sheet representing a tipping point. Should we lose that ice sheet but restore the boundary conditions, only to find that the ice sheet will never be restored and hence that the climate is permanently changed (unless or until the boundary conditions make some other excursion), this implies that the climate, viewed as a boundary condition problem, has two distinct stable states, with and without a Greenland ice sheet. That contradicts the assertion that climate is solely a boundary condition problem.

    Tipping points as defined by me and climate as defined by me as a boundary condition problem are incompatible. I could have one but not the other. The problem may be due to the notion of irreversible change. If it were just a matter of waiting for a very, very long time the contradiction would disappear.

    Alternatively the problem may be due to the notion of constant boundary conditions.

    I will speculate that the climate with constant boundary conditions is in fact highly dependent on initial conditions, e.g. that there exist a myriad of differing climate regimes each with their distinct weather statistics, each corresponding to different partitions of the set of possible initial conditions. If the boundary conditions are perturbed in a similar fashion to that above but repeatedly, it may be possible to move the weather trajectory between regimes, effecting a reconnection of the partition. Perturbation might prevent the persistence of separate climate regimes, or at least reunite those that can be reached into the appearance of a seamless region hiding the underlying transitions.

    There is another possibility, in that irreversibility may be a practical consideration. That restoring the boundary conditions may become impractical. That definition would restore the compatibility of tipping points and a climate dependent solely on laws and boundary conditions. The tipping point would be in the boundary conditions, not in the weather system.

    I hope I have illustrated that the assertion that climate is a boundary condition problem, although plausibly correct, removes the possibility of a whole class of tipping points from the system if one chooses to define the system as I have done.

    Some of my decisions may have been idiosyncratic but I have not found there to be a well established and coherent set of definitions that meets my purpose. Tipping points seem to mean many things to many different people and I find that problematic. Despite all the interest, I have not found a definition of climate that I find workable, so I made one up. I think that defining climate as an inherent system property, an abstraction that has an existence before any weather has occurred, makes sense as it allows one to argue as to whether climate, as so defined, even exists. I.e. if weather trajectories were in fact partitioned into disparate climate regimes with distinct statistics dependent on initial conditions, the notion of there being a single climate determined by just laws and boundary conditions becomes untenable, in my view at least.

    I suspect, or at least hope that the system has an inherent climate dependent on boundary conditions at least to the degree that we will not be able to determine by observation that this is not the case.

    Mostly I have tried to reconcile some of the comments made by Paul Williams and David Young, and also to attempt to put together a framework which treats simulations and real weather as instances of random functions (as seems to be required for exchangeability) yet ones born of deterministic processes. In order to achieve this I have tried to unite weather and climate in a non-standard way.
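
    Alexander’s thought experiment can be played out numerically in a toy model. The sketch below is my own: a one-variable double-well system standing in for “climate”, with a forcing F as the boundary condition. Perturbing the boundary condition and then restoring it exactly leaves the system in a different regime, which is precisely the state-dependence the comment describes.

```python
import numpy as np

def integrate(x0, forcing, dt=0.01):
    """Forward-Euler integration of dx/dt = x - x**3 + F(t): a one-variable
    system with two stable regimes (x near -1 and x near +1) when F = 0."""
    x = x0
    for f in forcing:
        x = x + dt * (x - x**3 + f)
    return x

steps = 5000
quiet = np.zeros(steps)                        # boundary condition held at F = 0
pulse = np.concatenate([np.full(steps, 1.5),   # temporary excursion in F...
                        np.zeros(steps)])      # ...then restored exactly to F = 0

# Constant boundary conditions: the state stays in the regime it started in.
print(integrate(-1.0, quiet))

# Perturb the boundary condition for a while, then restore it: the laws and the
# final boundary conditions are identical, but the system now sits in the other
# regime. The outcome depends on the state at the moment F returned to zero.
print(integrate(-1.0, pulse))
```

    The first run stays near x = -1 and the second settles near x = +1, so the long-run statistics are not fixed by laws and boundary conditions alone: this is the contradiction with a purely boundary-condition view of climate that the comment identifies, in its simplest possible form.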


  43. Alexander Harvey


    As I understand it the AR5 RCPs contain time series for both emissions and concentrations.

    Some of the simulators are capable of being driven directly by emissions and model the resultant concentrations. If some of the simulators lack such features, will we end up with an apples-and-oranges situation where at a specific future date (say 2100) it may not be totally clear whether differences in the models are due to different responses to concentrations or to different concentrations?

    As far as I can judge, unless the different simulators all produce runs that correspond to the same concentration time series it may be difficult to determine sensitivities in the way this was done for AR4.

    I think it is clear that moving from a concentration scenario to an emissions scenario introduces much scope for additional variance between simulators with an apparent increase in the uncertainty in long term forecasts.

    If we are going to get a report with increased uncertainty, is the IPCC doing anything to forewarn the public and the media? There is a possibility for a leap forward in sophistication to appear to be a huge leap backwards if the first thing to hit the streets are graphs that are all over the shop compared to the AR4 versions.

    Any thoughts?


  44. David Young

    NJ, I actually like the idea of using simpler models with nonlinear feedbacks that are then tuned based on data. At least in some climate regimes, one might hope to gain some understanding from the approach even if quantitative results are questionable.

  45. Alexander Harvey

    Hi N.J & David,

    The glacial cycle is very long and that more or less excludes the use of GCMs for following its long term dynamics. If they run at a rate of one day of computation to simulate one thousand days, modelling a single ~100,000 year cycle would not be practical.

    Fortunately the long term dynamics may be simple, in the sense that the majority of the variance can be modelled by a simple system with only a handful of variables and equations, a forcing pattern and some noise for luck. Amongst those studying this is Michel Crucifix who visits here from time to time.

    On these longer time scales and at lower temperatures it may well be that a simpler system comes into view. A system which expresses low dimensional chaos organised on the large scale.

    I am pessimistic as to whether the large scale features of the glacial cycle can tell us much about centennial-scale climate. It seems to be a different ball game, dominated by the properties of ice and the large scale reorganisation of water in general.

    There are similar studies being made for the Holocene looking for a simplification. Very simple models, e.g. a cusp catastrophe, may express themselves by two concurrent changes: increasing variance and slowing down when the cusp is approached. A move towards an increase in lower frequency signals. I believe that the answer to the question of whether we have skirted a cusp during the Holocene is a definite maybe, based on searching for a concurrence of such changes.

    That basins can occur in the climate attractor seems to me to be a certainty given the history of glaciation. It is not at all clear to me that this implies that a more narrowly viewed climate system (atmosphere and ocean dynamics) exhibits any discernible basins. That is to express again my feeling that the simple but dramatic large scale, long period chaos of the glacial cycle is primarily due to a glacial attractor coupled to the Milankovitch cycles. My view might be wrong or simply trivially true. Trivial in the sense that I have redefined the system by making the feedback effects of the glaciation a forcing on a narrower view of the climate system.

    My way of understanding chaos divides it into two parts: a divergence highly sensitive to initial conditions, and the maintenance of long term structure, a hidden organisation. I believe that the second part relates to the Fermi–Pasta–Ulam problem and to whether sufficiently complex systems are likely to maintain the long term organisation that simpler systems with only a few, or a few hundreds or thousands, of degrees of freedom sometimes do.

    My view of the climate system may contain chaotic simplifications with periods greater than one year. Candidates would be ENSO, oscillations in the major ocean basins, and the overturning circulation. An alternative view is that these oscillations are no more than the result of resonance or simple persistence driven by noise. Of the above I suspect ENSO has a chaotic component but I am dubious about the rest.

    I think that I may be searching for the answer to the question of whether an all singing and dancing view of the climate system is separable on temporal and perhaps spatial scales into simpler systems, and hence whether we can rely on a cascade of simulations without losing too much. For certain I doubt we can attempt to model the glacial cycles with GCMs, but they can be used to see how well they can simulate the more stable interludes of maximal and minimal glaciation, and I believe that is being done. Perhaps other simpler models can determine the pattern of glaciation and hence set the boundary conditions for the GCMs.
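
    The “increasing variance and slowing down” signature mentioned above can be sketched with a toy stochastic model. This is my own illustration, not a climate simulation: an Ornstein–Uhlenbeck process whose restoring rate weakens, standing in for the approach to a cusp.

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_series(lam, n=20000, dt=0.1, sigma=1.0):
    """Discretised Ornstein-Uhlenbeck process dx = -lam * x dt + sigma dW.
    A weakening restoring rate lam mimics the loss of stability on the
    approach to a cusp / bifurcation."""
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        x[t] = x[t - 1] - lam * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

def lag1_autocorr(x):
    """Lag-1 autocorrelation: 'slowing down' shows up as values approaching 1."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# As the restoring rate weakens, both early-warning signals rise:
for lam in (1.0, 0.5, 0.1):
    x = ou_series(lam)
    print(lam, round(x.var(), 2), round(lag1_autocorr(x), 3))
```

    Both the variance and the lag-1 autocorrelation grow as lam shrinks, which is the pair of concurrent changes the comment describes; detecting the same concurrence in Holocene proxy records is, of course, far harder than in this clean toy.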


  46. Tamsin Edwards

    Apologies for not answering your questions everyone – things have got a bit hectic. I appreciate the interesting, though technically off-topic, conversations you are having. Hope to be back with specific answers / comments soon.

  47. Roger Longstaff


    The thread now seems to have died, but thanks for coming over to Bishop Hill and asking for participation. Just for the record, nothing here has changed my opinion that multi-decadal numerical simulations cannot produce useful information on the future climate.

  48. Roger A. Pielke Sr.

    I have posted on the excellent Lovejoy and Schertzer 2012 paper here.

    I also suggest readers look at the post

    The bottom line is that multi-decadal numerical climate simulations cannot yet produce skillful information on the future climate, based on evaluations of their predictions run in hindcast mode.

  49. Dan Hughes

    Somewhere way up-thread somebody said:

    The magnitude of temperature fluctuations simulated by these GCMs decrease with increasing time scale, when according to observations they should increase, at least over climate relevant time scales (30+ years).

    This response is one characteristic of the effects of the properties of numerical solution methods for which minimization of inherent intrinsic numerical dissipation is not addressed. Time-accurate numerical integration of wicked problems is hard. Time-accurate numerical integration over enormously long time periods is a wicked problem in itself.

    Somewhere else up-thread somebody mentioned comparisons of the spectra of measured data and calculated results. I have often speculated that this might be a useful exercise especially now that some temperature, and other, data are available at the same time scale that is used in the numerical integration methods: and smaller scales, actually. I have also speculated that the spectra for the calculated results should contain no power at the frequency that corresponds to the discrete step size.

    Speculation is about as far as I can go because I have zero experience and expertise in the area. So little expertise that this speculation might very likely be completely wrong. One problem that I encountered was finding GCM output at the time-step size frequency. I would gladly assist after my current gas-money-for-moto-road-trips day job ends.

    David Young, do you have information that viscous dissipation, the conversion of fluid motions into thermal energy, is correctly handled in the NASA GISS ModelE GCM? Do the discrete approximations plus solution method themselves conserve mass and energy to machine precision, or is some ad hoc post time-step-integration back-fitting done?

    My experience has been that for simulations of compressible fluid flows careful consideration of the Equation of State in the discrete domain is necessary to ensure maintaining conservation principles associated with the continuous equations. Failure to solve complete non-linear formulations and to provide for feedback into the other equations following that solution leads to lack of mass and energy conservation, for example. Additionally, small matters such as rigorous attention to the effects of the time-level on the terms that represent the driving potentials for mass, momentum, and energy exchange are critical.

    If the solutions of the discrete approximations are not iterated to machine precision, how can mass and energy conservation be at machine-precision level?
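
    Dan’s proposed spectral comparison can be mocked up in a few lines. This is my own sketch, with invented surrogates: red noise stands in for “observations”, and a simple smoothing filter stands in for numerical dissipation in a time-stepping scheme. The dissipative version loses power near the Nyquist (time-step) frequency while leaving low frequencies nearly untouched.

```python
import numpy as np

rng = np.random.default_rng(1)

# Surrogate 'observed' series: red noise with power at all resolved frequencies.
n = 4096
obs = np.empty(n)
obs[0] = 0.0
for t in range(1, n):
    obs[t] = 0.9 * obs[t - 1] + rng.standard_normal()

# Surrogate 'model' series: the same signal passed through a dissipative smoother,
# standing in for the numerical dissipation of a time-stepping scheme.
model = np.convolve(obs, [0.25, 0.5, 0.25], mode="same")

def spectrum(x):
    """One-sided periodogram via the FFT."""
    return np.abs(np.fft.rfft(x - x.mean())) ** 2 / len(x)

s_obs, s_mod = spectrum(obs), spectrum(model)
lo = slice(1, 50)       # low-frequency band
hi = slice(-200, None)  # band near the Nyquist (time-step) frequency

# Low frequencies are nearly untouched; power near the time-step scale is crushed:
print(s_mod[lo].sum() / s_obs[lo].sum())
print(s_mod[hi].sum() / s_obs[hi].sum())
```

    Comparing such band ratios between model output (saved at the time-step frequency, as Dan suggests) and high-frequency observations would be one concrete way to look for excess numerical dissipation.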

    • David Young

      Dan and Tamsin,

      I would take it as a given that all climate models are too dissipative unless there is careful analysis to prove otherwise, and I have seen none. Usually, this subject is met with deafening silence, even in fluid dynamics. Climate models must solve some form of the Navier-Stokes equations, and it is a well known fact that for compressible flow, artificial dissipation is needed to stabilize any numerical scheme for the Navier-Stokes equations. The problem is that they contain a hyperbolic component, i.e., a convection part. If you look in the literature you will see that entropy and enthalpy are convected along streamlines. Usually artificial dissipation shows up as “artificial” entropy, i.e., the flow is stabilized too much. It is a very difficult numerical analysis problem and there have been tens of thousands of man-years (sorry Tamsin) invested in trying to do this, not totally successfully.

      If we are talking about subgrid models, I find it implausible that they are not too dissipative. Eddy viscosity turbulence models generally dissipate vorticity way too fast. Vorticity is critical to weather and I assume to climate. And convection and clouds, etc. Very complex phenomena that, I’m sure, have eigenvalues near zero.

      By the way, I second Paul Matthews' statement below that even on long time scales I would expect the climate to be chaotic at a large scale. I could be wrong, but I would say that, at a minimum, much more research is needed to reach a conclusion.

    • David Young

      Dan, just reread your comment. I would say that enforcing discrete conservation of mass, momentum, and energy is usually fairly straightforward, using finite volume methods for example. I have less experience with time-accurate calculations, but I believe Andy Lacis when he claims discrete conservation. The problem is that you can be discretely conservative and still get a totally incorrect answer. Excessive dissipation is one of infinitely many ways to get the wrong answer.

  50. Paul Matthews

    The “The climate is not what you expect” paper by Lovejoy and Schertzer is now being discussed at Roger Pielke Sr's blog and WUWT. Pielke describes it as excellent and I'd agree. The conclusion says:
    “we have argued that the climate is not accurately viewed as the statistics of fundamentally fast weather dynamics that are constrained by quasi-fixed boundary conditions”. Yes yes yes yes yes.
    I've been saying for some time that the climate is likely to be chaotic on long time scales just as the weather is on short ones, and that trying to describe the climate by a single linear damped differential equation with a ‘forcing’ is nonsense.
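    The phenomenon being claimed here has a classic toy illustration, the Lorenz-63 system (sketched below in Python with standard parameter values; a three-variable toy says nothing directly about whether the real climate is chaotic on long time scales, which is exactly the open question, but it shows what sensitive dependence looks like):

    ```python
    import numpy as np

    # Lorenz-63 with the standard chaotic parameters, integrated with a simple
    # forward-Euler step (adequate for illustration at this small dt).
    def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-8, 0.0, 0.0])    # perturb by one part in 10^8
    for _ in range(5000):                 # integrate both to t = 25
        a, b = lorenz_step(a), lorenz_step(b)

    separation = np.linalg.norm(a - b)
    # separation has grown by many orders of magnitude from the initial 1e-8
    ```

    Two trajectories that start indistinguishably close end up on completely different parts of the attractor, while the attractor itself (the “climate” of the toy system) stays the same shape.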

    • Paul S

      I’m not sure there are any clear chaotic implications in the Lovejoy and Schertzer paper. Placed in the wider context of climate research, as far as I can see the paper is largely proposing new terminology for things that have already been widely discussed. They talk about macroweather versus climate, whereas numerous other papers have talked about fast-feedback climate sensitivity versus Earth system sensitivity, which incorporates carbon-cycle feedbacks, vegetation feedbacks, interactive atmospheric chemistry and ice sheet dynamics on top of the ‘fundamentally fast weather dynamics’.

      • BillC

        And here we have a shining example. I'm not trying to pick on Paul S. But it's not clear to me that we have any sense of the right models to use to determine chaotic behavior of the climate system over long time periods. It seems like the centennial predictions, as CPV labels them below, are dependent on the assumption of insignificant variability at time scales longer than 30–50 years. Which is fine if it is admitted. I think right now it is part of “irreducible uncertainty”, though I guess studies like Tamsin's here and others, by combining models with paleo, have a chance to reduce the uncertainty somewhat. Otherwise, these types of variability remain about as well represented in the predictions as volcanoes; actually worse, because there's no understanding of what represents an adequate “what-if” scenario. But back to Paul's post above: I don't think oceanic variability (even the known kind) is represented in either “fast feedback” or “Earth system” sensitivity; and the latter is certainly a loose collection of a lot of different kinds of dynamics.

  51. CPV

    So, part of the art of designing models (or simulations) is deciding what is of interest (“output”) and constructing a model that is parsimonious but captures features in sufficient detail to determine the output with a sufficient degree of accuracy. If what we are interested in is centennial probability distributions of air temperatures 6 ft off the ground then here are a few observations:

    –the mean of this distribution is almost certainly determined by energy balance

    –the energy balance is critically dependent on very detailed analysis of the dynamics of cloud formation, moisture, etc.

    –the heat capacity of the air whose temperature is being measured is a small fraction of the overall energy balance, the oceans playing a much larger role

    –as such, medium term (multi-decadal) fluctuations in air temperature caused by heat transfers from ocean to air and back may determine the width of the centennial distribution

    So, what is not clear is whether the same type of model should be used to determine the mean of the centennial distribution as should be used to determine its higher moments…
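    These observations can be wired into the simplest possible two-box energy-balance toy (every parameter value below is an assumption chosen for illustration, not taken from any model or from this thread): a small-heat-capacity atmosphere/mixed layer coupled to a large-heat-capacity deep ocean responds quickly at first, then creeps toward equilibrium over centuries, which is the timescale separation behind the multi-decadal ocean-air heat transfers described above.

    ```python
    # Two-box (mixed layer / deep ocean) energy-balance toy model.
    # All parameter values are illustrative assumptions.
    lam = 1.2        # W m^-2 K^-1, net feedback parameter
    C_mix = 8.0      # W yr m^-2 K^-1, mixed-layer heat capacity
    C_deep = 100.0   # W yr m^-2 K^-1, deep-ocean heat capacity
    gamma = 0.7      # W m^-2 K^-1, mixed/deep exchange coefficient
    F = 3.7          # W m^-2, step forcing (roughly 2xCO2)

    dt = 0.1                           # years
    T, Td = 0.0, 0.0                   # surface and deep temperature anomalies
    for _ in range(int(500 / dt)):     # integrate 500 years
        dT = (F - lam * T - gamma * (T - Td)) / C_mix
        dTd = gamma * (T - Td) / C_deep
        T, Td = T + dt * dT, Td + dt * dTd

    equilibrium = F / lam              # surface anomaly the system tends toward
    # After 500 years T has nearly reached F/lam; the deep ocean, still
    # lagging behind (Td < T), sets the slow approach timescale.
    ```

    The fast box equilibrates in years, but the residual drift toward F/λ takes centuries, so the mean of a centennial distribution and its higher moments are indeed governed by different parts of the system, as the comment above suggests.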

  52. J. Seifert

    “All climate models are wrong” and “climate models at their limit”……??
    Both statements have the same message: a fundamental INPUT ERROR in ALL
    models, because the OUTPUT cannot be better than the INPUT… this is
    the smelling dog…!
    You are misled by the bean counters, juggling with mini- and micro-effects, which
    these guys are unable to prove on a historic time scale…
    If I propose a climate effect, it has to be substantiated on a MULTI-millennium
    time scale… short-termism of 250 years (1750–2000) is too primitive to be science.
    A true paper (NOT wrong) will appear by year's end for discussion.
    Sit back for a few more months… things are in development…
    “IF there is a TRUTH, it will come out… just a matter of time”… therefore: ALL
    models are wrong except the TRUE ONE…

  53. David Shaw

    As a statistician I do much simulation work. I'm given prior estimates which I use to create samples; I strain the assumptions to see what the range of outcomes of potential studies might conceivably be. My work is limited to pretty well understood mechanisms that can be tested with detailed studies/observations, dare I say based on real data.
    I'm acutely aware that the Met Office cannot accurately predict from one day to the next, or so it often seems. Further, it's only the arrogance of man that believes his intellect can in any way simulate the climate and provide us with anything worthwhile as a prediction. In the hands of non-statisticians, current software allows anyone to ‘have a go’; thank heavens some statisticians bother to correct things before the world order changes on the back of a back-of-the-envelope analysis of tree rings or something.
    Not only do we have to face the indisputable fact that we basically know very little about the climate; it is my belief that the temperature record is so contaminated with trickery, and of course the inevitable changes in the capture of the longitudinal record, as to be fairly useless. When we don't even know about the validity of the response, how are we to support a model that predicts it? I firmly believe it is the responsibility of those modelling, simulating, whatever, to divulge the limitations of their approaches rather than hand over to someone who might take the conclusions wildly out of context by conveniently not understanding the limitations.
    I like your blog, I promise to read more.