A model of models

First, apologies for the delay after the overwhelmingly great start and my promises of new posts. I’ve been wanting to write for a week but had other urgent commitments (like teaching) I had to honour first. I hope to post once a week or fortnight, but it will be a bit variable depending on the day job and the interestingness of my activities and thoughts. I do have a lot of ideas lined up – I wouldn’t have started a blog if I didn’t – but at the moment it takes me time to set them down. I expect this to get faster.

Second, thanks for (mostly) sticking to the comments policy and making this a polite, friendly, interesting corner of the web.

Before I begin blogging about models, I ought to talk about what a model is. Aside from the occasional moment of confusion when describing one’s “modelling job” to friends and family, there are several things that might come to mind.

Model is a terribly over-burdened word. It can be an attractive clothes horse, a toy train, something reviewed by Top Gear, or a Platonic ideal. I will talk about three further meanings that relate to the sense of “something used to represent something else”: these are conceptual, statistical, and physical. They are distinct ideas, but in practice they overlap, which can add to the confusion.

A conceptual model is an idea, statement, or analogy that describes or explains something in the real world (or in someone’s imagination). It is ‘abstracted’, simpler than and separated from the thing it describes. In science, before you can do experiments and make predictions you must have an idea, a description, a concept of the thing you are studying. This conceptual model might include, for example, a tentative guess of the way one thing depends on another, which could then be explored with experiments.

A statistical model is a mathematical equation that describes the relationship between two or more things, ‘things’ being more commonly referred to as ‘variables’. A variable is a very broad term for something that varies (ahem), something interesting (or dull) that is studied and predicted by scientists or statisticians*: it could be the number of bees in a garden, the average rainfall in the UK, or the fraction of marine species caught in the North Atlantic that are sharks. A statistical model can often be represented in words as well as equations: for example, ‘inversely proportional’ means that as one variable increases a second variable decreases. The important thing about a statistical model is that it only describes and doesn’t explain.
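To make this concrete, here is a minimal illustration (invented just for this post, not taken from any particular study): a simple linear statistical model relating two hypothetical variables, and inverse proportionality written in symbols.

$$ y = a + bx + \varepsilon \qquad \text{(a linear statistical model, with scatter } \varepsilon\text{)} $$

$$ y = \frac{k}{x} \qquad \text{(inverse proportionality: as } x \text{ increases, } y \text{ decreases)} $$

Both equations describe how $y$ varies with $x$, but neither says anything about *why* it does.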

A physical model is a set of mathematical equations that explains the relationship between two or more variables. It also refers to a computer program that contains these equations, and to help (or increase) the confusion, these computer models are often called simulators. By explain I mean that it is an expression of a theory, a physical law, a chemical reaction, biological process, or cause-and-effect: an expression not only of knowledge but understanding about the way things behave. The understanding might not be perfect – it might be a partial or simplified physical model – but the point is it attempts to describe the mechanisms, the internal cogs and wheels, rather than simply the outward behaviour.
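As a textbook illustration (and not one of the models I will describe in the next post), a very simple physical model of a planet's temperature balances the energy absorbed from the Sun against the energy radiated to space, using conservation of energy and the Stefan–Boltzmann law:

$$ C \frac{dT}{dt} = \frac{S_0}{4}(1 - \alpha) - \epsilon \sigma T^4 $$

Here $T$ is temperature, $C$ a heat capacity, $S_0$ the incoming solar radiation, $\alpha$ the planet's reflectivity, and $\epsilon \sigma T^4$ the emitted infrared. Crude as it is, it attempts to represent the mechanisms, not just the outward behaviour.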

Physical models are the main focus of this blog, but there are many interesting links between the three: physical models often incorporate statistical models to fill in the gaps where our understanding is poor; a statistical model may describe another model (conceptual or physical). There are myriad different types of physical model, and even more uses for them. In the next post, I will talk about a few physical models I use in my research.

A general note about my plans. I think it’s important to first set out some basic terms and concepts, particularly for those who are not familiar with modelling, so please be patient if you are an expert. Before long I will also post more technical pieces, which will be labelled as such so as not to scare off non-experts. I’ll also start blogging about the day-to-day, such as interesting conference talks and random (mostly science-related) thoughts, rather than only pre-planned topics.

 

* The opposite of a variable is…a constant. These can be interesting too.

121 comments

  1. Barry Woods

    Sometimes some modellers forget what models are:
    I know the IPCC call them ‘projections’ but they are too easily described as ‘predictions’ and treated as evidence and proof rather than the output of a computer program (and the lobbyists/politicians do not know this, nor care?). Also, do we see model data fed into other models and again considered as ‘data/evidence’?

    Professor Kelly from one of the enquiries:

    “(i) I take real exception to having simulation runs described as experiments (without at least the qualification of ‘computer’ experiments). It does a disservice to centuries of real experimentation and allows simulations output to be considered as real data. This last is a very serious matter, as it can lead to the idea that real ‘real data’ might be wrong simply because it disagrees with the models!

    That is turning centuries of science on its head.”

    http://climateaudit.org/2010/06/22/kellys-comments/

    —————-
    The observant might take note that Prof Kelly was one of the 16 scientists of ‘that’ WSJ article:
    “No Need to Panic About Global Warming”
    http://online.wsj.com/article/SB10001424052970204301404577171531838421366.html?mod=WSJ_hpp_RIGHTTopCarousel_1

    http://wattsupwiththat.com/2012/01/27/sixteen-prominent-scientists-publish-a-letter-in-wsj-saying-theres-no-need-to-panic-about-global-warming/

    Those with a short memory may recall that the ‘upholders’ of the consensus made a reply, which included Peter Gleick and Katharine Hayhoe

    http://online.wsj.com/article/SB10001424052970204740904577193270727472662.html?mod=wsj_share_tweet#articleTabs%3Darticle

    • Anteros

      Barry –

      Your point about model output being considered [even unwittingly] as ‘data’ is a good one. I hope Tamsin will expand on this. I was studying the ice2sea website and was surprised that [apparently] all of the input to one group of models is the output from another. And I guess the output from the latter models can then be used as input for some much larger, more general model.
      Hopefully this won’t then be used to inform us of the events of the future..

      Of course, few people make predictions any more, but it’s worth remembering that the IPCC FAR says

      Based on current model results we PREDICT: An average rate of increase of global mean temperature during the next century of about 0.3 degrees C per decade…

      [There was of course the caveat that this was based on a model that assumed few or no steps would be taken to limit the emissions of greenhouse gases, but then again, that is exactly what has happened..]

      I wonder if the dropping of the word “prediction” in recent times is due to a changing understanding of what models can (and can’t do), or something else?

        • Phyllograptus

          Models. Especially numerical simulations. As one modeler I know likes to say “models are like masturbation, if one does it for too long one comes to believe it is the real thing”
          Simulations are useful, but one always has to remember that they are “simulations”, an approximation of reality, and not to be exchanged or mistaken for the real thing!

      • Sceptical Wombat

        When people include an ellipsis (…) in their quote I am always interested in what they have left out. Perhaps Anteros could enlighten us?

        • Anteros

          Sceptical Wombat –

          Do you mean the statement that there is a specified uncertainty range of 0.2 – 0.5 Degrees C? Or that the rise would not be steady? Or that this would lead to 1 degree of warming by 2025?

          None of these materially change the substantive point – the IPCC used to make predictions. This prediction [without, or especially with, its range of uncertainty] is notable for being in remarkably poor agreement with reality.

          We’re about two thirds of the way to 2025 – can you see why the IPCC no longer make predictions?

    • Barry Woods

      Professor Philip Stott had these thoughts over ten years ago with respect to ‘scenarios’ and ‘projections’ from computer models (from a BBC programme in which Stott and Sir John Houghton were in disagreement):

      Prof Stott:
      “The problem with a chaotic coupled non-linear system as complex as climate is that you can no more predict successfully the outcome of doing something as of not doing something. Kyoto will not halt climate change. Full stop.”

      Given that he seems to be paraphrasing the TAR, why do the Met Office and others seem to think bigger and faster (and more expensive) computers will resolve this problem?

      “The climate system is a coupled non-linear chaotic system,
      and therefore the long-term prediction of future climate states
      is not possible.” – IPCC 2001 TAR (pg 771)

      That interview was enough for Mike Hulme of the Tyndall Centre to want to get those views off the airwaves…
      http://wattsupwiththat.com/2011/11/27/climategate-2-impartiality-at-the-bbc/

      I do not see that TAR statement referenced in AR4 – why not, as the issue remains with trying to model this type of system in computers? Yet Prof P Stott is of course now labelled as a ‘sceptic’ or the d-word, like Lindzen.
      http://risingtide.org.uk/hallofshame

      Both are speaking next week in the House of Commons (22nd Feb)

      But my question here is with reference to the TAR.

      • Sceptical Wombat

        The problem that I have with this oft-used quote from the IPCC is that the term “climate state” is not defined in the report. Naturally people commonly assume that it is synonymous with “climate” but I suspect it is being used in a more technical sense. Any comment Tamsin?

        • Tamsin Edwards

          The part Barry quoted is in the Executive Summary of Chapter 14 (“Advancing Our Understanding”) and refers to the end of 14.2.2.2: here (emphasis mine).

          Fortunately, many groups have performed ensemble integrations, that is, multiple integrations with a single model using identical radiative forcing scenarios but different initial conditions. Ensemble integrations yield estimates of the variability of the response for a given model. They are also useful in determining to what extent the initial conditions affect the magnitude and pattern of the response. Furthermore, many groups have now performed model integrations using similar radiative forcings. This allows ensembles of model results to be constructed (see Chapter 9, Section 9.3; see also the end of Chapter 7, Section 7.1.3 for an interesting question about ensemble formation).

          In sum, a strategy must recognise what is possible. In climate research and modelling, we should recognise that we are dealing with a coupled non-linear chaotic system, and therefore that the long-term prediction of future climate states is not possible. The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states by the generation of ensembles of model solutions. This reduces climate change to the discernment of significant differences in the statistics of such ensembles. The generation of such model ensembles will require the dedication of greatly increased computer resources and the application of new methods of model diagnosis. Addressing adequately the statistical nature of climate is computationally intensive, but such statistical information is essential.

          So it refers to the motivation for performing groups of simulations with different initial states (initial condition ensembles). These are standard in weather forecasting because the initial state is so important. They’re also used in climate, because they give a better estimate of the full statistical distribution of climate.

          For example, a single simulation gives a time series of annual bumps and wiggles. An initial condition ensemble gives you lots of time series, all with different bumps and wiggles, like this: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/figure-9-5.html

          You cannot expect a single simulation to match every bump and wiggle in the real world, but you can try / hope that the statistical properties (e.g. the trend) of multiple simulations match the statistical properties of the real world.

          That’s what it means by not predicting the climate state – it means the instantaneous state of the system. It’s not a good choice of words, because “climate” is defined statistically: it is the long-term properties of multiple states in time. But by “climate” they mean “earth system” or perhaps “atmosphere”.

          You can try to predict the long-term *properties* of the atmosphere (e.g. trend) but not the long-term *instantaneous* snapshots (e.g. year to year).

          Hope that makes sense – am a bit tired.
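
          For the curious, here is a toy sketch of the idea in Python (nothing like a real GCM: the trend, the noise model and all the numbers are invented purely for illustration). Each ensemble member has the same imposed trend but different internal variability, so individual runs disagree year to year while their fitted trends agree.

```python
# Toy initial-condition ensemble: same forcing, different internal "weather".
import numpy as np

years = np.arange(2000, 2101)
forced_trend = 0.02 * (years - years[0])   # illustrative imposed warming, in K

def one_run(seed):
    """One 'simulation': the forced trend plus red-noise internal variability."""
    r = np.random.default_rng(seed)
    noise = np.zeros(len(years))
    for t in range(1, len(years)):
        noise[t] = 0.7 * noise[t - 1] + r.normal(scale=0.1)
    return forced_trend + noise

ensemble = np.array([one_run(seed) for seed in range(20)])   # 20 members

# Individual years differ widely between members...
print("ensemble values in 2050:", ensemble[:, 50].round(2))
# ...but each member's fitted trend is close to the imposed 0.02 K/yr.
trends = [np.polyfit(years, run, 1)[0] for run in ensemble]
print("fitted trends (K/yr):", np.round(trends, 3))
```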

          • Barry Woods

            Still useless if you have unknown unknowns, variables whose magnitude you do not know, and parameters that are very poorly understood, i.e. clouds for example, where for some factors the sign is not even understood, let alone the magnitude…

            Where will the models be if the anomaly went negative for a few years or more?

            That alone would not disprove AGW (though it would make CAGW very unlikely), but the GCMs would just be scrap.

          • Sashka

            The most we can expect to achieve is the prediction of the probability distribution of the system’s future possible states

            Yes, but what system? The system described by the model or our planet? How do we know these are even close, and (if they are) in what sense?

    • Joshua

      “…but they are too easily described as ‘predictions’ and treated as evidence and proof rather than the output of a computer program…”

      If someone says that a model predicts that something is 90% likely, is that treating the model as “proof?” If not, then what do you suggest should be done if others characterize that as claiming “proof?”

  2. Sashka

    Tamsin,

    I believe it might be helpful if you illustrated your taxonomy with a few examples to separate conceptual models from physical.

    To me, all good conceptual models are physical, but not vice versa (of course). If your views are different it could be worthwhile explaining it a bit.

  3. Ed Hawkins

    I know many climate scientists are promoting a move to use ‘simulator’ rather than ‘model’ for our physical representations of the planet. One downside that I can see is that we might end up with the next Met Office ‘model’ called Hadley Centre’s Global Unified Earth System Simulator (HadGUESS), which probably isn’t the best acronym. 😉

      • Barry Woods

        Simulator – implies that it is all well understood and can provide accurate simulations..

        this is not the case. What are the exact parameters for ‘clouds’, ocean oscillations, etc.?
        In fact, what is the actual sign for some of these parameters, let alone the magnitude?

        So simulator is just as problematic: it implies that enough is understood to be able to simulate.

  4. Anteros

    Tamsin –

    A great ‘first’ post.
    I think your descriptions are clear and succinct. They are also very dense (in a non-pejorative sense). I agree with Sashka that some examples might clarify your distinctions, otherwise you could expand your descriptions a little, just to bring into focus your differentiation between a model that describes and a model that explains. Unless I’m wrong, this differentiation is quite important!

    • Tamsin Edwards

      Anteros – I was planning to extend this post to give specific examples, but thought it better to get something out. Hopefully as my posts continue it will become more clear.

      Sashka – I think conceptual model is a rather broad term, so I wouldn’t agree that they were all physical – you could have a conceptual model of an unphysical dream, emotion, or thought process, I think?

      • Sashka

        > conceptual model of an unphysical dream, emotion, or thought process

        I would prefer to separate such things into their own little class: conceptual models of “things” that are not physical reality. I don’t have a word for that.

      • Alexander Harvey

        Tamsin & Sashka

        This may confuse more than it helps, but I think you can have conceptual, statistical and physical models of systems that have no known physical reality. They have occurred in mathematics. One example of a “physical” (in the mathematical sense) and conceptual model with no known physical reality was the development of non-Euclidean geometry, in particular Riemann’s generalisation in the mid 19th century, some half a century before it came to be used by Einstein. The geometry was well formed and well understood prior to having its current physical interpretation. Mathematics has the habit of constructing models with no known physical interpretation, some based on contradictory alternatives which are not currently decidable. The sequence of primes can only be simulated, i.e. calculated. There are statistical models of the distribution of prime numbers, some based on the properties of the Riemann zeta function, yet there is no physical nor mathematical link unless and until the underlying Riemann Hypothesis is proven.

        Alex

  5. John Carpenter

    I’m glad you are starting from the ground level, I look forward to how you expand on your modeling knowledge. I hope to learn a lot about the mechanics of how models are assembled and work from the conceptual stages to the physical. Glad you are doing this!

  6. Doug Cotton

    All models are wrong simply because the greenhouse conjecture is not based on real world physics.

    == snip ==

    Thus the IPCC “backradiation” cannot affect the temperature of the surface and there can be no atmospheric radiative greenhouse effect.

    * http://climate-change-theory.com/RadiationAbsorption.html

    Doug, I snipped this for comment policy (f)(e) – though I left your first and last sentences and link to be nice 🙂 — Tamsin

    • Steveta_uk

      Tamsin, policy (e) suggests that this is re-posted to the Unthreaded section of Bishop Hill.

      What’s the poor Bishop done to deserve this ?

  7. Alexander Harvey

    Hi,

    Language is marvellous and treacherous, so it is best to be clear in our usage. I shall try to explain mine, with recognition that it is but one opinion. What a simulator is and isn’t I view as partly decidable by its construction and partly by its use; hopefully the following will clarify.

    I need first to mention the model class that I find most intriguing, the emulators, which I think would be part of the world of statistical modelling. If I describe the simulators as being synthetic, emulators would in addition be analytic, in the sense of finding or even learning relationships. My distinction is clear that a pure simulator is unchanged by its operation, whereas an emulator has a learning mode, where it modifies itself, in addition to a predictive mode. The rules followed seem different. The simulator’s rules are more general or universal, whereas the emulator’s include some that are ad hoc or specific. One can simulate things that have never happened, may never happen, or may not be according to known physical laws for our universe. Whereas emulation seems to stem from an initial instance or set of instances, either gleaned from the real world, simulations of the real world, or of other worlds. To that extent an emulator can view a simulation as an experiment capable of being mimicked or duplicated.

    It ought to be possible to make a clear description of what a physical model is and isn’t, but in reality it is more tenuous when a particular simulator doesn’t stick purely to simulation or is not used purely for simulation.

    (This is an area where I hope you can help us. There are real, and I think legitimate, concerns that the continued attempts to simulate the climate of the 20th century are prone to the risk of a creep towards emulation, whether intended or not, and it is difficult to prove otherwise. The instance is well known in advance and new information that could confirm or confound is rarely forthcoming. Going forward the distinction may also be problematic if and when climate models adopt data assimilation strategies, which may blur simulation with emulation by way of the inherent learning process. Hopefully going back and simulating past epochs about which evidence is still emerging, or was unheeded during the process of constructing the simulators, is a better and more timely test of both simulator skill and the quality of the evidence than their performance with respect to either the whole of the 20th century or the 21st.)

    In my usage, what may be commonly deemed to be simulation may not be distinguished from a mix of simulation and emulation, for we cannot know what, if anything, we have taught them. We do know that specific values have to be entered, which must be picked from some plausible distribution. This is not a bar to their utility, but highlights a need for continued investigation into which ranges of differing combinations of interrelated values and plausible initial conditions result in simulators that could plausibly include an emulation of known events or statistical relationships amongst their outcomes.

    (That may read like an attempt to kick the simulators into the long grass of uncertainty but is its converse, the need to realistically characterise the length of the grass and use whatever evidence there exists to shorten it.)

    I think that there is a clear distinction between the different classes of models in terms of their construction, their methodology, but given the state of play any current simulator is also both a statistical model, about which we can infer the likelihood of differing locations in its parameter space, and an emulator, should we allow it, even inadvertently, to learn such a location on the basis of some best fit to some particular known historical instance.

    I see our best hope being that the simulators’ parameter space (that there is more than one simulator makes them a parameter in a larger space) spans significant elements of our real world amongst all the innumerable non-real worlds that space contains. If it does then the force of the statistical models can be brought to bear fruitfully upon them. If it doesn’t then that may be shown by statistical modelling based on evidence.

    So I see a clear distinction in terms of construction but not in terms of use. I find that interesting, useful and encouraging. Others may not.

    Alex

    • Michel Crucifix

      Alex,

      Reading you, I wonder whether there could be a misunderstanding about the meaning given to ’emulator’, at least in the climate science context. An emulator (in our field) is a surrogate for the simulator, calibrated on it, whose primary purpose is to bypass the computational expense of actually running the simulator. And, in this definition, ‘simulator’ has to be understood as the ‘big computer code’. Is that the sense you were giving to it?

    • Alexander Harvey

      Michel,

      I think I do mean emulator in the sense it is used in climate science but I actually have a less clear idea of how they work compared to how a simulator works. I understand them to be a type of learning machine. You feed them data which they attempt to mimic in some way, the more you feed them the better they perform, by some meaning of the word better.

      As I comprehend them, they attach no meaning to data variables, they do not understand the experiment they are emulating in the way that a simulator must.

      In climate terms I am identifying simulators with “the big computer code”, but in general with any function derived from ‘first principles’ or the best quantitative understanding of the underlying mechanisms involved. I was also trying to express the degree to which their operation is pure, in that they are not learning machines. This is operationally similar to an open loop, whereas in practice they may be used to simulate a known experiment, e.g. the 20th century, and it becomes difficult to know whether knowledge of the known experiment has been fed back into the model, closing the loop and making them in part learning machines. That state of affairs I would regard as emulation, albeit by simulator. I am trying to draw a distinction based not just on their internal method of operation but on how they are trained. With the simulators this seems to relate to how the parameters and parameterisations are chosen. Some sort of decision process must be occurring. My understanding is that we need to explore the parameter space using whatever prior knowledge we have of the likelihood of such choices on an individual basis, but we are restricted by the cost of doing so, as each location in that space requires a separate run of the simulator. This is, I think, where emulation of the simulator is used. Then, given a well designed experiment, e.g. a Latin square or similar exploration of the space, the emulator can fill in the gaps using some statistical analysis based on the limited but still numerous training samples provided by the simulator. How this is achieved and what precisely is achieved is something that I hope to learn a great deal more about.

      My current understanding is that we can use comparison with real world evidence to make inferences concerning the likelihood of various locations in the parameter space, and use such likelihoods to select plausible ranges in that space rather than individual locations chosen by the simulators’ operators. In part I would see that as an antidote to the risk of unintended training of the simulators based on parameter choice where the real world evidence is known in advance of the decision making process. I have been led to believe that the nature of that process of picking parameters and parameterisations has hitherto been opaque and not well documented. Put simply, if the simulator runs are regarded as experiments (or perhaps trials) they may not have been well designed experiments. This needs to be corrected for and made much more transparent. The good news is that it seems that we can do just that, provided we run well designed experiments that allow us to explore the parameter space more fully and much more cheaply using emulation.

      That, right or wrong, is my understanding of the role of emulation. This is an intriguing aspect of climate science that gets very little public exposure and my knowledge is consequently both slight and partial. I know I am ignorant and, I expect, confused regarding how the emulators function and what aspects they emulate.

      Alex

      • Michel Crucifix

        What you write is music to my ears, and I suspect Tamsin’s as well. If I read it well what you recommend is, with a few details, the approach that Tamsin, I and a few other people have been promoting for a few years now. It is also in the spirit of the ‘QUMP’ project promoted by the Met Office.

        Let’s be clear about it. The development of a GCM is not an open loop process. You can’t possibly reproduce the climate system from ‘first principles’, without a bit of tuning. And the question is how much information you introduce with this tuning (some go as far as saying that the GCM is inductive —can’t find just now where I read this— but I believe that is going too far) and how much information you actually ‘deduce’ from the ‘first principles’. Given the nature of data and its complexity it is difficult to ‘measure’ these information flows in an objective way, as you do with criteria such as AIC or BIC, so it’s all a bit qualitative, subjective… and it can be a bit emotional 😉

        So the great challenge is to sample uncertainties (parametric and structural) the best we can, and estimate a likelihood given observations. The ’emulator’ in this story (which you describe admirably) mainly serves as an interface between the simulator and the observations, to make the whole thing mathematically tractable. Brute-force sampling the GCM parameters is just impossible.

        The emulator may also be ‘augmented’ to account for statistical descriptions of model errors. Hope Tamsin will say more about this : this is our great passion.

        Finally, regarding experiment design, I guess it is fair to say that experimental design has been largely overlooked during the development of climate modelling. These are concepts not all physicists are very familiar with and which, fortunately, statisticians are introducing. But it is, hopefully, the coming thing. You may want to have a look at the website of Nathan Urban at Princeton.

        What all of this tells us is that physics alone won’t solve all the problems of climate; it is not a simple question of using the ‘right physics’, because that is just impossible at the climate scale. Concepts of statistics, including likelihood, priors and experimental design, are immensely useful, even if it makes some of our good old physicist colleagues a bit suspicious at times.
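
        For concreteness, here is the sort of thing an emulator does, in a deliberately toy setting: the one-parameter ‘simulator’, the design points and the use of scikit-learn’s Gaussian process regressor are all my own illustrative choices, not the machinery of QUMP or of the papers mentioned here.

```python
# Toy emulation: fit a cheap statistical surrogate (a Gaussian process) to a
# handful of "simulator" runs, then predict the output elsewhere in parameter
# space without re-running the simulator.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_simulator(param):
    """Stand-in for an expensive model run at one parameter setting."""
    return np.sin(3.0 * param) + 0.5 * param

# A small designed experiment: simulator runs at a few chosen parameter values.
design = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
runs = np.array([toy_simulator(p[0]) for p in design])

# Train the emulator on those runs.
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
emulator.fit(design, runs)

# Predict, with uncertainty, at parameter values never given to the simulator.
new_params = np.array([[0.35], [1.25], [1.90]])
mean, std = emulator.predict(new_params, return_std=True)
for p, m, s in zip(new_params.ravel(), mean, std):
    print(f"param={p:.2f}  emulated output={m:.2f} +/- {s:.2f}")
```

        The point is only that the emulator is trained on the simulator’s own output, so exploring parameter space becomes cheap; everything else in a real application is stripped away here.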

        • Barry Woods

          Surely the right physics helps!!

          I.e. cloud – water vapour feedbacks due to CO2 ‘assumed’ to be positive (and of various orders of magnitude);
          if in actual real world atmospheric physics it is negative, then where are we?
          In nature positive feedbacks for climate are a big leap of faith and would appear to be unlikely, i.e. how would the earth get out of a positive feedback loop?

          • Sceptical Wombat

            Positive feedbacks are common in many natural and economic systems. Provided that the feedback caused by a change of size x is less than x then the feedback does not run away. This is simple high school algebra.
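
            In symbols: if an initial change $x$ induces a further change $fx$, which induces $f^2 x$, and so on, then for $|f| < 1$ the total response is a convergent geometric series rather than a runaway:

            $$ x\,(1 + f + f^2 + f^3 + \cdots) = \frac{x}{1 - f}, \qquad |f| < 1. $$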

        • Sashka

          > You can’t possibly reproduce the climate system from ‘first principles’, without a bit of tuning.

          That’s not even a question. The question is whether you can do it with as much tuning as you want. And if the answer is “yes”, then what do we mean by “reproduce”? What’s the objective criterion?

          Even more importantly, supposing you can fine-tune the model to “reproduce” the current climate, what (if anything) does it tell you about the model’s predictive ability?

          • Michel Crucifix

            OK, fair point. You can’t reproduce climate exactly, even with lots of tuning.
            Given that the ‘climate’ of a model (be it conceptual or the most sophisticated simulator) is a simplified version of reality, you have to decide how to model the layer between the model and the reality. This is what statisticians call the discrepancy, so you have to learn both about the simulator parameters and about the discrepancy (which can be formulated parametrically if it helps).

            Rougier nicely introduces this:
            Jonathan Rougier, Probabilistic inference for future climate using an ensemble of climate model evaluations, Climatic Change, 81, 247-264 2007
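
            Schematically, and only as a sketch of that framework: if $g(\theta)$ is the simulator run at parameters $\theta$, one writes the real climate $y$ and the observations $z$ as

            $$ y = g(\theta^*) + \delta, \qquad z = y + \varepsilon, $$

            where $\theta^*$ is the ‘best’ parameter setting, $\delta$ is the discrepancy (what the simulator gets wrong even at its best) and $\varepsilon$ is observation error. Calibration then has to learn about $\theta^*$ and $\delta$ together, rather than pretending $\delta = 0$.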

        • Alexander Harvey

          Michel,

          “What you write is music to my ears, and I suspect Tamsin’s as well. If I read it well what you recommend is, with a few details, the approach that Tamsin, I and a few other people have been promoting for a few years now. It is also in the spirit of the ‘QUMP’ project promoted by the Met Office.”

          This is not a coincidence; I have tried to restate, as best I can, what I know about [paleo]QUMP, hopefully without commenting too directly. If by others you include Jonathan Rougier and Michael Goldstein, I have listened to them as well. If you recall, some of this was presented at the Isaac Newton Institute (Mathematical and Statistical Approaches to Climate Modelling and Prediction) and lives on for posterity. Hour for hour, those seminars may be the best publicly accessible conceptual overview of the field (which seems broader than your specialities), but it is a lot of hours.

          In my case, these seminars increased my confidence in the process in proportion with the increase in uncertainty demonstrated. By which I mean that confidence, in its common usage, is boosted by transparency of presentation, which in this case includes the transparent characterisation of the uncertainties which are many. To steal from Tamsin: it was a presentation of climate science for grown-ups. To steal from you: it was music to my ears.

          Peter Challenor presented on experimental design which looks superficially similar to a paper by Nathan Urban I just looked up. I am content to stay at the conceptual level on this aspect, it is just good to know that people are working on it. Someone joked about “ensembles of missed opportunity” but I cannot remember who.

          I think the closed loop aspect gives rise to a fair bit of justifiable scepticism, which is why I mentioned it. Hopefully understanding how such suspicions are addressed will help.

          Alex

          • Michel Crucifix

            I definitely meant Jonty Rougier and Michael Goldstein, and other friends from the Newton Institute.

            ” Someone joked about “ensembles of missed opportunity” but I cannot remember who. ”
            Ha ha ha !!! It would be nice to know who, just to be allowed to quote him / her !

          • Sashka

            (Sorry for replying in the wrong place again, the Reply button totally went missing where I need it.)

            Thanks to Alex Harvey I have something that looks like a version of the same paper. It begins as follows:

            “A simple question will help to motivate this paper: What is the probability that a doubling of atmospheric CO2 will raise the global mean temperature by at least 2C? This seems to be a well-posed question”

            Regrettably it is not. First of all, there is a “small” matter of resolving whether the climate is chaotic or deterministic. A lot of people think that climate (defined as GMT averaged over suitably long time scale) could be deterministic and therefore (possibly) predictable but I believe it remains unproven. If the climate is deterministic, the REAL probability is either 0 or 1. Otherwise it may or may not be possible to define the probability for real climate. As far as I can tell no such effort is made.

            Defining the probability in (1)-(2) he seems to silently assume that the climate is deterministic. To introduce the probability he wraps the uncertainty around x*, which is essentially a set of forcings. The equation (1) states y=g(x*). I read it as a statement that the climate is deterministic but indeterminable, because x* is not and never will be known exactly.

            In equation (3), g() becomes the “climate simulator, y is constrained to be those components of the climate that match the simulator’s outputs”. This sounds a bit weird, because we have no way to know which components of the climate match the simulator’s outputs. But the most interesting part now is in the joint distribution F.

            Please do note the elegance of the construct. The paper is written to define the probability “correctly”. But the definition is based on an unknown (possibly non-existent or unknowable) joint distribution F that we are supposed to construct by calibrating the simulator to observations.

            How do we know that such a sequence of calibrations will ever converge? How long should it take in practice? How do we know that F is stationary? What if it isn’t? What if we get completely different results from each simulator?

            Let’s say I’m not impressed, conditional on continued reading beyond Section 2. Sorry if I missed something important. I have loads of real work to do.

          • Alexander Harvey

            Sashka,

            “A lot of people think that climate (defined as GMT averaged over suitably long time scale) could be deterministic and therefore (possibly) predictable but I believe it remains unproven.”

            If you imply that a long term averaging of temperatures will not necessarily converge, i.e. that there is no mean, then I think that the paper assumes that is not the case and that there is a well defined region Q of Y where the (presumably) mean temperature is at least 2ºC higher in a future where CO2 has been doubled than would otherwise have occurred. Even if averages do not converge they can still be computed, so a slightly different question is I think valid. A question of the form: after a doubling of CO2, will the average temperature between N years later and N + M years later be increased by at least 2ºC? This is decidable as Yes/No provided that x* is certain, or as a probability if x* is uncertain but has been drawn from a distribution. That is true whether the system is chaotic or not, once determinism has been assumed, which I think it has. I think an answer in the form of a probability is defined when the system is non-deterministic, provided something can be said of its statistics.

            In section 2, where g(·) refers to the real climate, y = g(x*) is determined solely by x*, as one would expect, but as y includes all future states it doesn’t imply whether or not the climate is chaotic, so it doesn’t imply that Q can be defined; but it does assume that y is determined, and if x* is certain then y is certain. For certain x* the probability that y is in Q is indeed either 0 or 1.

            When x* is uncertain, y is uncertain and g(·) would indeed map that correctly but Q is not I think well defined as stated. In this case the baseline for the 2ºC rise also has an uncertainty distribution. That said it is an overview and that could be dealt with.

            x* is more than a set of forcings; it is everything necessary to determine y, which includes initial conditions, all externally imposed boundary conditions, and physical constants. As I read it, g(·) maps the initial conditions, physical constants, and boundary conditions over time to y, for both future and past time. As defined, I cannot see how problems of stationarity are relevant to the joint distribution F for uncertain x*. Neither the initial conditions nor the physical constants evolve. The externally imposed boundary conditions are inputs and only need to be stated for the required time period. They may be, probably will be, non-stationary, but I cannot see that this matters.

            The phrase “constrained to be those components of the climate that match the simulator’s outputs” admits the interpretation: “limited to just those components that the simulator outputs”. E.g. under CMIP3 many desirable climate components were not output by the simulators, OHC being commonly absent from the output.

            Alex

          • Sashka

            Alex,

            Thanks for your comments.

            For a given forcing (in the broadest sense, i.e. x*) scenario, consider GMT as a function of N and M in your notation: y=GMT(N,M). The question is whether y is stable (I am using stable instead of deterministic now b/c it is the deterministic chaos that we suspect) WRT initial conditions and small variations in x*.

            If it is stable then the probability in reality is 0 or 1. If it isn’t then for all means & purposes we are dealing with an unknown distribution of future outcomes. Moreover, in the latter case, we don’t know whether (think of a parametric description) such a distribution itself is stable. In the worst case the distribution could strongly depend on N, M and details of the CO2 emission scenario.

            The bottom line is that we don’t even know what the sample space looks like. In particular, the set of achievable “y”, Y={y}, is unknown.

            The concept of modeling discrepancy between a model and reality that we cannot describe even in terms of sample space strikes me as wishful thinking. Put in other words, just because someone wrote an integral expression for expectation doesn’t mean that the definition has a meaning.

            Regarding calibrations: they propose to update the model based on ongoing comparisons with observations. Suppose the model is misspecified in such a way that it will generate the distribution that is different from unknown one in reality. For example, what if the true climate dynamics are chaotic but the model converges to the same quasi-steady state (over M years) no matter what. Do you think that periodically adjusting the model parameter to best fit the (incomplete and imperfect) observation will eventually kick the model to chaotic regime and help generate correct distribution in the end? Based on what?

            Regarding stationarity: these corrections could be a function of climate state and/or model state and/or (but most likely) current configuration of the model. In practice it means that the recalibration procedure could depend, potentially, on all of the above. Which brings me to the same set of questions again:

            How do we know that such a sequence of calibrations will ever converge to yield the correct distribution in the end? What if we get completely different results from each simulator?

          • Alexander Harvey

            Sashka,

            Thanks for continuing; you ask questions that need answering. I will deal with one that I am best equipped to answer.

            “Do you think that periodically adjusting the model parameter to best fit the (incomplete and imperfect) observation will eventually kick the model to chaotic regime and help generate correct distribution in the end? Based on what?”

            As a matter of what I think, the answer is NO. Based on my thinking that this is unlikely and, if it occurred, as it could perhaps by happenstance, we would not be able to decide on the evidence that this was the case.

            In your examples and in general, I do not hope for a correct answer and hence must settle for something that I could endorse as our best possible current judgement, and ultimately form my best possible judgement. I think I have chosen my words to best reflect what I mean.

            I think I understand, at least sufficiently for now, your concerns, which seem mostly directed at whether such questions are answerable in some meaningful way. I hope that is fair, and even if not, I think it worth considering.

            Given that they are answerable and indeed meaningful, what is our best current judgement; is it the best possible (in the sense of have we striven sufficiently), and can or should I endorse it (be content to hazard my name on it)? I will come back to the alternative: that the questions are unanswerable or absurd.

            The answer to that is NO. I do not think that we have formed our best current judgement in the sense that a judgement is not just a choice, it is a transparently reasoned (perhaps explained is a better word) decision. Further I do not believe it is sufficiently close to anything I could consider to be best possible. It is not sufficient for my endorsement; I don’t think we have tried hard enough.

            Whether these questions are inherently absurd is a matter amenable to the making of a best possible current judgement. Is there evidence that the climate system is chaotic? In one narrow sense, whether there is evidence for behaviour explicable in terms of a simple bifurcation, this question is being looked at and the answer seems to be a tentative YES. It may be, plausibly is, chaotic all the time, but this doesn’t seem to make itself obvious by way of evidence. I will put that more simply by saying that I do not judge this to be what gives rise to the greatest of the current uncertainties, which are more prosaic than profound. I think you are discussing profound uncertainties; I am saying that we haven’t sufficiently characterised the prosaic or mundane.

            For instance, let us suppose that the climate system, in which I have included how we observe it, may inexplicably, and sufficiently quickly to be salient, give rise to a 1ºC increase or decrease in the GMT temperature record. That would be surprising, interesting and hopefully informative, but would it constitute a serious widening of our then best possible current judgement for the unfolding of GMT over a century timescale? Normally, in an historical sense, that should be the case, but there is a suspicion, based on the more recent record, human activities, and doubtless other matters, that we have made centennial-scale predictability much less certain than a 1ºC increase or fall in the GMT record. As things stand I am not even sure a more gradual addition of 1ºC of variance would even be discernible from the evidence given our current understanding.

            If we should suspect that we have made the current century much less predictable than it would otherwise be or we judge has been historically the case since the last glacial epoch then we might be prudent to investigate this and form our best possible current judgement.

            If suspicions were that the GMT might increase by around 1ºC (from the present which would be nearly 2ºC above pre-industrial levels) in time for the 22nd century, I might well shrug, if they were that it was odds on to happen, I might well be concerned, but there are suspicions of increases far beyond those and that really gets my attention.

            I think the existence of such suspicions is sufficient cause for the efforts involved in reaching our best possible current judgement. Amongst those would be a striving to make such models, including the climate simulators, as informative as possible, which does not preclude a determination that they are worse than useless, i.e. founts of misinformation, if and when that can be determined. That needs to be determined in a best possible current judgement, which I see as being a matter decidable on evidence (or perhaps more broadly information) both objective and subjective.

            I do seem to share many of your concerns and I do think that they may have a practical significance should it ever become the case that decisive action (judged capable of making a discernible difference) be embarked upon. It is my subjective current judgement that a separable issue, climate navigation, poses many more and many tougher practical difficulties and uncertainties. Post such a decision, climate models become navigational aids, and we do need to make them as informative as is deemed necessary or, failing that, possible.

            If and when we arrive at such a decision I cannot but see that we will become more rather than less dependent on climate models. Viewed as a plan of action, how else will we best judge if such actions are appropriate, or even effective, and importantly how long must we wait for such important matters to be decidable.

            My views have directed me to, and been informed by, the work of people such as Jonathan Rougier, Michael Goldstein, Michel Crucifix, and others, from whom I have borrowed views which I have modified and perhaps mangled. It is my opinion that they are pointing in a direction that we need to investigate for it seems to lead to a place where at least a best current judgement can be seen to have been determined.

            Alex

        • Sashka

          In reply to Michel Crucifix’s comment of Feb 15, 9:41 PM. Oddly, the “Reply” button is no longer there (a bug maybe?) so I am replying to the previous comment.

          Unfortunately the paper doesn’t seem to be available in its entirety. The claim in the abstract that the paper “clarifies the nature of probability in this context” is quite fascinating. Could you possibly re-tell the story in your own words, at least conceptually?

          I don’t follow how one can model the discrepancy as long as we know little (if anything) about the discrepancy. Surely you can assume something but what does it have to do with reality?

          • Alexander Harvey

            Hi,

            The link I have given has a different abstract but matches the first couple of paragraphs of the main text quite closely. I suspect it is a draft but I found it worth the read.

            It has some parts worth quoting. I particularly liked this from section 5:

            “We believe our simulators are useful, but we have to be realistic about how accurate they are, otherwise they will appear to be less useful than they actually are. Therefore effective calibration requires a discrepancy.”

            This section highlights the importance of the role of the discrepancy in making the simulator (given its imperfections) more rather than less informative.

            Alex

          • Anteros

            Sashka –

            The ‘reply’ button only exists for the third nesting. For the fourth, you have to scroll up to the last comment that has a reply button, which will be indented by one line, i.e. will only be the third ‘nest’. It’s a way of stopping the nesting getting out of control and careering across the page! If in doubt, do as you did, and note the time of the comment you want to respond to.

            BTW your comments are always worth reading 🙂

      • Lindsay Lee

        Hi Alex,

        I hope this might be useful for you: http://www.atmos-chem-phys.net/11/12253/2011/acp-11-12253-2011.html

        Lee, L. A., Carslaw, K. S., Pringle, K. J., Mann, G. W., and Spracklen, D. V.: Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters, Atmos. Chem. Phys., 11, 12253-12273, doi:10.5194/acp-11-12253-2011, 2011.

        We have attempted to explain emulation (particularly Gaussian process emulation) for the aerosol modelling community but hopefully it will appeal to all!

  8. Steve Fitzpatrick

    Tamsin,

    OK, I will be patient for the more technical content. 😉

    I’m not really wild about “simulator”; sounds a bit too confident. I think “simulation” may be better.

    If you really want to explain physical models and their utility to non-scientists, there are lots of good, simple examples, like the ideal gas law, black body radiation, the first law of thermodynamics, etc. The key I think is to convey that physical modeling is an extension of rational thinking about how the world works. More complicated models directly relate our understanding of more basic processes to much more complex processes; all physical models must be consistent with our basic understandings. It is also (I think) important to show that modeling, especially conceptual modeling, is at the core of all rational understanding of the physical world. I do not know how I could think about the physical world without using conceptual models.
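
    (For readers who want to see those examples written out, their standard textbook forms are

    $$ pV = nRT, \qquad j = \sigma T^4, \qquad \Delta U = Q - W, $$

    the ideal gas law, black body emission, and the first law of thermodynamics respectively.)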

  9. Irie

    In fact all models can be located somewhere on a spectrum from statistical at one end to physically based at the other. It is arguable whether a purely physically based model exists for anything other than the simplest of systems. Certainly climate models are not purely physically based – they contain loads of conceptualised process representations, in particular associated with the land and the hydrological cycle – though they are towards the physically based end of the spectrum. Climate modellers are a bit naive on that point, in my humble, but respectful, opinion.

    • Richard Betts

      Irie

      You may have a point (although please note Tamsin’s rule about not making generalisations!)

      The difficulty of the whole problem of sub-gridscale parametrization of physical processes hit home to me fairly early in my career at the Met Office Hadley Centre, when I was just starting work on the land surface scheme, and I took a flight from London to Edinburgh. After an hour spent looking out of the window at the complexity of the land surface moving below me, and the clouds around me, I realised that I’d only passed through two GCM grid boxes 🙂

      Having said that, on a longer flight I am always pleased to find that things like the jet streams and ITCZ are in the place where the model expects them to be (good job too, since the plane was fuelled and routed on the basis of a GCM forecast!), so even though capturing the sub-gridscale stuff is clearly hard, we must be doing something right at some level!

      • Barry Woods

        ‘doing something right’ !! – GCMs forecasting hours, days ahead…

        It is just the decades out to 2050–2100 that we are talking about and very concerned about… 😉

        Same model, with the updates turned off and allowed to run (at least that is what Prof Arnell of the Walker Institute said)

      • Alexander Harvey

        Hi Richard,

        I guess you are referring to the MOSES land surface schemes. I get the impression that such schemes are becoming seen to be increasingly important yet are not exactly well known.

        I believe similar schemes were in the last round (AR4) of simulations, but I think I am right that dynamic vegetation schemes such as TRIFFID and carbon cycle schemes were not. If it is the case that more dynamic carbon schemes will be in the current round (AR5), are they likely to be in all or just some of the models?

        If it is only some of them, the increased complexity and some resultant increase in model output spread could make those that include such schemes appear to perform less well than, or simply differently from, those that don’t, which might leave some explaining to be done.

        If that is the case it is usually best to do the explaining prior to rather than after the event. Just a thought.

        Alex

      • David Young

        Richard, this is indeed where the interesting issues arise. If you look at a simpler field, fluid dynamics, you will see subgrid models for turbulence. These models are, I think, generally thought to be better than they are. They are generally too dissipative and often less accurate than simpler models based on integrated quantities and empirical data. Anyway, subgrid models seem to me to be a frontier in modeling that is just beginning to be explored. By the way, the “something the models must be getting right” seems to me to be based on very shaky theoretical foundations. It requires rather strong assumptions about the nature of the attractor and strong assumptions about the numerics which are almost certainly not satisfied. But perhaps Tamsin will address these issues. I hope to be enlightened.

        • Paul Matthews

          David, yes, definitely. Basically, people who claim they are modeling anything below the scale of the grid are kidding themselves. There is always numerical dissipation on the scale of the grid. This means that the models are massively over-damped. Of course this doesn’t mean that the large-scale features are wrong.

  10. Michel Crucifix

    Just a note: Winsberg analyses at least six possible meanings of the word ‘model’ in the context of simulation. To a good extent they apply to conceptual models as well. For example, model may mean the ‘mechanical image’ (e.g. a box model), its adaptation to a particular phenomenon, the numerical implementation (the ‘code’), the model’s ‘ad-hoc assumptions’, simulations with this model (the “projections” or whatever), or the qualitative description of the phenomenon.
    Ref:
    Eric Winsberg, Sanctioning Models: The Epistemology of Simulation, Science in Context, 12, 275-292 1999

  11. Jim Cripwell

    I have been involved with models for a long time. There is nothing wrong with the models themselves – ever. The problem is the misuse of models. In physics, the use to which models should be put, most of the time, is to help with designing the next experiment. The main misuse of models is using them to pretend one can predict the future, when they have never been validated. And the only way to validate a model is to have it predict the future with the required accuracy, a sufficient number of times, so that predictions, when compared with the hard measured data, could not have occurred by chance.

    So, please, Tamsin, can we see some emphasis on the use to which models are put, rather than on the models themselves.

    • ThePhysicsGuy

      Jim Cripwell says:

      The main misuse of models is using them to pretend one can predict the future, when they have never been validated. And the only way to validate a model is to have it predict the future with the required accuracy, a sufficient number of times, so that predictions, when compared with the hard measured data, could not have occurred by chance.

      Bravo. Could not have said it any better.

  12. John Shade

    My general predisposition is to value simple models, and to be very wary of complex ones. Simple models can illustrate aspects of theories and observations, and serve to encourage a productive environment for further development of theories (cf. another George Box aphorism about statistical methods – and so of course, models – being useful as a catalyst for discovery and invention). This requires the models not to get in the way of clear thinking, not to muddy the waters by bringing in poorly-understood complexities of their own. This is therefore a concern of mine with regard to climate models, the mighty GCMs. I can see that such computation could have some value for extrapolating the next transitions of existing systems for which there is a fair bit of data, and, crucially, a regular flow of new data to correct and update the model. Weather forecasting in other words, where I believe one benefit has been a reduced need for surface stations taking observations since the models have shown some merit in interpolating amongst them. But climate? I have not seen much to assure me of their competence there – especially at the level of providing guidance for policy decisions. It is one thing to produce speculations in the computing laboratory, where little harm may follow from blunders except for wasted time. Higher standards are required in policy world since blunders there can have huge and harmful consequences. Perhaps my fears about GCMs are ill-founded, and I will learn this through this blog. But in the meantime, here is my conceptual model: GCMs may be too complex for the human mind to readily assess their quirks, and too simple to cope adequately with the complexities of the climate system.

    • Tamsin Edwards

      Thanks John.

      GCMs may be too complex for the human mind to readily assess their quirks, and too simple to cope adequately with the complexities of the climate system

      This is an excellent summary of the question at hand (though I do want to talk about other models than GCMs). How can we understand and assess the huge and complex output of GCMs? And how simple is too simple? No easy answers I’m afraid…but these are exactly the issues to discuss.

    • Michel Crucifix

      “But in the meantime, here is my conceptual model: GCMs may be too complex for the human mind to readily assess their quirks, and too simple to cope adequately with the complexities of the climate system.”

      Fair statement, but it depends on what you call ‘adequately’. Estimating sea-ice cover in 2020 is one thing; studying climate–vegetation interactions during ice ages is another. To remain in the spirit of the title of this blog, the model may be useful, but not useful for everything. In particular, a model that is not ‘policy-relevant’ is not thereby useless.

      To comment on your first sentence: the simpler a model, the more general it looks, often the more easily it links with other theoretical edifices, and so the more comfortable we are relying on it to satisfy ourselves that we have ‘explained’ a phenomenon. This is the reason why climate scientists sometimes use more conceptual models to satisfy themselves that they have understood a phenomenon simulated by a GCM (e.g. a baroclinic wave).

      Now, what is the right approach to planetary-scale climate dynamics? GCMs remain one interesting approach (i.e. they generate useful, non-trivial information), but they will never be nearly as complex as the real system, and as such they cannot be the sole source of knowledge.

      Finally (just a thought), it may be useful to distinguish ‘modelling complexity’ (in the sense of accounting explicitly for as many interactions as possible) from ‘modelling the complexity’, i.e. reasoning on the complexity itself and its consequences for energy dissipation, entropy production and related concepts.

    • David Young

      I agree that simple models are often best because they can be understood and analyzed rigorously. But they need to be validated against actual data. In principle GCMs should have a reasonable handle on the dynamics, assuming the numerical errors are small (and this is probably not the case with current methods). But the actual small-scale processes like convection and vortex dynamics and boundary layer behaviour require sub-grid models. These are bound to be limited because the underlying phenomena exhibit nonlinear chaotic behaviour. One question I want to have answered is how these subgrid models are calibrated and how sensitive the overall behaviour is to the model constants and assumptions.

      • Tamsin Edwards

        Absolutely, that’s the raison d’être of my job and of many others, like climateprediction.net. We run the models with many different parameter values to get an overall picture of their contribution to uncertainty (as opposed to tuning, i.e. finding ‘best’ values for a single simulation). More detail will come in a post 🙂
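
        To make that concrete, here is a minimal sketch of a perturbed-parameter ensemble, assuming only a toy forcing-divided-by-feedback relationship; the parameter name, range and forcing value are invented for illustration and are not those of any real GCM or of climateprediction.net:

        ```python
        import numpy as np

        def toy_warming(feedback, forcing=3.7):
            """Toy relationship: equilibrium warming (K) = forcing / feedback."""
            return forcing / feedback

        # Perturbed-parameter ensemble: instead of tuning the uncertain feedback
        # parameter to one 'best' value, sample it over a plausible range and
        # look at the spread of the outputs.
        rng = np.random.default_rng(0)
        feedbacks = rng.uniform(0.8, 2.0, size=1000)   # W m^-2 K^-1, illustrative range
        warming = toy_warming(feedbacks)

        print(f"median warming: {np.median(warming):.2f} K")
        print(f"5-95% range: {np.percentile(warming, 5):.2f} to {np.percentile(warming, 95):.2f} K")
        ```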

  13. billc

    Thanks for the update Tamsin. I think your terminology is fine. Since I’m here I will add one thing: I think in discussions of climate we might do well to always identify when we are talking about GCMs, as opposed to any other kind of model. I mean, like after you make a post, do a search-and-destroy for the term “model” and check yourself as to whether you mean “GCM”.

  14. lucia

    The opposite of a variable is…a constant. These can be interesting too.

    I think conversations and arguments are especially interesting. We also see them all the time at climate blogs. In certain settings these constants can suddenly be called ‘adjustable parameters’, ‘tuning variables’ or ‘fudge factors’. These words get bandied around when the ‘constant’ is a factor in something that is called a “law” but is really a parameterization or closure. Von Karman’s constant in “The law of the wall” represents an example in the more respectable range for models, and people do call the parameter in that law “Von Karman’s constant”. But less respectable laws and constants do exist.

    Someone asked Tamsin for an explanation of conceptual model. I’m not her, but I’ll try to give an example by volunteering how I think we figure out the value of things that can be called “constants”. The magnitudes of the constants (or fudge factors) are often estimated by:
    1) Creating a conceptual model.
    2) The conceptual model will suggest some algebraic form for a relationship. If the conceptual model is completely right, we expect data will follow this algebraic form.
    3) The magnitude of the constant is determined by fitting data to that algebraic form. That is, the magnitude and uncertainty in our knowledge of the magnitude are determined using a statistical model that is supported by a conceptual model.
    4) Finally, the accuracy of the conceptual model itself will to a large extent be tested against data and the tests will generally involve using statistical models.

    Taking the law of the wall as an example:
    1) Conceptual model includes idea that length scale for turbulent eddies is proportional to distance from wall. This is an idea or concept.

    2) Taking the various concepts together, one concludes the velocity profile will vary logarithmically with distance from the wall but the law involves one “constant”.

    3) The magnitude of this constant is determined by collecting velocity data and performing a statistical fit between velocity data and distance from the wall. Depending on circumstances (including how many different people have taken this data and whether the constant turns out to be identical in all experiments ever performed or whether it varies from experiment to experiment), the ‘constant’ will be called a ‘constant’, ‘adjustable parameter’ or “fudge factor”. (A sketch of this kind of fit follows below.)

    4) The utility of the concept that length scales for turbulent eddies are proportional to distance from walls was and is judged based on how well the data fit the algebraic model. A robust conceptual model will hold up in numerous labs using data collected by independent groups in a large variety of circumstances. But people working in industry also sometimes use conceptual models for complex processes, find they apply in limited circumstances, and use algebraic models and constants fit to industry-specific applications to predict outcomes. Depending on who is discussing those models, the parameters will be called “fudge factors” or “adjustable parameters”.
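
    As a concrete sketch of step 3 (with synthetic data and a made-up scatter, purely for illustration): generate velocities that follow the log-law with a known constant, then recover that constant by a statistical fit.

    ```python
    import numpy as np

    # Law of the wall: u+ = (1/kappa) * ln(y+) + B.
    # Make synthetic 'measurements' with a known kappa, add scatter,
    # then recover kappa by fitting -- step 3 in the list above.
    rng = np.random.default_rng(1)
    kappa_true, B_true = 0.41, 5.0
    y_plus = np.logspace(1.5, 3.0, 50)                         # wall distances in the log region
    u_plus = (1.0 / kappa_true) * np.log(y_plus) + B_true
    u_plus = u_plus + rng.normal(0.0, 0.2, size=y_plus.size)   # measurement scatter

    # Linear fit of u+ against ln(y+): slope = 1/kappa, intercept = B.
    slope, intercept = np.polyfit(np.log(y_plus), u_plus, 1)
    print(f"fitted kappa = {1.0 / slope:.3f}, fitted B = {intercept:.2f}")
    ```

    How well the fitted constant holds up across many such datasets is then the test of the conceptual model itself, as in step 4.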

  15. Sashka

    lucia,

    I think you described just one of the species in the universe of conceptual models. There are other kinds.

    > 2) The conceptual model will suggest some algebraic form for a relationship. If the conceptual model is completely right, we expect data will follow this algebraic form.

    Algebraic form is not necessary. For example, some of the more useful models in climatology are made of boxes. You may end up with tractable algebraic relationships in the end but that’s not required. The end product could be, for example, determining stability properties of the system depending on the strength of system components coupling.

  16. lucia

    Sashka–
    Could you elaborate on what you mean by “are made of boxes”? I’m aware that climate models grid things up. I consider climate models computational models rather than conceptual models. The conceptual model underlies the computational model.

    I agree I’m probably going too far by saying conceptual models result in algebraic relations. Likely there are some that don’t; I could probably think of some. But I’m still not sure what you are driving at with the mention of boxes.

    The end product could be, for example, determining stability properties of the system depending on the strength of system components coupling.

    But couldn’t the onset of instability be a function of something used to quantify the strength of system components coupling? (Maybe, for example, temperature gradients and properties of air and water vapor?)

    • Sashka

      Basically, the box approach is the opposite of what you are used to in fluid mechanics. Instead of thinking of continuous media and discretizing it on a grid where the properties would vary smoothly, you imagine the system as a collection of boxes with homogeneous properties. For example, one box for the equatorial band, two for the mid-latitudes and two for the poles. Then you can try to define (parameterize) the heat fluxes and find the equilibria. Then nudge the system by adding, say, an albedo feedback and see if the system flips into an ice ball. Things like that.
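
      A minimal sketch of that kind of calculation, assuming a single global box with a temperature-dependent albedo; the emissivity, heat capacity and albedo curve below are invented for illustration. Started warm, it settles in a temperate state; started cold, it settles in an ‘ice ball’ state.

      ```python
      import numpy as np

      SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
      S0 = 1361.0       # solar constant, W m^-2
      EPS = 0.61        # effective emissivity (illustrative)
      C = 4.2e8         # heat capacity of ~100 m of ocean, J m^-2 K^-1 (illustrative)

      def albedo(T):
          # Albedo rises as the planet ices over (a smooth, invented curve).
          return 0.5 - 0.2 * np.tanh((T - 265.0) / 10.0)

      def equilibrate(T, dt=1.0e6, steps=5000):
          # Forward-Euler integration of  C dT/dt = absorbed solar - emitted longwave.
          for _ in range(steps):
              net_flux = 0.25 * S0 * (1.0 - albedo(T)) - EPS * SIGMA * T**4
              T += dt * net_flux / C
          return T

      # Two starting temperatures, two different equilibria.
      for T0 in (295.0, 250.0):
          print(f"start at {T0:.0f} K -> settles near {equilibrate(T0):.0f} K")
      ```

      Because only a couple of numbers are being tracked, the equilibria and their stability can also be worked out on paper, which is the point of the box approach.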

  17. Tamsin Edwards

    Been a busy day at the ice2sea meeting, so I’ve been grabbing the odd minute between sessions to read your posts – thanks for such interesting contributions.

    I was going to do the next post on physical models, but if there’s room/time I’ll add in statistical models such as emulators too.

  18. lucia

    Sashka–

    Instead of thinking of continuous media and discretizing it on a grid where the properties would vary smoothly, you imagine the system as a collection of boxes with homogeneous properties.

    This is not the opposite of what we do in fluid dynamics. Lots of things are done in fluid dynamics (and heat transfer), including using lumped-parameter models where we have what might be called “boxes” which we treat as having homogeneous properties.

    But I still don’t see how the “box” part is the conceptual model. Isn’t the conceptual part that heat flows from hot to cold according to some principle (e.g. conduction, convection, radiation)? What I’m trying to understand is how you are differentiating between what you call the conceptual and computational aspects.

    I know what stability is, but I don’t know how your linking to a book answers my question to you. Maybe I didn’t express it clearly enough. You seemed to be suggesting that “The end product could be, for example, determining stability properties of the system depending on the strength of system components coupling.” is something obviously different from an algebraic relationship. But I’m not sure how you think it differs. (I grant that it might be a constitutive relationship involving integrals or differentials. But somehow I don’t think you’d quibble over whether that sort of thing would be included in ‘algebraic’.)

    Without telling me to read an entire book to learn what you mean– could you clarify what you are trying to convey so I can better understand it?

    • lucia

      Sashka – I should add: looking at chapter 11, figure 11.5 is precisely what I would describe as “an algebraic relation” and seems to be the end result of the conceptual model. Also, the sort of modeling in that chapter looks exactly like what is done in engineering fluid mechanics and heat transfer all the time. But I would not count the “conceptual” as really being the boxes by themselves; the concept involves applying conservation of mass, momentum and energy, positing constitutive relations for transport mechanisms, and then deciding on a certain level of simplification relative to the full problem.

      • Sashka

        Google doesn’t show that figure to me but I certainly agree with the rest of your comment. I don’t think I said anything else. Certainly didn’t mean to.

    • Sashka

      “Opposite” was not the best word to use. When you do finite elements (or even finite differences, like particle-in-cell) you essentially do the same thing, i.e. integrating over the box. However, in fluid dynamics, unless you are looking for an analytical approximation, you are usually into a big calculation. Your boxes or your grid are meant to represent a smooth solution. In box models this is not a goal. The point is that there are very few boxes, so that you can compute a lot on paper.

      I certainly didn’t mean to get you to read the whole book. You asked me to elaborate on what I meant by “made of boxes”, so I thought pointing to a book chapter would be appropriate. Barry explains it better than I would.

      I’m not sure why my views are so interesting here (it’s not my blog) but FWIW: to me, the line in the sand between conceptual and other models is “computability”. If you end up with an algebraic relationship (you are right: this is the most useful case) then it’s certainly conceptual. An integral is the same. I would even include an ODE.

      But if your end product is a 3-D array of numbers then you are certainly on the other side of the great divide.

  19. Sceptical Wombat

    For what it’s worth, as a mathematician, I prefer to think of “variables” as functions – most commonly from some set of real-world or conceptual entities to the “real numbers”. “Random variables” in particular are functions from a sample space, most commonly to the real numbers.

    Thinking of variables as functions avoids getting confused by trying to explain the difference between a variable and an arbitrary constant or what “holding the variable constant” means etc.

    • Vaughan Pratt

      That’s certainly how I think of variables, namely as functions.

      When you have two variables x and y taking values in K, one can type them respectively as x: X -> K and y: Y -> K, where X and Y are their respective domains of variation.

      In that case it is also convenient to treat the pair (x,y) as itself a variable taking values in K, of type (x,y): X×Y -> K. (Usually (x,y) is written with angle brackets, but this is awkward in HTML.) Here X×Y is a larger domain of variation, and X and Y are projections of it that hide part.

      Hiding the whole of X×Y gives the domain I such that for any X, X×I and I×X are both isomorphic to X, and hence to each other. A constant k can then be defined as a function k: I -> K.

      For any X there is a unique function !: X -> I.

      Some functions are “more constant” than others in that more of the “big picture” domain of variation is hidden from them. Thus (x,z) is more constant than (x,y,z) because the domain Y of y is hidden from it.

      The notation X --x--> K as a synonym for x: X -> K is sometimes convenient. The distinction between a constant k: I -> K, or I --k--> K, and a constant function x: X -> K, or X --x--> K, can be drawn by calling x a constant function just when it can be presented as the composite X --!--> I --k--> K. Whereas a constant is not offered X to begin with, a constant function is offered X but chooses to ignore it: think of ! as the function that erases the information in X. The outcome of erasing all the information in X is unique, whence there is only one such erasing function for each X. The difference between the erasing function for X and that for Y is that they erase information obtained from different sources.
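
      A loose rendering of this in code, with invented domains and values; it only aims to capture the distinction between a constant and a constant function.

      ```python
      # A 'variable' as a function from a domain of variation to a value set K.
      X = [0, 1, 2, 3]          # a domain of variation (illustrative)
      I = [None]                # the one-element domain

      x = lambda i: 2.0 * i     # a variable        x: X -> K
      k = lambda _: 3.14        # a constant        k: I -> K
      erase = lambda i: None    # the erasing map   !: X -> I (forgets everything)

      # A constant *function* on X is the composite  X --!--> I --k--> K:
      # it is offered X but chooses to ignore it.
      const_on_X = lambda i: k(erase(i))

      print([x(i) for i in X])           # varies over the domain
      print([const_on_X(i) for i in X])  # the same value at every point
      ```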

  20. John Costigane

    Tamsin,

    Lines seem to be being drawn between the two sides, which might lead to the heat usually seen on other sites, unfortunately. If you consider the climate models as a work in progress, deficient in natural variability for example, both sides can make positive contributions, necessary for the improvement of climatology, not just the messaging aspect. Open-mindedness is essential on both sides to achieve this.

    • Tamsin Edwards

      I’m not sure which lines you are referring to, John. I think this comment thread has been polite and productive. And by ‘you’ do you mean me, or ‘one’? Do you think I or others have been messaging, or not open-minded? Just interested to hear more detail.

      • John Costigane

        Tamsin,

        Open-mindedness for sceptics, like myself, is to look at the climate models, see the good points, and offer positive comments, accepting that they are not all bad.

        Open-mindedness for modelers, like yourself, is to consider the CO2 issue as possibly overblown in importance and to consider alternatives which also affect the situation.

        Thinking ‘outside the box’ covers both perspectives and should help with merging towards a common viewpoint, i.e. promoting good science for its own sake.

        Things are fine at the moment, but I sense a hardening of positions; I’m pointing no fingers, to keep the peace.

  21. Pharos

    I can well relate to your three model categories (conceptual, statistical and physical) by analogy to subsurface geology, in for example reservoir characterisation. The conceptual model would be the geologist’s interpretation of the sedimentary depositional environment of the reservoir; the statistical model would be, for example, reservoir porosity and permeability predicted from known well data control points and regional trends such as progressive porosity loss due to burial compaction; and the physical model the 3D gridded geometry of the reservoir layers, layer boundary discontinuities and fault compartmentalisation.

    In this analogy, the physical model subdivides into the static model, showing the geometry of the reservoir in its virgin state, and the dynamic physical model, predicting fluid flows and pressures under various production scenarios.

  22. tonyb

    Tamsin

    With regard to a GCM, what parameters would be used to construct it: for example, land surface temperatures and sea surface temperatures?
    Also would you just accept the data you use as being 100% reliable?
    How do you factor in natural variability?

    I saw the Met Office (I think) constructing a GCM years ago but there seem to be many more parameters these days.
    tonyb

  23. Ben B

    I like your clarity in defining conceptual, statistical or physical models.
    I wonder, however, whether we need to extend the definition of “physical model” to something like “mechanistic model”. This was touched on by Irie and Richard Betts earlier in the thread. As climate models have evolved into earth system models, they have increasingly needed to incorporate information from chemical and biological processes. Representation of these processes (physical or not) is mechanistic. For example, sulphur dioxide gas can be oxidised to form sulphates (aerosol particles, which interact with clouds, radiation etc). The model prescribes the mechanism (not the resulting sulphate concentration, which emerges when the simulation is driven by observed emissions of sulphur dioxide). While this all falls under your description of “physical”, I think there is a property of physical simulators which doesn’t necessarily extend to wider mechanistic simulators. Physical systems are bound by very strong (physical) constraints. Your physical simulator has got to conserve energy, momentum and mass (something which must be as true for future climate as it is today), and failure to do so renders your simulator non-physical. Our faith (or not) in representations of many chemical or biological processes is based solely on how well a given process is observed in the real world.

    Regarding physical models vs simulators – I only started using “simulator” after excessive exposure to statisticians who misinterpreted what I meant by “model”. I think I agree that “simulator” is less ambiguous.

  24. Joshua

    Tamsin –

    I don’t know if you’ve already seen this, but I thought you might find it interesting. It is from Isaac Held’s first blog post:

    I call myself an atmospheric or climate dynamicist/theorist/modeler. I am sure that there are philosophers of science who distinguish between the terms “theory” and “model”, but I don’t. I work with a range of theories of different kinds; when these reach a certain level of complexity they are typically referred to as computer models. The most relevant distinction relates to the purpose of the model. Some models are meant to improve our understanding of the climate system, not to simulate it with any precision. I like to talk about building a hierarchy of these models designed to improve and encapsulate our understanding. The most comprehensive models can be thought of as our best attempts at simulation, limited by available computer resources and our understanding of the effective governing dynamics on space and time scales resolvable with those resources.

    http://www.gfdl.noaa.gov/blog/isaac-held/2011/02/17/1-introduction/#comments

    • Anteros

      Joshua –

      Well posted. It took me a while to identify what’s different in Held’s approach compared to most modellers (and indeed climate scientists in general – at least those in the public eye). I think it’s that the emphasis is on furthering understanding; if you like, assembling something to see if it fits. It is about putting something into the models. In comparison, much of climate science seems to be about taking something out of the models – predictions, confirmation of expectations, answers to worrying questions. Obviously, that is true for almost all of the climate science that finds its way into the domain of the CAGW debate.

      An example of the latter is from Tamsin’s ice2sea group, part of whose task is to provide information to the IPCC about what is likely to happen in the future. This is clearly less about academic understanding than is implied by Held’s approach.

      The ice2sea website has a Policy page where it describes what it is going to give to the IPCC –

      A collective view of the likelihood of catastrophic sea-level rise

      Apart from the (to my mind) rather insidiously unscientific use of ‘catastrophic’, isn’t this quite a contrast with what Isaac Held describes as the essence of his work?

      I think it is an interesting contrast.

      P.S. Perhaps off topic, but I remember now the ‘flavour’ of the distinction that I was trying to recall. Years ago I went on a yoga retreat and in one of the talks we were informed that instead of being concerned with what we could get out of our practice, we should concern ourselves with what we could put into it. The analogy might imply that I think there is something wrong with trying to make predictions about the future – I don’t, but I think the comparison is very interesting.

      • Tamsin Edwards

        Hi Anteros and Joshua,

        I went to a good winter school / workshop co-organised by Isaac Held – it was big on the usefulness of ‘toy’ rather than complex models (Reducing the Uncertainty of Global Warming, Jerusalem, Jan 2009). The spectrum of models is my planned next post, when I find a minute 🙂

        There is a lot of “furthering understanding” going on in climate science, but it gets less attention than the predictive stuff. For example, Bristol (BRIDGE) does a lot of palaeoclimate work which fits into both “understanding the earth system” and “understanding climate model behaviour outside the 20th century”. We use complex GCMs but also simpler ones like GENIE. We also use energy balance models for teaching, and Jonty Rougier and Michel Crucifix have been using very simple (dynamical stochastic) models to represent ice-age cycles. Probably there are other examples.

        Ice2sea is also doing a lot of work in improving understanding – for example, taking more measurements to help understand whether basal lubrication is important, and using simple models to try and understand which are the important drivers of calving. That policy list you quote from has as number 1 “Improved understanding of the key processes that control how glacial systems respond to atmospheric and oceanic climate change.”

        • Anteros

          Tamsin –

          Thanks for the comprehensive reply. I’ll be a bit more judicious of my ice2sea quotes in future…

          I occasionally look at Isaac Held’s work but always find it is simply too technical for me. I did however have no problem with your captcha sum 😉

      • Joshua

        Anteros –

        From where I sit, we must all be careful of building falsely dichotomous constructions. The tendency to do so is a very fundamental attribute of our reasoning processes. In fact, dichotomous relationships are one of the fundamental models that we use in order to reason – so the danger of building false ones is ever present.

        For me, the key part of that excerpt from Held was the following:

        I am sure that there are philosophers of science who distinguish between the terms “theory” and “model”, but I don’t.

        I feel that much of the criticism that I see of climate modeling is based on, in fact, models, but sometimes the “modelers” (e.g. “skeptics”) are either not recognizing their own models as models (by claiming a false dichotomy between their own theories and other people’s models more generally), or relying on a flawed model/theory which falsely characterizes the models of climate scientists as something other than a method for estimating probabilities (in other words, a theory) – and instead and as a result, claiming that any existence of uncertainty invalidates the climate scientists’ models.

        This is what I was going for in an exchange with Theo on a previous thread – and in my comment above to Barry Woods.

        This is also why I feel that you can’t spend too much time on explicating the foundational definitions of what we’re discussing. If we have different foundational definitions, then it stands to reason that we could never reach a point of mutual understanding, and if we can’t reach a point of mutual understanding, we can’t reach a point of mutual agreement. And so then, all we’re trying to do is win a battle (a battle I think is unwinnable because the power is too evenly distributed on either side).

        And this goes back even further, in my view, because so much of how these foundational definitions are formed, as it turns out, is situational; “skeptics” and “realists” change their foundational definitions, often, to conveniently lay the groundwork for the conclusions they want to reach. Debaters on both sides alternately differentiate their models from theories, and conflate them, depending on the desired outcome. When it suits their purpose, their own theories are seen as models (which are absolute in their correctness) and the models of others are seen as mere theories (as distinguished from models which are well-qualified conceptualizations of probability) that are born out of tribalism.

        “Skeptics” see climate models as nothing other than biased theories. “Realists” see skepticism as a mere theory (as opposed to a qualified model – say, that “consensus” science has a certain propensity towards error). “Realists” see climate models as non theory-like predictions of the future, and “skeptics” see “climategate” as a foolproof model that invalidates the work of climate scientists.

        So that’s my preamble to explaining the following statement: no climate model or skeptical theory is anything other than a method of furthering understanding. There might be a spectrum of models/theories, with models/theories as a method of exploration at one end and models/theories as a method for prediction at the other, but if anyone views a climate model as separate and distinct from being a theory, and in fact from the interrelated web of “theories” and “models” that are the basis for how we all reason, then in my view there must be some bias involved. That could be a bias towards rejecting the climate model for being insufficiently foolproof, or it could be a bias towards blind acceptance of a climate model as being fully predictive.

        Obviously, most others engaged in the climate debate, those from both sides, will disagree with me on this. But I have a hard time accepting the input of anyone in the climate debate if they can’t state the following as an underlying principle:

        I am sure that there are philosophers of science who distinguish between the terms “theory” and “model”, but I don’t.

        • Joshua

          And Anteros –

          I’m reasonably sure that the writing in that comment reads like 10 lbs of sh!t stuffed into a 5 lb. bag. Believe it or not, it makes sense in my head. I’d appreciate any questions that might help clarify any of that comment outside of the tiny % that might be readily comprehensible. The worst thing that could happen would be that I’d try to explain it more clearly and fail to be able to do so.

        • Anteros

          Joshua –

          I go along with much of what you say.

          There is for me a little irony in you saying –

          we must all be careful of building falsely dichotomous constructions

          – and all the while persisting in using the terms “skeptic” and “realist”. One of which is an insult, and one of which isn’t. I have to persuade myself that that is just a jibe you think is part of the partisan discourse, but I think it does you no favours if you seriously want some good-faith discourse. It means that before you’ve even sat down in the cafeteria, you’ve already flicked a couple of blobs of jello into someone’s face.

          My primary objection – apart from the false dichotomy – is that there isn’t even any pretence at a link between philosophical scepticism and the disbelief in CAGW, as I’m sure you know. Much more appropriate would be ‘disbelievers’. The letters in “skeptic” are the same – it even has the same spelling – but there is almost as little connection as there is between people who are worried about the future and the term “realist”. Except when used with enormous irony.

          In a mirror image to your claim that “skeptic” and “realist” are taxonomically neutral, I could offer up “alarmist” and “realist” which I imagine have the same kind of resonance for you as the two you use have for a realist like myself.

          It seems to me that “alarmed” and “not-alarmed” are the relevant neutral terms – however clunky, especially because the words are from the appropriate lexicon – that of emotion.

          This leads me to perhaps my one substantive disagreement over this area of the topic, which is that the beliefs of the people involved in the climate debate have very little to do with reasoning. I think reasoning is noticeable by its absence when people are imagining how the future will be – and how they feel about it. The two operative words already used there are ‘imagining’ and ‘feel’. To suppose that reasoning processes are involved misses how we get our picturing of potential events.

          I think it is true that ‘reasoning’ or at least some cognitive processing occurs after we have an emotional, imaginative picture, and indeed can bolster, support, and make consistent our feeling-vision, but I don’t think that such processes have much part to play in us arriving at our pictures in the first place. When you talk of potential catastrophe, I don’t think that it is a reasoning process that has led you there. The same is true of course for those of a realistic disposition whose imaginations don’t conjure up doom and destruction – there is equally little reasoning involved.

          What I would say though, is that it is possible to use reasoning to reduce fearful imagining, although my experience is that once people have a good strong picture of ‘bad things’ happening in the future, they are unbelievably reluctant to let go of them. And if one fear comes up against an unfortunate reality [ie no doom] then imagination can produce dozens more in a matter of minutes.

          One example of where people have been persuaded to let go of worries concerns the prospect of things ‘running out’. It is possible for an odd reason. Obviously it helps to have a look at the history of things not running out, but people can still convince themselves that the running out happens to be just about to occur. However, what seems to be useful in ameliorating people’s irrational worry is finding out that throughout the history of the resource in question, people have persistently been worried about the resource running out. To see the persistent history of irrational worry seems to be a method of diminishing the force of the imagination that creates it in the first place. It is, however, a rare event – primarily because most people will let go of whatever pleasure or happiness you like rather than let go of their worries.

          To link up with what you were saying I would maybe substitute ‘imagination’ for theory – I think some post-imagining theory gets created, but mostly to make sense of the picture. Your favourite, motivated reasoning, perhaps. But importantly, for me, it makes sense of, rather than creates.

          So, people imagine something worrying to some degree or they don’t [ although I’ll grant the echo of worry about the worriers or “alarm about the alarmed” in all the partisan people you encounter, especially among your conservative American friends]

          A way of supporting this notion is that if we genuinely used reasoning to come to our beliefs about the future, they would be susceptible to reasoning to change them. It seems to me that they are not, because they are arrived at by what we find it easiest to imagine – once we have a clear [and emotionally charged] picture, it tends to stay stuck.

          • Anteros

            Joshua –

            I take your second comment seriously.

            I went off on my tangent [which incidentally is pretty much how I see the whole climate debate] because I wasn’t 100% sure I understood where you were coming from. It’s late here – I’ll spend some thinking [not imagining..] time on it tomorrow, and get back to you.

          • Joshua

            Anteros –

            One of which is an insult, and one of which isn’t.

            I don’t see it that way. I do think that people are responsible for considering how other people hear what they say in addition to how they intend to sound, but…

            You and I have been here a number of times before. I use the terms realist and skeptic because those are the terms that each side likes to use to refer to themselves. I put them both in quotes because I think that at least in some cases, on both sides, I can only use the terms ostensibly and I think that the quotation marks reflect that connotation. I’ve actually given a lot of thought to what terms to use.

            I’m not particularly happy with these terms, and if I found better terminology, I’d use it. The ones that you have proposed to me, in my view, are inherently biased – much more so than the terms I use. So – you know how I intend the terms. If you still take them in a way I don’t intend, I think it’s unfortunate, but if you know how I intend to use the terms, then I think that you should be able to determine whether you consider it an insult or not. I will take no offense to whatever terms you use. You can call me a warmist, a putz, an idiot, a religious zealot, self-loathing (in the context of a discussion of my being Jewish), a racist, a misogynist, a bigot, arrogant, a liar, and a long list of other things. I’ve been analogized to eugenicists, to Lysenkoists (ironically, very specific and denigrating parallels being drawn by people who voice great objection to being called deniers – even though I don’t think that term is inherently linked to the term holocaust denier). I’ve been told that I’m indifferent to the deaths of millions in my zeal to impose a totalitarian socialist state in my goal to destroy capitalism. I take no offense. I’m not a victim of whatever terms someone wants to use for me, I don’t live in any pigeonhole that anyone might want to put me into.

            Regardless of how you refer to me or anyone else, I will focus on the intended content of your words. There is always something of a gap between what one person intends to say and what the next person hears, and I will try to clarify my understanding of your words to see if I can understand you clearly, and upon clarification of your meaning, I will not insist that the meaning of your words is something other than what you tell me they mean. And quite honestly, Anteros, I’m tired of talking with you about this. I am interested, however, in your reaction to the other content of my post – I was struggling to say something that was on the edge of my brain and only loosely coherent, and I look forward to you getting past your negative reaction to the terms “realist” and “skeptic.”

          • Anteros

            Joshua –
            I am indeed struggling with your substantive point. I nearly grasp it and it vanishes..
            Leave it with me for a bit.

            I can say for now that in some way I’m not sure I see the importance of either distinguishing between models and theories, or alternatively making no distinction. Are there two things or one? Does it matter? – by that I mean can we not sometimes make use of a distinction and sometimes accept that the ideas conceptually (and functionally) serve the same purpose?

            My guess is most people/scientists/modellers would indeed agree with the philosophers of science and say in many circumstances that a useful and functional distinction can be drawn between model and theory and that no problem is created by doing so.

            As I say, I’ll ponder it a little more, but already I’m going to say that I don’t think reasoning has very much (at all) to do with this debate/disagreement. The closest I see it coming is ‘rationalising’ – which is no more than adjusting various meanings such that there is a reasonable amount of consistency and coherence. All the important processes are something other than reasoning.

          • Joshua

            Anteros –

            Thanks.

            I find myself in agreement with this comment:

            I can say for now that in some way I’m not sure I see the importance of either distinguishing between models and theories, or alternatively making no distinction.

            And I think it is consistent with what I was saying about theory and model being at opposite ends of a continuum – where the extreme ends are distinct but not mutually exclusive.

            I find myself in disagreement with this comment:

            I can say for now that in some way I’m not sure I see the importance of either distinguishing between models and theories, or alternatively making no distinction.

            Well – in agreement with the first clause but not with the second. Although I think making a distinction can be useful as a frame of reference, doing so brings an inherent problem that needs to be addressed – the problem of a false dichotomy. That said, I have to think about whether the statement of Held that I excerpted might need to be reinterpreted.

            I’m having some trouble putting together the ideas here:

            I’m going to say that I don’t think reasoning has very much (at all) to do with this debate/disagreement. The closest I see it coming is ‘rationalising’ – which is no more than adjusting various meanings such that there is a reasonable amount of consistency and coherence. All the important processes are something other than reasoning.

          • Anteros

            Joshua –

            The paragraph you quoted was a hint of a recapitulation of the 2nd half of my comment at 11.10pm on the 20th. After the bit about realist/skeptic.. My point is that we don’t arrive at our beliefs about the future by the process of reasoning. We do it by imagining, looking, seeing, and feeling. Later, we rationalise to assemble the picture coherently, to rule out any inconsistencies and to justify our rapidly forming prejudices.

            I believe that a lot of the time, ‘thinking’ about what we see and expect and fear is not much more than an epiphenomenon. A tweaking of the picture we’ve created just so it all fits together [including our demonising of opponents, ‘seeing’ what we want to see, and so on].

            Some people, of course, do actually reason more than others. Feynman was somebody who appeared to reason a great deal. Stephen Schneider very much less – being extremely motivated by emotion. I’m currently reading a book of his which has the subtitle “Inside the battle to save Earth’s climate”. I don’t think he arrived at this picture of what he was doing by reasoning – I don’t think reasoning had the faintest thing to do with it. I don’t mean that as a condemnation, but I’m very sceptical of statements that people have used ‘evidence’ to come to a view of the world. Mostly the view comes first, and the evidence is found later.

            As I said in the earlier comment – if that wasn’t the case, why do so few of us ever change our view of things?

          • Anteros

            Joshua –

            Just a topical reference for this. I was reading a Gleick related piece this morning and apparently he was described by the BBC in 2001 as ‘visionary’. Given my current thinking, that seemed very poignant.
            My impression is that he’s obviously bright enough to be an academic, but like you say, not very smart. But in an important sense, he’s a ‘visionary’. He actually ‘sees’ the future – or what he thinks will be the future – which I think informs his passion. He thinks that the sceptics are attacking science because he thinks science aligns with his ‘vision’.

            I can understand his frustration – really. But it makes him something of a fundamentalist – a believer. Which is ironic when we think of some of the connotations of dispassionate ‘science’.

            Perhaps Gleick (I’m only guessing) is someone who doesn’t do a great deal of a particular type of thinking. Self-critical, or something like that.

          • Joshua

            Perhaps Gleick (I’m only guessing) is someone who doesn’t do a great deal of a particular type of thinking. Self-critical, or something like that.

            I think that is a reasonable speculation. For him to fail to connect his “concern” about the politicization of the science to his own politicization of the science suggests to me a lack of self-critical analysis. Maybe not (he may have just rationalized his own politicization due to his tribal affiliation), but I suspect he was not open to self-critical examination. Bringing it back to the subject at hand, I think that it suggests he had a skewed vision of his own “theory” as some objective and scientific “model.” Which brings me back to the dangers of a false dichotomy. We all use models in our reasoning, models that are theories, that are models, which are theories. Needless to say, I don’t see him as being anything near unique in that sense – and that is why I object when I see “skeptics,” in my view, demonizing the notion of “models” through mischaracterizing the uncertainty they rest upon (see, again, my comment above to Barry Woods), and falsely creating a dichotomy between their own theories and the notion of the inherent fallibility of models.

    • Anteros

      Joshua –

      I think pondering for a while was a good idea. Am I right in thinking that you’re making a distinction between models/theories that are made relatively explicit [climate models, say] and those we use implicitly but are maybe unaware of? If so, I agree with you – others’ models that are brought into the open can be criticised, whereas the ones we use unwittingly are unavailable for scrutiny. We may assume ours are perfect and objective; we may not even realise we have them or are using them – but they certainly won’t receive the force of our criticism in the way that our opponents’ ‘published’ models do.

      I think there is something more generally true here. The way we see the world obviously makes sense – our beliefs appear true to us. The fact that other people see the world differently often makes us think not only that they are wrong but that they are often wilfully wrong. Reading round the blogosphere recently perhaps more than I usually do, I have been amazed by the prevalence of the accusation of ‘lying’. It seems that everybody is accusing everybody else of lying!

      A bit baffling. I know this ‘not to do with science’ debate often gets the ‘good versus evil’ landscape thrust underneath it, but really. Surely, anyone with a tiny bit of understanding will understand that a general accusation of lying should be accompanied by an admission of lying, and the denial of lying should be accompanied by no accusation of lying? Too much to ask perhaps.

      One slightly self-referential result of this is that when I hear a group of people claiming the moral high ground [also painfully prevalent] I think they by definition don’t have it. The moral high ground is not claiming [or even believing you have] it.

      I came across a comment by Lubos Motl that I thought would exemplify some of this, and which might resonate with you – the second comment here – http://scienceblogs.com/classm/2012/02/peter_gleicks_alleged_crime.php?utm_source=networkbanner&utm_medium=link

    • Tamsin Edwards

      Ha, thanks. I think one point of this blog is to uncover more of the quiet majority of climate scientists (colleagues etc) that are perfectly objective but not necessarily proactive in challenging misrepresentations of science by the media or activists (I am working on them… 🙂 ). Certainly the vast majority of working climate and glaciology modellers that have voiced an opinion to me (Twitter and in person) are very supportive of the blog name and angle.

  25. Swiss Bob

    Hi, Prof Betts directed me here, and now I understand the blog naming furore that had gone a little over my head!

    Good luck and don’t take the commenters too seriously, after a while you’ll hardly notice the idiots 🙂

  26. Just another hillbilly

    Great blog! Just fantastic…
    As a finance modeler, I am struck by the length of time climate model projections are allowed to be considered reliable while observable data continues to diverge farther and farther from the projections. I would like to 2nd or 3rd the request to discuss the criteria for falsification of models. In the world of finance, no criteria are necessary. Either your models make money for clients or they don’t. The ones that don’t consistently make money don’t last long. However, because humans are fallible and prone to psychological biases, often these models are abandoned at the exact wrong time and end up roaring back to life, leaving their investors with losses and an acute sense of bad timing. Since humans tend to make significant changes in behavior when psychological pain reaches a crescendo, there needs to be a serious discussion of falsification. Else, I fear that climate models may be abandoned, much like financial models, at the exact wrong time.

    This analogy can be stretched further, since climate models are now being used to generate their very own field of finance, which is green energy. If the models appear to have failed, and the money spigots get turned off, it might be a very grave situation for the planet and humankind if the models are indeed correct but have simply become out of sync with observable data for a period of time.

    I wish you the best of luck with your blog and models!

  27. Bernd Felsche

    I hope I rose to the intellectual challenge to achieve posting.

    Climate models fit “none of the above”. But they would fit the category of “computer game”, where one doesn’t have to honour physical laws, address uncertainties or achieve anything of value other than enjoying the playing of the game.

    For models to be useful for anything more than amusement, one has to incorporate all factors that are plausibly significant. The significance test of each factor must be considered carefully in quasi-chaotic systems. One must understand the range of magnitude of the factors and how that variability could influence the system as a whole. A very small change in one factor may not immediately produce a “significant” change. The significance is determined by the quanta of perturbation that results in the system changing overall. But one should not ignore that parts of the system may become more sensitive to another factor’s change as they themselves change.

    Think of it as a small river into which you dump some rocks in one place so that you can jump to the other side. There are already dozens of factors that determine how the system will behave overall; the effect of the river’s flow a long way downstream. They include the flow rate and level of the river, the structure of the river’s bed and the edges. The overall topography. The size of the rock(s) you place in the river. Flora and fauna in and around the river.

    The short-term local effect may be significant, with e.g. a rivulet causing a slight diversion of excess flow, but downstream, there’s no change because the topography and soil structure results in the rivulet rejoining the main flow shortly after the obstruction. An even smaller change is simply one of greater streamflow velocity around the rocks with a (necessarily) slightly higher pressure immediately upstream of the rocks.

    But there are plausible long-term effects on the year to decadal scale. Especially when one takes into account the variability of flow from upstream and vegetation and wildlife around the waters. Branches, leaves and sediments may accumulate around the rocks, leading to a greater obstruction over time, with greater effects than would be calculable using the state of the system at the outset.

    Fundamental uncertainties of deterministic models can only be held within reason if the state of the system is “sufficiently deterministic” (i.e. can be measured within sufficient tolerance) and the results of computations, when taking into account all the accumulated errors, are still within useful bounds.

    I dealt briefly with just one aspect of “climate models” in my own blog posting some months ago, Global Warming?, which addresses the lack of concern for the instantaneous enthalpy: something that would actually tell us whether there is a change in the amount of heat energy within the climate system.

    (So much for my initial intent simply to have the 100th comment.)

    • Steven Mosher

      “For models to be useful for anything more than amusement, one has to incorporate all factors that are plausibly significant. ”

      No. Let’s build a simple model of you walking to the store. That model is simple:

      D = R*T. And let’s say that your rate is 3-4 mph. So, modelling your walk to the store, we note that your store is 8 miles from your house. Our model of you walking tells us that it will take you no less than 2 hours to walk to the store. Our model doesn’t simulate hills, or head winds, or whether it is snowing, or any number of other factors that could slow you down. Yet these are plausibly significant. But the model is still useful. How?

      Let’s suppose it’s 5 o’clock and the store closes at 6. Is your model useful in your decision about whether to try to make it to the store before it closes? Yup. Even though it lacks fidelity, it has good enough fidelity to rule out certain things.

      Lesson: you CANNOT determine usefulness without first specifying the decisions you need to make and the error you can live with, e.g. the cost of a bad decision.
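
      The arithmetic of that decision, written out as a sketch using the distance and speeds above:

      ```python
      # D = R * T, ignoring hills, head winds and snow.
      distance_miles = 8.0
      best_case_speed_mph = 4.0      # top of the assumed 3-4 mph walking range
      hours_until_close = 1.0        # it is 5 o'clock and the store closes at 6

      best_case_hours = distance_miles / best_case_speed_mph   # T = D / R
      can_make_it = best_case_hours <= hours_until_close

      print(f"best-case walk: {best_case_hours:.1f} h; time available: {hours_until_close:.1f} h")
      print("worth setting out:", can_make_it)
      ```

      Even this crude model is enough to rule the trip out, which is the sense in which it is useful for that decision.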

      • Bernd Felsche

        Steven,

        I spent the best part of a decade designing, writing and testing software for structural analysis. While those models took into account the typical variability of dimensions, material properties and loads, they were still deterministic and ignored a gamut of real-world factors which had to be considered by the design engineer (usually me) in a different way.

        When constructing a design “close to the edge” of the notional design capacity, it was necessary to perform structural testing, as the design would “fail” when modelled according to traditional design standards. Test loads were applied in a “non-standard” way, but closer to how the loads were actually applied in nature. For products of large production runs, material samples were also taken and tested prior to structural testing, with material properties re-entered into the model to try (a) to predict when the structure would actually fail in testing and (b) to validate the model.

        Although the structures were built of steel, a “well-known” material, non-linear elasticity was difficult to model as specifications thereof were non-existent. The low-end (specified) elasticity, represented by Young’s modulus, was used, which works for small deflections within the lower elastic range, but as strain increases, non-linearity increases. In a structure utilising large deflections to take the load (such as a power transmission pole), that tends to increase the secondary loads on the structure which e.g. lead to buckling. The last 5% to 0% of (design) elastic strength was always unpredictably variable.

        The model was only useful if one understood the assumptions, limits, quality of data and the purpose of the program. If one didn’t understand all of those things, then the model was just like a computer game. This I observed when the sales manager (not an engineer) at one stage started to use the program to “design” products on which to quote… he twiddled with the parameters until the design “looked right”. Nobody got killed.

        The model for a long walk to the shops has implicit assumptions. It’s not useful for anything but a very limited universe. If the model tells me that I can walk to the shop before closing if I leave at 4 p.m., what happens if there’s an imperfection in the universe? What are the consequences if the predictions of the model are wrong?

        “Deterministic” models such as the walk, or even the structural analysis, are one-off calculations. There’s no iteration which can lead to an increase in errors (not strictly true for the structural analysis, because that did iterate until the strain in the structure balanced the quasi-static loads; but those were only 2 to 5 iterations unless there was a catastrophic failure): errors from rounding, errors from assumptions that no longer hold true as values shift on subsequent iterations.

    • Sashka

      “For models to be useful for anything more than amusement, one has to incorporate all factors that are plausibly significant. ”

      Quite the opposite.

  28. ThePhysicsGuy

    Models, if I understand correctly, are essentially a scientific hypothesis of, for example, how a real-world physical system works, only in computer code. And as with any scientific hypothesis, it must be challenged by the rigors of the scientific method. A major component of the scientific method is, “test your hypothesis”.

    I’ve read the IPCC Summary for Policymakers, Climate Change 2007: The Physical Science Basis. Contribution of Working Group I. Figure SPM-5 of the report displays various model scenarios and projected temperature increases all the way to the year 2100. With these scenarios, the IPCC Working Group II then evaluates their impacts, assigning various “confidence” or “likelihood” scales to these impacts.

    My question is this. How can the IPCC make model projections all the way to the year 2100 without TESTING the models all the way to the year 2100? It seems to me the IPCC are using hypotheses that cannot be tested (they are not falsifiable). And all the “alarming” impacts of climate change are based on these models.

    Many other scientist have noted the same issue with the IPCC models, including Dr. Roger Pielke Sr.

    Scientists have also tested the AR4 models in hindcast mode to see if they could accurately replicate the known climate of the 20th century, and the models were failures (D. Koutsoyiannis et al 2010).

    This is why I remain a skeptic. Climate science is a relatively new science. I don’t believe scientists have an adequate grasp on how the earth’s climate system functions, much less being able to accurately model it.

    • Steven Mosher

      Imagine an auto maker who wants to simulate a head-on crash.
      You build a model based on physics. You run that code. You get results, say for a 30 mph head-on crash.
      You run a few tests with real cars and crash dummies.
      You calibrate the model.
      Based on the model and the crash dummy test, you predict a rate of driver survival.
      Then you look for real world data to verify both of these.
      Reality does not cooperate and you don’t have exact data on car speed when the crash occurred.
      But the model performs OK.
      Then you are asked to run the model at 90 mph. But you can’t test that in the lab and you have no field data to check.

      Still, your model provides an estimate of passenger survival. Does it have value, despite the fact that you cannot test or verify it exactly? What does it give you? It gives you the best information you have. It’s not right, but it is useful.

      Same with all sorts of models ( how will this building react in an earthquake? what happens when you shoot
      a bullet throgh the planes wing ) that we use in engineering. Not true, but useful. The ‘scientific’ method isnt some sort of panacea.
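      A minimal sketch of that calibrate-then-extrapolate pattern might look like the toy below; the logistic survival curve, the speeds and the survival fractions are all invented for illustration, not real crash data.

```python
# Illustrative sketch only (all values invented, not real crash data): calibrate a toy
# survival model against a few low-speed tests, then query it far outside that range.
import numpy as np
from scipy.optimize import curve_fit

def survival_prob(speed_mph, k, v50):
    """Toy logistic model: probability of driver survival as a function of impact speed."""
    return 1.0 / (1.0 + np.exp(k * (speed_mph - v50)))

# A handful of (made-up) crash-dummy tests at low speeds.
test_speeds = np.array([20.0, 25.0, 30.0, 35.0, 40.0])
observed_survival = np.array([0.99, 0.97, 0.93, 0.85, 0.72])

# Calibrate the model to the low-speed tests.
(k_fit, v50_fit), _ = curve_fit(survival_prob, test_speeds, observed_survival, p0=[0.1, 50.0])

# Inside the calibrated range the model performs OK...
print("predicted survival at 30 mph:", survival_prob(30.0, k_fit, v50_fit))

# ...but 90 mph is pure extrapolation: the fitted curve still returns a number,
# yet nothing in the calibration data constrains how accurate it is out there.
print("predicted survival at 90 mph:", survival_prob(90.0, k_fit, v50_fit))
```

      The 90 mph query still returns a number; whether that number deserves any trust is exactly what is being argued below.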

      • Sashka

        I don’t see how your reasoning supports your conclusion that the model is useful. If you apply the model outside the range of conditions where it was calibrated and tested, you won’t necessarily get useful results. It may be the best at your disposal and yet useless.

      • Bernd Felsche

        It is insufficient to “calibrate” the model. If the vehicle doesn’t behave exactly according to the model, after allowing for tolerances, then it’s back to rethinking the model.

        Physics alone is inadequate to describe material behaviour under increasing rates of deformation. One has to do the materials testing to determine the energy absorption of materials under high rates of deformation. 0.1 mm in one dimension can mean the difference between insufficient energy absorption, just right, and too much stiffness; too stiff increases the acceleration experienced by the occupants, increasing the likelihood and severity of injury.

        Crash survivability is also enormously variable in practice. Standardised testing removes a lot of variables: driver position, mass (and its distribution), restraint application, state of health, what they are wearing, even what they have in their pockets.

      • ThePhysicsGuy

        Steven,
        I understand your point about models used in other disciplines. I’m a registered professional civil engineer by trade. In the area of reinforced concrete beam design, theoretical models were developed based on known properties of steel and concrete. Then actual concrete beams were constructed and tested to failure, to verify the theoretical beam design formulas. We are talking thousands of tests.

        Now in the case of climate models, for example those used by the IPCC, where is the rigorous testing? The earth’s climate system is a million times (wild guess) more complex than a concrete beam. And some climate features, such as ocean circulation patterns, play out over long timescales – a decade or more.

        I have a strong science background, just not in atmospheric physics or climate science. And my skeptical nature as a scientist/engineer is having a hard time believing what the IPCC tells me about climate forecasts based on models. That’s why I happened on this site. So I could maybe expand my knowledge base.

      • Sashka

        Here’s another example.

        Suppose you need a weather forecast for April 1. Your best bet is to run the usual set of weather forecast models. But is it useful?

  29. Anteros

    Tamsin –

    I wonder if your eye caught the recent thread at Climate etc “What can we learn from climate models? Part 2.”
    Any thoughts?

  30. stefanthedenier

    Unless all ESSENTIAL data is included, a simulation is programmed to be misleading. Children simulate firemen in the sandpit – if they use a water pistol, no harm done.

    // snip //

    [I edited your comment for the policy on use of the word “liar”. Whether incomplete models are still useful is on topic though 🙂 – Tamsin]

  31. Peter Major

    I am fairly new to the arguments presented here. I became interested in climate change after visiting Svalbard and Greenland. I was amazed at how much the glaciers had retreated in such a short time, compared with Admiralty charts from ten years ago. Clearly the polar regions were warming, but there seemed to be much debate about the cause.

    I have never understood why this should be. Many satellites have infra-red detectors and, given the potential for harm, it would seem cost-effective even to launch more satellites to measure the heat radiated from the planet at many points around the globe over a decade, and to determine whether the radiated heat is decreasing or increasing. I am sure the radiation from the sun is already measured constantly.

    If it is decreasing then the planet is warming, because radiated heat is increasingly unable to escape as greenhouse gases increase. So within ten years we would know whether we are causing the gases to increase. Well, we could have done. Now, with permafrost melting and giving off methane, we will only be able to establish the rate at which heat is failing to escape. The “radiation budget”, I have heard it called.

    Perhaps someone more knowledgeable can tell me whether or not this measurement has been, or is being, done.

    Much of the argument seems redundant. If the warming is natural, that doesn’t stop it being potentially damaging to human beings, so why speculate about the cause? Why not put that energy and debate into determining what to do about it? Of course, if the radiation monitoring establishes that the radiation budget does not change much over time, then we don’t have to do anything; but it seems to me that it is crucial to know one way or the other, and the only way to know is to take measurements.

    Should we take steps to reduce greenhouse gas emissions? If we use it as an opportunity to develop new technologies and new economic activity, that seems perfectly reasonable to me. If we subsidise people to create inappropriate technology that cannot even recover the greenhouse gases it took to make it, let alone reduce total emissions (something we in the UK are doing now), then that is just lunacy.

  32. Karl Kuhn

    “The important thing about a statistical model is that it only describes and doesn’t explain.”
    Well … I thought statistics was perhaps also a little bit about the testing of theories … nothing to learn from that?

    Well, first of all, I am really happy that I finally found this weblog about climate modelling. I am an agricultural economist myself, with quite some experience in modelling and simulation. I became sceptical of global warming when I became aware of the sheer size of the multi-billion-dollar climate change industry that feeds on the AGW narrative. I see two camps of very different size and resources that claim different facts and insights on all kinds of levels along the cornerstones of the narrative … temperature trends, climate history, climate sensitivity, impacts of climate change on whatnot, costs and benefits of AGW mitigation and adaptation.

    The bibles of the AGW industry are the IPCC reports. Their relevance is inseparably connected to model-based climate … projections, predictions … it is useless to discuss the fine difference between these two concepts. Were these models (it is a model club, actually) to project or predict that global temperature would not change due to rising CO2 levels, there would be no further need to sustain the AGW industry at its current size. We could use those billions for a better purpose, or just for a party, and thousands of bright consensus scientists could use their brainpower to solve other urgent problems, e.g. overcoming poverty.

    But these models have been telling basically the same story for 20 years. So I am really interested in how these models work.

    A good model should be rooted in sound statistics. (I am aware that I am grossly simplifying things in the following …) Before you can APPLY something resembling a climate sensitivity in a simulation model, you first have to ESTIMATE it, or whatever lies behind it. Regarding global or regional temperature and CO2, that’s obviously a nightmare! While temperature fluctuates wildly (a high variance, perhaps driven by a trend or oscillations), CO2 just rises steadily. Most of the information that comes along with temperature and other fluctuations can’t be explained by CO2, so it is very difficult to establish a cause-effect link statistically. So you can only explain a longer-term upward residual trend in temperatures with CO2, and ONLY if you have understood the entire rest of the climate system VERY well. If you miss any other important variable or process, your residual claim about CO2 is bunk. Bottom line: the CO2 hypothesis is very difficult to test convincingly.
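    To illustrate that omitted-variable worry in the simplest possible terms, here is a toy regression in which every number is invented: a slow oscillation left out of the model leaks into the coefficient estimated for the smoothly rising forcing.

```python
# Toy illustration only (every number invented): why a residual-trend attribution is
# fragile when a slow process is omitted. A smooth "CO2" series and a slow oscillation
# both contribute to a noisy "temperature"; leaving the oscillation out of the
# regression lets part of it leak into the coefficient estimated for CO2.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
t = years - 1900

co2 = 300.0 + 0.01 * t ** 2                           # smooth, steadily rising forcing proxy
oscillation = 0.3 * np.sin(2 * np.pi * t / 70.0)      # slow internal mode (unknown in practice)
noise = rng.normal(0.0, 0.1, years.size)              # year-to-year weather noise

true_sensitivity = 0.005                               # invented "degrees per ppm"
temp = true_sensitivity * (co2 - 300.0) + oscillation + noise

# Regression that includes the oscillation vs. one that omits it.
X_full = np.column_stack([co2 - 300.0, np.sin(2 * np.pi * t / 70.0), np.ones_like(t)])
X_naive = np.column_stack([co2 - 300.0, np.ones_like(t)])

beta_full, *_ = np.linalg.lstsq(X_full, temp, rcond=None)
beta_naive, *_ = np.linalg.lstsq(X_naive, temp, rcond=None)

print("true sensitivity:              ", true_sensitivity)
print("estimate, oscillation included:", beta_full[0])
print("estimate, oscillation omitted: ", beta_naive[0])
# How far the "omitted" estimate drifts depends on how the missing signal happens to
# project onto the rising CO2 series -- which is exactly the worry expressed above.
```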

    But for almost two decades climate scientists have kept saying two things:
    – we know for sure that rising CO2 causes considerable warming ( = our research is important for mankind)
    – but nevertheless we still have to learn a lot about the climate system ( = please keep funding our research)

    Given the nature of the dependent variable (temperature) and the independent variable (CO2), this all does not add up.
    If everything is so certain about AGW, if ‘the science is in’, why do we keep funding it at this level, with ever bigger models that always end up with the same results? I hope I will learn more about this on this blog.

  33. Theo Goodwin

    “Still, your model provides an estimate of passenger survival.
    Does it have value, despite the fact that you cannot test or verify it exactly?
    What does it give you? It gives you the best information you have. It’s not right, but it is useful.”

    You used a rigged example. We know that a 90 mph crash will be worse. We do not need a model to arrive at that conclusion. Try an unrigged example. Try an increase in the intake of illegal steroids by a professional athlete. In this example, we do not know whether the increase will be detrimental to health or performance. Experimentation will be necessary. Suppose the athlete asks you whether you believe that doubling his dose will be helpful and harmless. Are you going to tell him that the best information available is that his current dose has been helpful and harmless? Is the best information you have available useful?

    The worst error in your thinking is that you never consider the fact that modelers are claiming the authority of science for their models when their work is held to no scientific standard whatsoever. Your claim about the scientific method is not worthy of a comic-book version of science, but maybe of a version scrawled with a crayon on a padded wall.

  34. Lindsay Lee

    Hello all. Sorry I’m so late to this party, but after some encouragement, some backseat driving and a boost of confidence I’m finally ready to join in. And to quote Tamsin, ‘grab yourself a cuppa, it’s a long one’ – I realised I have a lot to say!
    For anyone interested, this is me: http://www.see.leeds.ac.uk/people/l.lee
    And this is my job: http://www.see.leeds.ac.uk/research/icas/atmospheric-chemistry-and-aerosols/other-links/aerosol-modelling/current-research/aeros/
    I am a statistician and work in the field of aerosol modelling, after a PhD in vegetation modelling. The core of my work revolves around quantifying uncertainty in such models; in particular, my current role is in the evaluation of such models using emulation – the specifics I’ll save for a later post that I think is on its way. I’d like to share some experiences of working with both statistical models and ‘physical’ models, though I like Ben B’s term ‘mechanistic models’, which I think best describes the ‘physical’ models I work with. I don’t work with GCMs and I’m not particularly interested in expressing my views on the global warming debate.
    I always introduce myself as a statistician when presenting to the ‘scientists’. There are two main reasons for this, apart from the fact that that’s exactly what I am: one, my understanding of the science is limited and I won’t be able to answer the questions that might come from years of experience in aerosol modelling, and two, I have found my train of thought is often different from that of most of the audience. It is also a bit dangerous, to the point where someone actually walked away before I had a chance to say anything else (thankfully the person in question has since become very interested in the work). I think in general this is changing, and certainly where I work the need to use statistics properly is a key priority.
    So, back to the topic, in terms of the blog title…… as a statistician I’ve been hearing this quote for the last ten years and I agree with it. A statistical model, such as a linear model, very rarely fits the data perfectly, but it can still prove useful by telling us about overall trends and so on. For me, the important thing about a statistical model, and statistics in general, is that it provides us with a framework for our thinking, and any assumptions on which conclusions are based are transparent. I’m a Bayesian statistician because I think that incorporating personal beliefs, normally based on vast experience, is an important source of information. I also think that people who say you can’t use a Bayesian framework because it is too subjective are kidding themselves, because every statistical model is in some way subjective – the important thing is that all these subjective ‘choices’ and their implications are declared and can be tested.
    In terms of the model definitions…… I do agree with the definitions, but I think the statistical model can be misunderstood. Without meaning to generalise, I think one of the reasons people switch off when I mention I’m a statistician is that some assume my job is to come in and replace parts of the model with a simpler statistical representation of what physicists have been working on for years – this is absolutely not my job. I think statistical models are essential in the field of climate modelling, but only for assessing the uncertainty, for carrying out more detailed model evaluations than is possible without them, and for having a framework for understanding the ‘usefulness’ of the conclusions that can be drawn from any single model.
    So, how do I think statistical models can help to justify any conclusions made from models? Tamsin has a nice post on some of the work they are doing, and I hope to post on further topics as they come up. In general ( ;o) ) though, using a good statistical design to explore the model within the limits of its parameters in the first instance provides key information on the model’s capabilities, and can be a reasonably quick way of identifying that a model cannot match observations in its current form no matter how long we spend changing parameter values – the model structure is wrong. Emulation is used to ‘interpolate’ the model runs, so that given some probabilistic information on the model parameters and some reasonable assumptions about the model behaviour we can further investigate the model in a probabilistic framework where all the assumptions made are clear and any conclusions drawn can be tested for robustness. The emulation allows us to carry out variance-based sensitivity analysis, which means the uncertainty in the model prediction can be decomposed into its sources; we therefore have quantitative information on the drivers of the model predictions, which can highlight irregular model behaviour and indicate which parts of the model might be the focus of future research. In this case I have not looked at the model compared to observations or attempted to calibrate it. Calibration is a crucial next step. I’m not sure I understand how useful calibration is before you are comfortable with the capabilities of the model you are trying to calibrate – this is a discussion I’m sure will come up on the blog and I look forward to it.
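    For anyone who would like to see that design–emulate–decompose workflow in miniature, here is a rough sketch; the two-parameter “simulator”, the Gaussian-process settings and the crude Monte Carlo sensitivity estimate are all stand-ins for illustration rather than any particular research code.

```python
# A minimal sketch of the workflow described above: space-filling design -> runs of a
# toy "simulator" -> Gaussian-process emulator -> variance-based sensitivity analysis
# on the emulator. The toy simulator and all settings are invented for illustration.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_simulator(x):
    """Stand-in for an expensive model: 2 input parameters, 1 output."""
    return np.sin(3.0 * x[:, 0]) + 0.3 * x[:, 1] ** 2

# 1. Space-filling (Latin hypercube) design over the two model parameters in [0, 1]^2.
design = qmc.LatinHypercube(d=2, seed=1).random(n=40)
runs = toy_simulator(design)

# 2. Fit an emulator to the design runs so we can "interpolate" the simulator cheaply.
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
emulator.fit(design, runs)

# 3. Variance-based sensitivity: estimate each parameter's main-effect variance by
#    conditioning on it and averaging over the other parameter (crude Monte Carlo).
rng = np.random.default_rng(2)
base = rng.random((2000, 2))
total_var = emulator.predict(base).var()

for i in range(2):
    cond_means = []
    for g in np.linspace(0.0, 1.0, 25):
        pts = rng.random((200, 2))
        pts[:, i] = g                      # fix parameter i, vary the other
        cond_means.append(emulator.predict(pts).mean())
    main_effect_var = np.var(cond_means)
    print(f"parameter {i}: approx first-order sensitivity {main_effect_var / total_var:.2f}")
```

    The sensitivity ratios printed at the end are the quantitative “drivers of the model predictions” mentioned above, just computed on a toy problem.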
    Anyway, thanks for sticking with me if you did, especially after my statistician confession ;o)
    Til the next time,
    Lindsay.