Category: blogging

We have nothing to fear

[This is a mirror of a post published at PLOS. Formatting may be better over there.]

 

I’m scared.

I must be, because I’ve been avoiding writing this post for some time, when previously I’ve been so excited to blog I’ve written until the early hours of the morning.

I’m a climate scientist in the UK. I’m quite early in my career: I’ve worked in climate science for six and a half years since finishing my PhD in physics. I’m not a lecturer or a professor; I’m a researcher with time-limited funding. And in the past year or so I’ve spent a lot of time talking about climate science on Twitter, on my blog, and in the comments sections of a climate sceptic blog.

So far I’ve been called a moron, a grant-grubber, disingenuous, and Clintonesque (they weren’t a fan: they meant hair-splitting), and I’ve had my honesty and scientific objectivity questioned. I’ve been told I’m making a serious error, a “big, big mistake”, that my words will be misunderstood and misused, and that I have been irritating in imposing my views on others. You might think these insults and criticisms were all from climate sceptics disparaging my work, but those in the second sentence are from a professor in climate change impacts and a climate activist. While dipping my toes in the waters of online climate science discussion, I seem to have been bitten by fish with, er, many different views.

I’m very grateful to PLOS for inviting me to blog about climate science, but it exposes me to a much bigger audience. Will I be attacked by big climate sceptic bloggers? Will I be deluged by insults in the comments, or unpleasant emails, from those who want me to tell a different story about climate change? More worryingly for my career, will I be seen by other climate scientists as an uppity young (ahem, youngish) thing, disrespectful or plain wrong about other people’s research? (Most worrying: will anyone return here to read my posts?)

I’m being a little melodramatic. But in the past year I’ve thought a lot about Fear. Like many, I sometimes find myself with imposter syndrome, the fear of being found out as incompetent, which is “commonly associated with academics”. But I’ve also been heartened by recent blog posts encouraging us to face fears of creating, and of being criticised, such as this by Gia Milinovich (a bit sweary):

“You have to face your fears and insecurity and doubt. […] That’s scary. That’s terrifying. But doing it will make you feel alive.”

Fear is a common reaction to climate change itself. A couple of days ago I had a message from an old friend that asked “How long until we’re all doomed then?” It was tongue-in-cheek, but there are many that are genuinely fearful. Some parts of the media emphasise worst case scenarios and catastrophic implications, whether from a desire to sell papers or out of genuine concern about the impacts of climate change. Some others emphasise the best case scenarios, reassuring us that everything will be fine, whether from a desire to sell papers or out of genuine concern and frustration about the difficulties of tackling climate change.

Never mind fear: it can all be overwhelming, confusing, repetitive. You might want to turn the page, to change the channel. Sometimes I’m the same.

I started blogging to try and find a new way of talking about climate science. The title of my blog is taken from a quote by a statistician:

“essentially, all models are wrong, but some are useful” – George E. P. Box (b 1919)

By “model” I mean any computer software that aims to simulate the Earth’s climate, or parts of the planet (such as forests and crops, or the Antarctic ice sheet), which we use to try to understand and predict climate changes and their impacts in the past and future. These models can never be perfect; we must always keep this in mind. On the other hand, these imperfections do not mean they are useless. The important thing is to understand their strengths and limitations.

I want to focus on the process, the way we make climate predictions, which can seem mysterious to many (including me, until about a month before starting my first job). I don’t want to try and convince you that all the predictions are doom and gloom, or conversely that everything is fine. Instead I want to tackle some of the tricky scientific questions head-on. How can we even try to predict the future of our planet? How confident are we about these predictions, and why? What could we do differently?

When people hear what I do, one of the first questions they ask is often this:

“How can we predict climate change in a hundred years, when we can’t even predict the weather in two weeks?”

To answer this question we need to define the difference between climate and weather. Here’s a good analogy I heard recently, from J. Marshall Shepherd:

“Weather is like your mood. Climate is like your personality.”

And another from John Kennedy:

“Practically speaking: weather’s how you choose an outfit, climate’s how you choose your wardrobe.”

Climate, then, is long-term weather. More precisely, climate is the probability of different types of weather.

Why is it so different to predict those two things? I’m going to toss a coin four times in a row. Before I start, I want you to predict what the four coin tosses are going to be: something like “heads, tails, heads, tails”. If you get it right, you win the coin*. Ready?

[ four virtual coin tosses…]

[Image: a 50p coin on a cafe table]

[ …result is tails, tails, tails, heads ]

Did you get it right? I’m a nice person, so I’m going to give you another chance. I’m going to ask: how many heads in the next four?

[ four more virtual coin tosses… ]

[ …result is two heads out of four ]

The first of these is like predicting weather, and the second like climate. Weather is a sequence of day-by-day events, like the sequence of heads and tails. (In fact, predicting a short sequence of weather is a little easier than predicting coin tosses, because the weather tomorrow is often similar to today). Climate is the probability of different types of weather, like the probability of getting heads.

If everything stays the same, then the further you go into the future, the harder it is to predict an exact sequence and the easier it is to predict a probability. As I’ll talk about in later posts, everything is not staying the same… But hopefully this shows that trying to predict climate is not an entirely crazy idea in the way that the original question suggests.
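If you like to tinker, here is a minimal Python sketch of the same idea (the trial count and the particular guesses are just illustrative): it simulates many sets of four tosses, and counts how often a fixed guess at the exact sequence comes up against how often a guess at the number of heads does.

```python
import random

def toss(n):
    """Return a sequence of n fair coin tosses, e.g. ['T', 'T', 'T', 'H']."""
    return [random.choice("HT") for _ in range(n)]

trials = 100_000
guess_sequence = ["H", "T", "H", "T"]   # the "weather" guess: an exact sequence
guess_heads = 2                         # the "climate" guess: how many heads

exact_hits = 0
count_hits = 0
for _ in range(trials):
    result = toss(4)
    if result == guess_sequence:
        exact_hits += 1
    if result.count("H") == guess_heads:
        count_hits += 1

print(f"Exact sequence right: {exact_hits / trials:.3f}  (theory 1/16 = {1/16:.3f})")
print(f"Number of heads right: {count_hits / trials:.3f}  (theory 6/16 = {6/16:.3f})")
```

Whichever sequence you pick, you are right about one time in sixteen; “two heads out of four” is right about six times in sixteen. Add more tosses and the fraction of heads only gets easier to pin down (the law of large numbers), while the exact sequence only gets harder: the weather/climate distinction in miniature.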

My blog posts here at PLOS will be about common questions and misunderstandings in climate science, topical climate science news, and my own research. They won’t be about policy or what actions we should take. I will maintain my old blog allmodelsarewrong.com: all posts at PLOS will also be mirrored there, and some additional posts that are particularly technical or personal might only be posted there.

At my old blog we’ve had interesting discussions between people from across the spectrum of views, and I hope to continue that here. To aid this I have a firm commenting policy:

  • be civil; do not accuse; do not describe anyone as a denier (alternatives: sceptic, dissenter, contrarian), liar, fraud, or alarmist; do not generalise or make assumptions about others;
  • interpret comments in good faith; give others the benefit of the doubt; liberally sprinkle your comments with good humour, honesty, and, if you like them, cheerful emoticons, to keep the tone friendly and respectful;
  • stay on-topic.

I’m extremely happy to support PLOS in their commitments to make science accessible to all and to strengthen the scientific process by publishing repeat studies and negative results. I’m also very grateful to everyone that has supported and encouraged me over the past year: climate scientists and sceptics, bloggers and Tweeters. Thank you all.

And thank you for reading. My next post will be about another big question in climate science:

How can we do scientific experiments on our planet?

See you next time.

* You don’t, but if you were a volunteer at one of my talks you would.

Many dimensions to life and science

This post is timed to coincide with a meeting tomorrow, the Royal Meteorological Society’s “Communicating Climate Science”. If you are going, do come and say hello. If you aren’t, look out for me tweeting about it from 2-5.30pm BST.

On not blogging

I haven’t forgotten about you. I’ve still been churning over ideas and wanting to share them with you. I’ve thought of all of you that comment here, and those that silently lurk, whether friends, family, scientists, sceptics, passers-by, or a combination of these. But two big things this year have had to take priority over blogging (and the even more time-consuming process of moderating and replying to comments).

The first was a deadline. As some of you know well, the Intergovernmental Panel on Climate Change (IPCC) produces a report summarising the state-of-the-art in climate science research, and related topics, about every six years. They do this so policymakers have a handy (in practice, enormous and not very handy) reference to the evidence base and latest predictions. The IPCC set cut-off dates for including new research: one date for submission to journals, and another for acceptance after the peer-review process. The first of these dates was the 31st July this year. Translation: “try to finish and write up every piece of work you’ve ever started by this date”. Not every climate scientist chose to do this. But the project I work for, ice2sea, actually had it written into a contract with its funders, the European Union. We had no choice but to submit whatever was our current state-of-the-art in sea level predictions. I was a co-author of six papers* finished and submitted during June and July, and had several other studies on the go that didn’t make the deadline. So it was a rather intense time, and science had to take priority over talking about science.

The second was personal. I hesitated about whether to say this here. But part of my motivation for being a climate scientist in the public eye was to show the human side. And I also wanted to let you know that this blog is so important to me, has been so transformative, that it took something very big to keep me away. My husband and I separated two months ago.

I’m back, and I’m preparing for a big move. The US-based publisher and organisation PLoS (Public Library of Science) has invited me to be their climate blogger. It’s a fantastic opportunity to gain a big audience (more than 200,000 visitors per month, and a feed to Google News). I’m very happy to support PLoS because they publish open access journals, and because one of these (PLoS ONE) goes even further in its commitment to transparency in science. It will publish anything scientifically valid, whether or not it is novel. This might not sound important, or even a good idea, but it is an essential counter to the modern problem that plagues journals: that of only publishing new results, and not repeat studies. For the scientific method to work, we need studies that repeat and reproduce (or contradict) previous research. Otherwise we risk errors, chance findings, and very occasionally fraud, remaining unnoticed for years, or forever. I’m hosted at PLoS from the second week in December and will be posting twice a month.

The first post at PLoS will be a (long overdue) introduction to predicting climate change. It will probably be based around a talk I gave at the St Paul’s Way summer science school, at which I was the final speaker, which made Prof Brian Cox my warm-up act.

In other news, I talked about the jet stream and climate change live on BBC Wiltshire (9 mins), which was well received at the climate sceptic site Bishop Hill, and did a live Bristol radio show, Love and Science (1 hour). I also returned to my particle physics roots, with a Radio 4 interview about the discovery of the Higgs Boson (3 mins).

Our new(-ish) paper

Now the science bit. This is an advertisement for a paper we published in August:

Stephens E.M., Edwards T.L. and Demeritt D. (2012). Communicating probabilistic information from climate model ensembles—lessons from numerical weather prediction. WIREs Clim Change 2012, 3: 409-426.

It’s paywalled, but I can send a copy to individuals if they request it. Liz Stephens is a colleague and friend from my department at Bristol who did a great study with the UK Met Office and David Spiegelhalter on the interpretation of probability-based weather forecasts, using an online game about an ice cream man. I’ve never met David Demeritt, except in one or two Skype video calls. He’s interested in, amongst other things, how people interpret flood forecasts. I haven’t run this post past them, but hopefully they will comment below if they have things to add or correct.

We noticed there was quite a bit of research on how well people understand and make decisions using weather forecasts, such as the probability of rainfall, and uncertainty in hurricane location, but not much on the equivalents in climate change. There have been quite a few papers, particularly in the run-up to the new IPCC report, that talk in general terms about how people typically interpret probability, uncertainty and risk, and about some of the pitfalls to avoid when presenting this information. But very few actual studies on how people interpret and make decisions from climate change predictions specifically. We thought we’d point this out, and draw some comparisons with other research areas, including forecasting of hurricanes, rain, and flooding.

Ensembles

The ‘ensembles’ in the title are a key part of predicting climate and weather. An ensemble is a group, a sample of different possibilities. Weather forecasts have been made with ensembles for many years, to help deal with the problem of our chaotic atmosphere. The most well-known explanation of chaos is the ‘butterfly effect’. If a butterfly stamps its foot in Brazil, could it cause a tornado in Illinois? Chaos means: small changes can have a big effect. A tiny change in today’s weather could lead to completely different weather next week. And in the same way, a tiny error in our measurements of today’s weather could lead to a completely different forecast of the weather next week. But errors and missing measurements are inevitable. So we try to account for chaotic uncertainty by making forecasts based on several slightly different variations on today’s weather. This is one type of ‘ensemble forecast’. It’s simply a way of dealing with uncertainty. Instead of one prediction, we make many. We hope that the ensemble covers the range of possibilities. Even better, we hope that the most common prediction in the ensemble (say, 70% of them predict a storm) is actually the most likely thing to happen. This gives us an estimate of the probability of different types of weather in the future.
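For the curious, here is a toy Python sketch of an ‘initial conditions’ ensemble (nothing like real forecasting code: the logistic map stands in for the chaotic atmosphere purely for illustration). Each member starts from a slightly different estimate of ‘today’; a single forecast soon goes astray because of chaos, but the ensemble still gives a usable probability for a simple kind of ‘weather’.

```python
import random

def step(x, r=3.9):
    """One step of the logistic map, a simple chaotic system standing in for the atmosphere."""
    return r * x * (1.0 - x)

def forecast(x0, days):
    x = x0
    for _ in range(days):
        x = step(x)
    return x

truth_today = 0.600      # the true (never exactly knowable) state of today's weather
obs_error = 1e-4         # tiny measurement error
members = 1000           # ensemble size
days_ahead = 30

# A single "best guess" forecast from one imperfect observation diverges from reality:
single = forecast(truth_today + obs_error, days_ahead)
actual = forecast(truth_today, days_ahead)
print(f"Single forecast: {single:.3f}   what actually happens: {actual:.3f}")

# An ensemble of slightly different versions of today still estimates a probability,
# e.g. of the state ending up above 0.5 (call that "a storm"):
ensemble = [forecast(truth_today + random.uniform(-obs_error, obs_error), days_ahead)
            for _ in range(members)]
p_storm = sum(x > 0.5 for x in ensemble) / members
print(f"Ensemble probability of 'storm': {p_storm:.2f}")
```

The particular numbers mean nothing; the point is that many imperfect starting points, run forward together, turn an unpredictable trajectory into a predictable probability.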

Ensembles are at the heart of our attempts to describe how sure we are about our predictions. They are used to explore an uncertain future: what are the bounds of possibility? What is plausible, and what is implausible? Some climate prediction ensembles, like the weather forecast ensemble above, relate to the information we feed into the model. Others relate to imperfections in the models themselves. Some specific examples are in the footnotes below.**

The question we ask in our paper is: how should we express these big, complex ensemble predictions? There are too many dimensions to this problem to fit on a page or screen. Our world is three dimensional. Add in time, and it becomes four. There are very many aspects of climate to consider, such as air temperature, rainfall, air pressure, wind speed, cloud cover, and ocean temperature. We might have a prediction for each plausible input value, and a prediction for each plausible variation of the model itself. And one of these ensembles is produced for each of the different climate models around the world. Frankly, ensembles are TMI***.

To simplify or not to simplify

Scientists often think that the more information they can give, the better. So they dump all the raw ensemble predictions on the page. It’s a natural instinct: it feels transparent, honest, allows people to draw their own conclusions. The problem is, people are a diverse bunch. Even within climate science, they have different knowledge and experience, which affects their interpretation of the raw data. When you broaden the audience to other scientists, to policymakers, businesses, the general public, you run the risk of generating as many conclusions as there are people. Worse still, some can be overwhelmed by a multitude of predictions and ask “Which one should I believe?”

To avoid these problems, then, it seems the expert should interpret the ensemble of predictions and give them in a simplified form. This is the case in weather forecasting, where a meteorologist looks at an ensemble forecast and translates it based on their past experience. It works well because their interpretations are constantly tested against reality. If a weather forecaster keeps getting it wrong, they’ll be told about it every few hours.

This doesn’t work in climate science. Climate is long-term, a trend over many years, so we can’t keep testing the predictions. If we simplify climate ensembles too much, we risk hiding the extent of our uncertainty.

Our conclusions can be summed up by two sentences:

a) It is difficult to represent the vast quantities of information from climate ensembles in ways that are both useful and accurate.

b) Hardly anyone has done research into what works.

We came up with a diagram to show the different directions in which we’re pulled when putting multi-dimensional ensemble predictions down on paper. These directions are:

  1. “richness”: how much information we give from the predictions, i.e. whether we simplify or summarise them. For example, we could show a histogram of all results from the ensemble, or we could show just the maximum and minimum.
  2. “saliency”****: how easy it is to interpret and use the predictions, for a particular target audience. Obviously we always want this to be high, but it doesn’t necessarily happen.
  3. “robustness”: how much information we give about the limitations of the ensemble. For example, we can list all the uncertainties that aren’t accounted for. We can show maps in their original pixellated (low resolution) form, like the two maps shown below, rather than a more ‘realistic-looking’ smoothed version, like these examples.

Here’s the diagram:

The three ‘dimensions’ are connected with each other, and often in conflict. Where you end up in the diagram depends on the target audience, and the nature of the ensemble itself. Some users might want, or think they want, more information (richness and robustness) but this might overwhelm or confuse them (saliency). On the other hand, climate modellers might reduce the amount of information to give a simpler representation, hoping to improve understanding, but this might not accurately reflect the limitations of the prediction.

In some cases it is clear how to strike a balance. I think it’s important to show the true nature of climate model output (blocky rather than smoothed maps), even if they are slightly harder to interpret (you have to squint to see the overall patterns). Otherwise we run the risk of forgetting that – cough – all models are wrong.

But in other cases it’s more difficult. Giving a map for every individual prediction in the ensemble, like this IPCC multi-model example, shows the extent of the uncertainty. But if this is hundreds or thousands of maps, is this still useful? Here we have to make a compromise: show the average map, and show the uncertainty in other ways. The IPCC deals with this by “stippling” maps in areas where the ensemble predictions are most similar; perhaps the unstippled areas still look quite certain to the hasty or untrained eye. I like the suggestion of Neil Kaye, fading out the areas where the ensemble predictions disagree (examples of both below).
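Here is a rough Python/matplotlib sketch of those two display choices on made-up data (a synthetic ‘ensemble’ and a deliberately crude agreement measure, not real model output): stippling the mean map where members agree, and fading the colours where they disagree.

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import Normalize

rng = np.random.default_rng(0)

# A made-up ensemble of 20 "temperature change" maps on a coarse grid:
# the signal grows towards the east, and the ensemble spread shrinks towards the east.
members, ny, nx = 20, 30, 60
signal = np.linspace(0.0, 3.0, nx) * np.ones((ny, 1))
noise_scale = np.linspace(2.0, 0.2, nx)
ensemble = signal + rng.normal(0.0, 1.0, (members, ny, nx)) * noise_scale

mean = ensemble.mean(axis=0)
spread = ensemble.std(axis=0)
agree = spread < 1.0                         # crude stand-in for "members agree here"

fig, axes = plt.subplots(1, 2, figsize=(10, 3.5))

# Option 1: show the ensemble mean, stippled where the members agree (IPCC-style).
axes[0].pcolormesh(mean, cmap=cm.coolwarm)
yy, xx = np.where(agree)
axes[0].scatter(xx + 0.5, yy + 0.5, s=1, c="k")
axes[0].set_title("Stippled where members agree")

# Option 2: show the ensemble mean, fading the colour where the members disagree.
rgba = cm.coolwarm(Normalize(mean.min(), mean.max())(mean))
rgba[..., 3] = np.clip(1.5 - spread, 0.1, 1.0)   # larger spread -> more transparent
axes[1].imshow(rgba, origin="lower", aspect="auto")
axes[1].set_title("Faded where members disagree")

plt.tight_layout()
plt.show()
```

Either way the reader sees one map rather than hundreds, with the agreement information layered on top; the question is what that layering hides or reveals for a given audience.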


This brings us to the second point of our conclusions. The challenge is to find the right balance between these three dimensions: to understand how the amount of information given, including the limitations of the ensemble, affects the usefulness for various audiences. Do people interpret raw ensemble predictions differently to simplified versions of the same data? Do full ensemble predictions confuse people? Do simplifications lead to overconfidence?

There is very little research on what works. In forecasting rainfall probabilities and hurricanes, there have been specific studies to gather evidence, like workshops to find out how different audiences make decisions when given different representations of uncertainty. People have published recommendations for how to represent climate predictions, but these are based on general findings from social and decision sciences. We need new studies that focus specifically on climate. These might need to be different to those in weather-related areas for two reasons. First, people are given weather forecasts every day and interpret them based on their past experiences. But they are rarely given climate predictions, and have no experience of their successes and failures because climate is so long-term. Second, people’s interpretation of uncertain predictions may be affected by the politicisation of the science.

To sum up: we can learn useful lessons from weather forecasting about the possible options for showing multi-dimensional ensembles on the page, and about ways to measure what works. But the long-term nature of climate creates extra difficulties in representing predictions, just as it does in making them.

 

* Papers submitted for the IPCC Fifth Assessment Report deadline:

  • Ritz, C., Durand, G., Edwards, T.L., Payne, A.J., Peyaud, V. and Hindmarsh, R.C.A. Bimodal probability of the dynamic contribution of Antarctica to future sea level. Submitted to Nature.
  • Shannon, S.R., A.J. Payne, I.D. Bartholomew, M.R. van den Broeke, T.L. Edwards, X. Fettweis, O. Gagliardini, F. Gillet-Chaulet, H. Goelzer, M. Hoffman, P. Huybrechts, D. Mair, P. Nienow, M. Perego, S.F. Price, C.J.P.P Smeets, A.J. Sole, R.S.W. van de Wal and T. Zwinger. Enhanced basal lubrication and the contribution of the Greenland ice sheet to future sea level rise. Submitted to PNAS.
  • Goelzer, H., P. Huybrechts, J.J. Fürst, M.L. Andersen, T.L. Edwards, X. Fettweis, F.M. Nick, A.J. Payne and S. Shannon. Sensitivity of Greenland ice sheet projections to model formulations. Submitted to Journal of Glaciology.
  • Nick, F.M., Vieli, A., Andersen, M.L., Joughin, I., Payne, A.J., Edwards, T.L., Pattyn, F. and Roderik van de Wal. Future sea-level rise from Greenland’s major outlet glaciers in a warming climate. Submitted to Nature.
  • Payne, A.J., S.L. Cornford, D.F. Martin, C. Agosta, M.R. van den Broeke, T.L. Edwards, R.M. Gladstone, H.H. Hellmer, G. Krinner, A.M. Le Brocq, S.M. Ligtenberg, W.H. Lipscomb, E.G. Ng, S.R. Shannon , R. Timmerman and D.G. Vaughan. Impact of uncertainty in climate forcing on projections of the West Antarctic ice sheet over the 21st and 22nd centuries. Submitted to Earth and Planetary Science Letters.
  • Barrand, N.E., R.C.A. Hindmarsh, R.J. Arthern, C.R. Williams, J. Mouginot, B. Scheuchl, E. Rignot, S. R.M. Ligtenberg, M, R. van den Broeke, T. L. Edwards, A.J. Cook, and S. B. Simonsen. Computing the volume response of the Antarctic Peninsula ice sheet to warming scenarios to 2200. Submitted to Journal of Glaciology.

** Some types of ensemble are:

  1. ‘initial conditions’: slightly different versions of today’s weather, as in the weather forecasting example above
  2. ‘scenarios’: different possible future storylines, e.g. of greenhouse gas emissions
  3. ‘parameters’: different values for the control dials of the climate model, which affect the behaviour of things we can’t include as specific physical laws
  4. ‘multi-model’: different climate models from the different universities and meteorological institutes around the world

*** Too Much Information

**** Yes, we did reinvent a word, a bit. 

Push button to talk to a scientist

My apologies for the lack of posts recently. I have plenty of topics planned, but no free time right now. Service will resume shortly (ish).

 


How to be Engaging

I’ve started writing my promised post on models used in climate science, but thought I’d get this more topical post out first.

I went to an interesting conference session yesterday on communicating climate science, convened by Asher Minns (Tyndall Centre), Joe Smith (Open University), and Lorraine Whitmarsh (Cardiff University). A few people presented their research into different practices, and the speakers and convenors discussed audience questions afterwards. Paul Stapleton has also blogged about the session here.

A good stand-out point was presented by Mathieu Jahnich: research has found that the public prefer hopeful campaigns (in communicating climate science), not shocking images or negative, hopeless campaigns. I think most of us instinctively know this.

Hebba Haddad, a PhD student from the University of Exeter, spoke on topics close to my heart: the effect of communicating uncertainties in climate science, and the effect of the ‘voice’ in which it is presented. The first relates to the amount of information given about the uncertainty in a prediction: for example, saying “60-80% probability” rather than “70% probability”. The second relates to the phrasing: for example, using the warmer, more friendly and open phrasing of “We…” on an institute website, rather than the cooler, more distant “The centre…”.

She pointed out that scientists, of course, often attempt to transfer as much information as possible (the deficit model – a view that if only enough information were given, people would make rational decisions…), highlight the uncertainties, and use technical language. Science communicators, on the other hand, are more likely to understand their audience, understate uncertainties, convey simpler messages, and use a warmer, friendlier style.

Hebba carried out a study on 152 psychology students. The standout results for me were that:

  1. greater communication of uncertainty reduced belief in climate science;
  2. if little uncertainty is communicated, then the tone makes little difference to the level of engagement;
  3. if a lot of uncertainty is communicated, then a warm tone leads to much greater engagement than a distant tone.

This makes sense: if there is a lot of uncertainty, people use heuristics (short-cuts) to determine their trust in information. These particular students responded well to a personal, friendly tone. And in a later session, someone made the distinction between “relational trust”, which is based on similarity of intentions or values, and “calculative trust”, or “confidence”, based on past behaviour. They said that in everyday situations people tend to make decisions based on calculative trust, but in unfamiliar situations they use relational trust: another heuristic in times of uncertainty.

But this is interesting, because I think a large part of the audience who visit this blog (thank you) contradict these findings. Your trust in the science increases the more I talk about uncertainty! And I think you place greater importance in “calculative” rather than “relational” trust. In other words, you use the past behaviour of the scientist as a measure of trust, not similarity in values. I’ve found that whenever I talk about limitations of modelling, or challenge statements about climate science and impacts that I believe are not robust, my “trust points” go up because it demonstrates transparency and honesty. (See previous post for squandering of some of those points…). Using a warm, polite tone helps a lot, which supports Hebba’s findings. But I would wager that the degree of similarity to my audience is much less important than my ability to demonstrate trustworthiness.

Lorraine commented that Hebba’s finding of the importance of a warm tone is a challenge for scientists, who are used to talking (particularly writing) in a passive tone: “It was found that…” rather than “We found…”. To combat this, and increase public trust, Joe urged climate scientists to be “energetic digital scholars”, “open” and “public.” He thought we should not try to present climate science as “fact” but as “ambitious, unfolding, and uncertain”.

A US scientist in the audience asked for advice on how to engage online in such a polarised debate, and another audience member asked if giving simple messages (without all uncertainties) might compromise public trust in scientists. Joe kindly invited me to comment on these social media and uncertainty aspects. I speedily dumped the contents of my brain onto the room about how this blog and related efforts, giving a transparent, warts-and-all view of science as an unfolding process, had been very successful in increasing trust. In fact I had so much to say that I was asked to stop, would you believe (er, perhaps you would…).

For those of you that don’t trust the IPCC too much, I merely note that Jean-Pascal van Ypersele tapped me on the shoulder after I spoke about the importance of communicating uncertainties transparently, and asked me to email him the blog link…

Some tweeting about the session led to some lovely supportive messages from across the spectrum of opinions (thank you) and also some criticisms by people you might expect to be supportive. I’ve Storified these below.

And finally, Leo Hickman welcomes our ‘Rapunzel’ approach to communication. I was one of the invited palaeoclimate scientists at that meeting (actually, I probably invited myself), and can confirm it was very civil and productive.

 

Storify of the post-session Twitter conversation:

http://storify.com/flimsin/engaging

 

The Sceptical Compass

First, thank you. I have been overwhelmed by the response to this blog, and privileged to host the conversation of ninety five individuals on my first post. Here is a Wordle of the comments (not including my own):

Second, some thoughts on terminology. Over the last year I have started to talk with people who do not agree with the majority view on climate science. And there is no homogeneous “sceptic” viewpoint. No binary grouping, Us and Them. I do use the terms “scientist” and “sceptic” for convenient shorthand (more on this later), but whenever I talk about public engagement I bring up the same points:

a) there is a continuous spectrum of viewpoints;

b) a large number of the unconvinced have numerate backgrounds (off the top of my head, physics, chemistry, computing, engineering, geology and finance seem to come up most frequently);

c) for various reasons, they have lost trust in the way we do, or the way we communicate, our science.

This week I’ve been thinking that the ‘spectrum’ description can be pushed further. If you’re familiar with the Political Compass, you’ll know that it extends the usual left-right political spectrum to a two dimensional graph of left-right and libertarian-authoritarian (if you don’t know it, I recommend you do the quiz). Here’s my proposed equivalent.

The horizontal axis is scepticism: the degree to which one critically evaluates evidence, does not accept arguments from authority, and updates one’s viewpoint according to new information. This is the ‘Approach’ axis.

The vertical is the resulting ‘Conclusion’ axis: the degree to which one is convinced that humans are causing climate change and (if there is some degree of human cause) the scale and speed of that change. The sceptic/scientist shorthand I use corresponds to this axis. I have also started to use the less well-known upholder/dissenter and convinced/unconvinced.

The compass doesn’t include policy preferences, of course.

I’ve marked some examples. I don’t think it is a simple categorisation: like the Political Compass, people can move around through their lifetime, can be in different locations for different topics, and may be ‘smeared out’ vertically in the case of large uncertainty. I am not trying to label anyone here, and these are not rigidly defined regions. This is purely illustrative.

Convinced: horizontally, scientists and many non-scientists aspire to be sceptical; vertically, people in this region are convinced by the majority of these statements (for example, the majority of climate scientists).

Lukewarmer: horizontally, as previous; vertically, somewhat convinced (for example: concluding that humans cause some change but the rate is likely slow or very uncertain).

Unconvinced: horizontally, as previous; vertically, not convinced (for example, concluding there is warming but the human influence is small or negligible).

Believer: horizontally, uncritical and trusting of sources they consider authoritative; vertically, convinced of rapid, intense climate change and impacts caused by humans.

Unbeliever: horizontally, as previous; vertically, not convinced (for example, concluding there is no warming).

For the Bayesian nerds, I’ve just noticed the horizontal axis could be considered the width of one’s prior, and the vertical axis the mode of the resulting posterior.
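To spell that out with the simplest textbook case (purely illustrative, nothing to do with any real climate analysis): take a Gaussian prior $\theta \sim N(\mu_0, \sigma_0^2)$ for some quantity and a single piece of evidence $y$ with error variance $\sigma^2$. The posterior mode is

$$\hat{\theta} = \frac{\sigma_0^2\, y + \sigma^2 \mu_0}{\sigma_0^2 + \sigma^2},$$

so a wide prior (large $\sigma_0$, the critical evaluator who lets the evidence do the work) puts the conclusion close to $y$, while a very narrow prior (tiny $\sigma_0$, trust placed almost entirely in a prior belief or a preferred authority) pins the conclusion near $\mu_0$ almost regardless of the evidence.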

I’ve chosen to put the dots at the vertical extremes for the uncritical side (Believer/Unbeliever) to reflect the fact that people who are not critically evaluating each statement, only trusting in another source or opinion, may be more likely to agree with the extreme ends and see the issues in black & white. I’ve chosen the Sceptical dots to be more moderate in the vertical (Convinced/Lukewarmer/Unconvinced) to reflect the fact that critical evaluations may lead to a more nuanced view with shades of grey. But I think of this as a continuous space.

There are no value judgements intended here. There are several reasons why there is not a one-to-one relationship between critical evaluation and conclusion: access to evidence; availability of time or technical expertise to evaluate it (reliance on judgement of others); general fallibility of humans. Scientists have differing opinions and interpretations of the same evidence, and we are not perfectly critical, so we can be at different levels on the vertical axis. For example:

– a scientist who models the physics of ice sheets might judge the statistically-based (‘semi-empirical’) methods that predict a rapid sea level rise as “not credible”: they would therefore be lower down the vertical scale;

– a scientist might search for an estimate of the current health impacts of climate change and, for lack of time or another reason, use a non-peer-reviewed estimate that reported severe impacts: they would therefore be higher up the vertical scale and further left horizontally.

I’d be interested to hear if people think this is a useful framework. If you don’t like it, please (kindly) suggest changes.

 

Third, the scope of this blog. I said to Peter Gleick that my aims were: to communicate my own research, because I am publicly funded, and because it gives the research greater exposure; to engage sceptics (see above!), and to practice writing for a general audience. This post is already too long, and the time too late, for me to list every topic I intend to cover but it will become apparent as I write posts. Some things I cannot do on this blog:

a) answer every question asked: this will depend on my knowledge and the extent to which I have time to answer (both can be improved by postponing to a later post);

b) address everyone’s problems with climate science: I am only one person, an early career researcher with a lot of things to wrap up by 31st July, and although I try to read outside my area I cannot promise to have the expertise or time to address every issue;

c) comment on policy choices.

I suppose this is just a restating of not pleasing all of the people.

 

Fourth, a comments policy.

So far I have let through every non-spam comment and automatically allowed previous posters to comment. I would like to trust people to be sensible with this and not have to start moderating out comments.

Therefore I ask you to comply with the following:

a) civility is essential;

b) accusations are not to be made;

c) the words denier, liar and fraud are not permitted (this list may increase): see (a) and (b);

d) generalisations are to be avoided;

e) if you have a particular bugbear or issue with earth system model uncertainty that is not related to the post topic please invite us once, perhaps twice, to discuss it in the very suitable Unthreaded section of Bishop Hill;

f) if you have a particular bugbear or issue with some other topic, or with policy, please discuss it elsewhere;

g) interpret comments in good faith: each is from a person, with limited free time, and frazzled nerves, and good intentions;

h) liberally sprinkle your comments with good humour, honesty, and ‘smiley’ or ‘winky’ faces, to keep the tone convivial.

 

Thank you.

All Blog Names are Wrong

As soon as I thought of the name for this blog, I thought I might be on to a good thing. The George Box quote from which it is taken is one I repeat in my public talks and university lectures, to make the points that:

(a) climate* scientists do not believe their models can exactly reproduce the real world; and

(b) climate models are imperfect, but they can still be useful tools to understand the planet.

* I say ‘climate’ because it is more recognisable, but I mean ‘earth system’: the whole or any individual part of the planet. For example, I currently work with glaciologists modelling the ice sheets of Greenland and Antarctica.

Not everyone agreed with my assessment when I asked for opinions on Twitter. I was surprised that a senior academic tried to persuade me, fairly forcefully, not to use the name.

I’ve put most of the conversation here (emphasis mine). It highlights two schools of thinking on how best to communicate climate science and partly reflects, I think, the difference between the relatively calm conversations of the UK and the polarised, antagonistic debates more common in the USA. The scientists over there are attacked and are therefore (understandably) defensive. Over here we are prodded, or huffed at, in the British way, and it is easier to respond candidly.

@flimsin: Probable title of my new blog: allmodelsarewrong.com. (George Box quote). Main point of my job is estimating how wrong. Whaddya think?

Hydrologist Peter Gleick (Pacific Institute) was not keen…

@PeterGleick: @flimsin Title is serious error.Buys into “everything is uncertain” meme.And argument that politicians don’t hear about uncertainties is BS.
@PeterGleick: @flimsin Another comment on your proposed blog title. Look at this essay, especially item 2 on “uncertainty” and “knowns versus unknowns.”

In this essay, Donald Brown writes that the climate ‘disinformation campaign‘ is

a social movement that…consistently uses scientific uncertainty arguments as the basis of its opposition

I started to defend my position…

@flimsin: @PeterGleick I just think we shouldn’t attempt to hide or spin the fact that models are not reality. My research is in quantifying uncerts.
@PeterGleick: @flimsin Of course. Do you really think the climate debate is about scientists claiming models are reality? And do you not see the
@PeterGleick: @flimsin intentional efforts of many to overemphasize uncertainties while ignoring certainties?
@flimsin: @PeterGleick There’s more than one debate. I want to reflect the conversations inside sci community about best ways to quantify uncert.
@flimsin: @PeterGleick More of a publically-accessible blog about my own research than a blog aimed at the public.
@flimsin: @PeterGleick Of course I see it. But I also see ppl in other research areas wanting to know more about how we deal with predictive uncerts.

He pressed the point, asking what kind of people supported me:

@PeterGleick: @flimsin great idea, but title is important, and using the first half of that famous quote would, I think, be big, big, mistake.
@PeterGleick: @flimsin @ret_ward other “climate scientists” think it good idea? Most positive comments I saw weren’t from climate scientists but skeptics.

I pointed out that several climate scientists had approved, including:

@AidanFarrow: @flimsin allmodelsarewrong.com > strongly approve
@icey_mark: @flimsin it sounds a great space for conversations. You’ll have to have your armour on sometimes! Good luck and thanks for engaging
@ed_hawkins: @flimsin Good name! I wouldn’t pick .com though. How about .org instead?
@richardabetts: @flimsin @d_m_hg @ret_ward @Realclim8gate Yep, I really like allmodelsarewrong.com (sub-heading “…but some are more useful than others”)
@clv101: @flimsin Box quote is a great starting place for a blog. Not easy topic to cover well for a broad/public/sceptic audience though. Good luck!

though one was cautious:

@d_m_hg: @flimsin The 2nd part ‘some are useful’ finishes the idea-can it be incorporated somehow? Otherwise you might attract skeptic troublemakers.

(but I do want to attract them!) and Bob Ward, policy and communications director of the London School of Economics’ Grantham Research Institute, politely suggested an alternative:

@ret_ward: @flimsin Some might confuse it with allmodelsareuseless! How about howskillfularemodels?

But this tweet from Peter was the most unexpected:

@PeterGleick: @flimsin Last comment…. not all models are wrong.

Er…pardon? This is the crux of it. How can anyone make that claim? My best guess is that to make his point he is wilfully misinterpreting the word in the way he says others will, i.e. that wrong = useless.

@flimsin: @PeterGleick Sir, it appears we have a profound philosophical disagreement 🙂 Nothing can precisely simulate reality, only approximate.
@PeterGleick: @flimsin Does that make them “wrong?” “Wrong” to you means “uncertain.” “Wrong” to public means “you don’t know what you’re talking about.”
@flimsin: @PeterGleick Exactly – all the better to explain the difference. Better to improve scientific literacy than to patronise, I think.
@PeterGleick: @flimsin But who’s the audience? The public? Policymakers? Other scientists or science communicators? It matters, as does the title.
@flimsin: @PeterGleick All those welcome. 1. Publicly funded -> communicate my research. 2. Research exposure 3. Engage sceptics. 4. Practice writing.

The excellent Richard Betts of the Met Office Hadley Centre put it rather well:

@richardabetts: @PeterGleick @flimsin Which model is right? Please can I have it?
@PeterGleick: @richardabetts @flimsin Richard, which model is “wrong?” Wrong is the wrong term. It’s not what you mean, and it is misunderstood by public.
@flimsin: @PeterGleick @richardabetts All are wrong…better to try and educate that science has shades of grey than try to give appearance of B&W
@PeterGleick: @flimsin @richardabetts I repeat “wrong” is the wrong term. It WILL be misunderstood and misused. Read that essay: rockblogs.psu.edu/climate/

I found this a little heavy-handed. We are all entitled to our opinion, and I didn’t enjoy being shoehorned into someone else’s vision of science communication. I think this is a very dangerous approach, as Richard pointed out:

@richardabetts: @flimsin @ret_ward Be wary of advice “This might be misused by the sceptics” Start of slippery slope from objective science into advocacy.
@richardabetts: @PeterGleick @flimsin Brown says “climate denial machine … has made claims that mainstream climate scientists are corrupt or liars” (cont)
@richardabetts: @PeterGleick @flimsin IMHO only way to combat this piece of disinformation is to prove otherwise by public discussion of science warts & all

As did physicist Jonathan Jones:

@nmrqip: @richardabetts Yep. Lying “to avoid being misunderstood” never ends well @PeterGleick @flimsin

One of the problems we need to overcome is a lack of trust in climate scientists by some members of the public – or even other scientists – by showing that we do science no differently from anybody else. If we start to ‘spin’ the science, to gloss over the known unknowns, then we deserve these accusations.

Anyone that wants to talk about the ways we estimate confidence in predictions of the future (or studies of the past) is very welcome to come here and discuss it, at any level. Anyone that wants to misrepresent climate science by cherry-picking snippets of sentences will do that regardless, no matter what the blog name or content.

Conclusion: if my blog causes this much debate before I’ve written anything, I think I’ve chosen the right name…

Hello world!

My first blog was about knitting. It had one post. I’m hoping to stick this one out for longer.

More soon…