How to be Engaging

I’ve started writing my promised post on models used in climate science, but thought I’d get this more topical post out first.

I went to an interesting conference session yesterday on communicating climate science, convened by Asher Minns (Tyndall Centre), Joe Smith (Open University), and Lorraine Whitmarsh (Cardiff University). A few people presented their research into different practices, and the speakers and convenors discussed audience questions afterwards. Paul Stapleton has also blogged about the session here.

A good stand-out point was presented by Mathieu Jahnich: research has found that, in communicating climate science, the public prefer hopeful campaigns to shocking images or negative, hopeless campaigns. I think most of us instinctively know this.

Hebba Haddad, a PhD student from the University of Exeter, spoke on topics close to my heart: the effect of communicating uncertainties in climate science, and the effect of the ‘voice’ in which it is presented. The first relates to the amount of information given about the uncertainty in a prediction: for example, saying “60-80% probability” rather than “70% probability”. The second relates to the phrasing: for example, using the warmer, more friendly and open phrasing of “We…” on an institute website, rather than the cooler, more distant “The centre…”.

She pointed out that scientists, of course, often attempt to transfer as much information as possible (the deficit model – a view that if only enough information were given, people would make rational decisions…), highlight the uncertainties, and use technical language. Science communicators, on the other hand, are more likely to understand their audience, understate uncertainties, convey simpler messages, and use a warmer, friendlier style.

Hebba carried out a study of 152 psychology students. The stand-out results for me were that:

  1. greater communication of uncertainty reduced belief in climate science;
  2. if little uncertainty is communicated, then the tone makes little difference to the level of engagement;
  3. if a lot of uncertainty is communicated, then a warm tone leads to much greater engagement than a distant tone.

This makes sense: if there is a lot of uncertainty, people use heuristics (short-cuts) to determine their trust in information. These particular students responded well to a personal, friendly tone. And in a later session, someone made the distinction between “relational trust”, which is based on similarity of intentions or values, and “calculative trust”, or “confidence”, based on past behaviour. They said that in everyday situations people tend to make decisions based on calculative trust, but in unfamiliar situations they use relational trust: another heuristic in times of uncertainty.

But this is interesting, because I think a large part of the audience who visit this blog (thank you) contradict these findings. Your trust in the science increases the more I talk about uncertainty! And I think you place greater importance on “calculative” rather than “relational” trust. In other words, you use the past behaviour of the scientist as a measure of trust, not similarity in values. I’ve found that whenever I talk about the limitations of modelling, or challenge statements about climate science and impacts that I believe are not robust, my “trust points” go up, because it demonstrates transparency and honesty. (See the previous post for the squandering of some of those points…) Using a warm, polite tone helps a lot, which supports Hebba’s findings. But I would wager that the degree of similarity to my audience is much less important than my ability to demonstrate trustworthiness.

Lorraine commented that Hebba’s finding of the importance of a warm tone is a challenge for scientists, who are used to talking (and particularly writing) in the passive voice: “It was found that…” rather than “We found…”. To combat this, and increase public trust, Joe urged climate scientists to be “energetic digital scholars”, “open” and “public”. He thought we should not try to present climate science as “fact” but as “ambitious, unfolding, and uncertain”.

A US scientist in the audience asked for advice on how to engage online in such a polarised debate, and another audience member asked if giving simple messages (without all the uncertainties) might compromise public trust in scientists. Joe kindly invited me to comment on these social media and uncertainty aspects. I speedily dumped the contents of my brain onto the room: how this blog and related efforts, by giving a transparent, warts-and-all view of science as an unfolding process, had been very successful in increasing trust. In fact I had so much to say that I was asked to stop, would you believe (er, perhaps you would…).

For those of you who don’t trust the IPCC too much, I merely note that Jean-Pascal van Ypersele tapped me on the shoulder after I spoke about the importance of communicating uncertainties transparently, and asked me to email him the blog link…

Some tweeting about the session led to some lovely supportive messages from across the spectrum of opinions (thank you) and also some criticisms by people you might expect to be supportive. I’ve Storified these below.

And finally, Leo Hickman welcomes our ‘Rapunzel’ approach to communication. I was one of the invited palaeoclimate scientists at that meeting (actually, I probably invited myself), and can confirm it was very civil and productive.


Storify of the post-session Twitter conversation:

http://storify.com/flimsin/engaging


A model of models

First, apologies for the delay after the overwhelmingly great start and my promises of new posts. I’ve been wanting to write for a week but had other urgent commitments (like teaching) to honour first. I hope to post once a week or fortnight, but it will be a bit variable depending on the day job and the interestingness of my activities and thoughts. I do have a lot of ideas lined up – I wouldn’t have started a blog if I didn’t – but at the moment it takes me time to set them down. I expect this to get faster.

Second, thanks for (mostly) sticking to the comments policy and making this a polite, friendly, interesting corner of the web.

Before I begin blogging about models, I ought to talk about what a model is. Aside from the occasional moment of confusion when describing one’s “modelling job” to friends and family, there are several things the word might bring to mind.

Model is a terribly over-burdened word. It can be an attractive clothes horse, a toy train, something reviewed by Top Gear, or a Platonic ideal. I will talk about three further meanings that relate to the sense of “something used to represent something else”: these are conceptual, statistical, and physical. They are distinct ideas, but in practice they overlap, which can add to the confusion.

A conceptual model is an idea, statement, or analogy that describes or explains something in the real world (or in someone’s imagination). It is ‘abstracted’, simpler than and separated from the thing it describes. In science, before you can do experiments and make predictions you must have an idea, a description, a concept of the thing you are studying. This conceptual model might include, for example, a tentative guess of the way one thing depends on another, which could then be explored with experiments.

A statistical model is a mathematical equation that describes the relationship between two or more things, ‘things’ being more commonly referred to as ‘variables’. A variable is a very broad term for something that varies (ahem), something interesting (or dull) that is studied and predicted by scientists or statisticians*: it could be the number of bees in a garden, the average rainfall in the UK, or the fraction of marine species caught in the North Atlantic that are sharks. A statistical model can often be represented in words as well as equations: for example, ‘inversely proportional’ means that as one variable increases, the other decreases in proportion (double the one and you halve the other). The important thing about a statistical model is that it only describes, and doesn’t explain.
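
To make this concrete, here is a minimal sketch in Python of fitting a statistical model: a straight line relating two variables. The numbers are invented purely for illustration.

    import numpy as np

    # Invented data, purely for illustration: average summer rainfall
    # (mm) and the number of bees counted in a garden.
    rainfall = np.array([45.0, 60.0, 72.0, 88.0, 95.0, 110.0])
    bees = np.array([30.0, 26.0, 24.0, 19.0, 17.0, 12.0])

    # Fit a straight line, bees = a * rainfall + b. This is a
    # statistical model: it summarises the relationship in the data
    # without saying anything about the mechanism behind it.
    a, b = np.polyfit(rainfall, bees, 1)
    print(f"bees = {a:.2f} * rainfall + {b:.1f}")

The fitted line would happily ‘predict’ bee numbers from rainfall, but it contains no bees and no rain: only the outward behaviour of the numbers.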

A physical model is a set of mathematical equations that explains the relationship between two or more variables. It also refers to a computer program that contains these equations, and, to add to the confusion, these computer models are often called simulators. By ‘explain’ I mean that it is an expression of a theory, a physical law, a chemical reaction, a biological process, or cause-and-effect: an expression not only of knowledge but of understanding about the way things behave. The understanding might not be perfect – it might be a partial or simplified physical model – but the point is that it attempts to describe the mechanisms, the internal cogs and wheels, rather than simply the outward behaviour.
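
By contrast, here is an equally minimal sketch of a physical model: a zero-dimensional energy balance model of the Earth, built from energy conservation and the Stefan-Boltzmann law rather than from a fit to data. It is drastically simplified, and chosen here only to illustrate the idea.

    # Physical constants and parameters.
    SOLAR_CONSTANT = 1361.0  # incoming solar radiation at Earth, W/m^2
    ALBEDO = 0.3             # fraction of sunlight reflected to space
    SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4

    def equilibrium_temperature(albedo=ALBEDO):
        """Temperature at which absorbed sunlight balances emitted
        heat: (S/4) * (1 - albedo) = sigma * T**4."""
        absorbed = SOLAR_CONSTANT / 4.0 * (1.0 - albedo)
        return (absorbed / SIGMA) ** 0.25

    print(f"{equilibrium_temperature():.0f} K")  # about 255 K

The answer, about 255 K, is some 33 degrees too cold, because this particular model has no greenhouse effect: a partial, simplified physical model, but one whose cogs and wheels are physical mechanisms rather than fitted numbers.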

Physical models are the main focus of this blog, but there are many interesting links between the three: physical models often incorporate statistical models to fill in the gaps where our understanding is poor; a statistical model may describe another model (conceptual or physical). There are myriad different types of physical model, and even more uses for them. In the next post, I will talk about a few physical models I use in my research.

A general note about my plans. I think it’s important to first set out some basic terms and concepts, particularly for those who are not familiar with modelling, so please be patient if you are an expert. Before long I will also post more technical pieces, which will be labelled as such so as not to scare off non-experts. I’ll also start blogging about the day-to-day, such as interesting conference talks and random (mostly science-related) thoughts, rather than only pre-planned topics.


* The opposite of a variable is…a constant. These can be interesting too.

The Sceptical Compass

First, thank you. I have been overwhelmed by the response to this blog, and privileged to host the conversation of ninety-five individuals on my first post. Here is a Wordle of the comments (not including my own):

Second, some thoughts on terminology. Over the last year I have started to talk with people who do not agree with the majority view on climate science. And there is no homogeneous “sceptic” viewpoint. No binary grouping, Us and Them. I do use the terms “scientist” and “sceptic” as convenient shorthand (more on this later), but whenever I talk about public engagement I bring up the same points:

a) there is a continuous spectrum of viewpoints;

b) a large number of the unconvinced have numerate backgrounds (off the top of my head, physics, chemistry, computing, engineering, geology and finance seem to come up most frequently);

c) for various reasons, they have lost trust in the way we do, or the way we communicate, our science.

This week I’ve been thinking that the ‘spectrum’ description can be pushed further. If you’re familiar with the Political Compass, you’ll know that it extends the usual left-right political spectrum to a two-dimensional graph of left-right and libertarian-authoritarian (if you don’t know it, I recommend you do the quiz). Here’s my proposed equivalent.

The horizontal axis is scepticism: the degree to which one critically evaluates evidence, does not accept arguments from authority, and updates one’s viewpoint according to new information. This is the ‘Approach’ axis.

The vertical axis is the resulting ‘Conclusion’ axis: the degree to which one is convinced that humans are causing climate change and (if there is some degree of human cause) the scale and speed of that change. The sceptic/scientist shorthand I use corresponds to this axis. I have also started to use the less well-known upholder/dissenter and convinced/unconvinced.

The compass doesn’t include policy preferences, of course.

I’ve marked some examples. I don’t think it is a simple categorisation: like the Political Compass, people can move around through their lifetime, can be in different locations for different topics, and may be ‘smeared out’ vertically in the case of large uncertainty. I am not trying to label anyone here, and these are not rigidly defined regions. This is purely illustrative.

Convinced: horizontally, scientists and many non-scientists aspire to be sceptical; vertically, people in this region are convinced by the majority of these statements (for example, the majority of climate scientists).

Lukewarmer: horizontally, as previous; vertically, somewhat convinced (for example: concluding that humans cause some change but the rate is likely slow or very uncertain).

Unconvinced: horizontally, as previous; vertically, not convinced (for example, concluding there is warming but the human influence is small or negligible).

Believer: horizontally, uncritical and trusting of sources they consider authoritative; vertically, convinced of rapid, intense climate change and impacts caused by humans.

Unbeliever: horizontally, as previous; vertically, not convinced (for example, concluding there is no warming).

For the Bayesian nerds, I’ve just noticed the horizontal axis could be considered the width of one’s prior, and the vertical axis the mode of the resulting posterior.
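
A toy illustration of that thought, for the really committed (my own sketch, not part of the compass): with a normal prior and normal evidence, the posterior mode is a precision-weighted average, so the same evidence moves a wide prior much further than a narrow one.

    def posterior_mode(prior_mean, prior_sd, evidence, evidence_sd):
        """Mode (= mean) of the posterior for a normal prior and a
        normal likelihood: a precision-weighted average of the two."""
        wp = 1.0 / prior_sd**2       # precision of the prior
        we = 1.0 / evidence_sd**2    # precision of the evidence
        return (wp * prior_mean + we * evidence) / (wp + we)

    # Prior belief centred on 0; evidence pointing at 1.
    for prior_sd in (0.1, 1.0, 10.0):
        mode = posterior_mode(0.0, prior_sd, 1.0, 0.5)
        print(f"prior width {prior_sd:5.1f} -> posterior mode {mode:.2f}")

A narrow prior barely moves (mode 0.04 here); a wide, open-minded prior lets the evidence dominate (mode 1.00).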

I’ve chosen to put the dots at the vertical extremes for the uncritical side (Believer/Unbeliever) to reflect the fact that people who are not critically evaluating each statement, only trusting in another source or opinion, may be more likely to agree with the extreme ends and see the issues in black & white. I’ve chosen the Sceptical dots to be more moderate in the vertical (Convinced/Lukewarmer/Unconvinced) to reflect the fact that critical evaluations may lead to a more nuanced view with shades of grey. But I think of this as a continuous space.

There are no value judgements intended here. There are several reasons why there is not a one-to-one relationship between critical evaluation and conclusion: access to evidence; availability of time or technical expertise to evaluate it (reliance on judgement of others); general fallibility of humans. Scientists have differing opinions and interpretations of the same evidence, and we are not perfectly critical, so we can be at different levels on the vertical axis. For example:

– a scientist who models the physics of ice sheets might judge the statistically-based (‘semi-empirical’) methods that predict a rapid sea level rise as “not credible”: they would therefore be lower down the vertical scale;

– a scientist might search for an estimate of the current health impacts of climate change and, for lack of time or another reason, use a non-peer-reviewed estimate that reported severe impacts: they would therefore be higher up the vertical scale and further left horizontally.

I’d be interested to hear if people think this is a useful framework. If you don’t like it, please (kindly) suggest changes.


Third, the scope of this blog. I said to Peter Gleick that my aims were: to communicate my own research, because I am publicly funded and because it gives the research greater exposure; to engage sceptics (see above!); and to practise writing for a general audience. This post is already too long, and the time too late, for me to list every topic I intend to cover, but they will become apparent as I write. Some things I cannot do on this blog:

a) answer every question asked: this will depend on my knowledge and the extent to which I have time to answer (both can be improved by postponing to a later post);

b) address everyone’s problems with climate science: I am only one person, an early career researcher with a lot of things to wrap up by 31st July, and although I try to read outside my area I cannot promise to have the expertise or time to address every issue;

c) comment on policy choices.

I suppose this is just a restatement of the old truth that you cannot please all of the people all of the time.


Fourth, a comments policy.

So far I have let through every non-spam comment and automatically allowed previous posters to comment. I would like to trust people to be sensible with this and not have to start moderating out comments.

Therefore I ask you to comply with the following:

a) civility is essential;

b) accusations are not to be made;

c) the words denier, liar and fraud are not permitted (this list may increase): see (a) and (b);

d) generalisations are to be avoided;

e) if you have a particular bugbear or issue with earth system model uncertainty that is not related to the post topic please invite us once, perhaps twice, to discuss it in the very suitable Unthreaded section of Bishop Hill;

f) if you have a particular bugbear or issue with some other topic, or with policy, please discuss it elsewhere;

g) interpret comments in good faith: each is from a person with limited free time, frazzled nerves, and good intentions;

h) liberally sprinkle your comments with good humour, honesty, and ‘smiley’ or ‘winky’ faces, to keep the tone convivial.


Thank you.