
Talking Headlines: Professor Dorothy Bishop on Science in the Media

2015/01/27

Welcome to a new series of posts, Talking Headlines, where we interview world-leading researchers and journalists about their experiences and views on how the media portrays research. This week, Silvia Paracchini interviews Dorothy Bishop, Professor of Developmental Neuropsychology and Wellcome Principal Research Fellow at the Department of Experimental Psychology in Oxford. Her main research focuses on understanding why some children have language problems despite normal development in other cognitive areas. She is a Fellow of the British Academy, a Fellow of the Academy of Medical Sciences and a Fellow of the Royal Society. In addition, she writes a popular blog on a range of academic topics, including many posts on journalism and science communication.

Dorothy, science in the media is a topic dear to you. What are the top three most common problems encountered in bad science reporting?

  1. Confusing causation and correlation. It’s so elementary yet people do it all the time, especially when it is an association that makes sense. For instance, people who exercise are healthier; children whose parents read to them are better readers. But maybe unhealthy people don’t feel able to exercise. Maybe parents who don’t read much share a genetic predisposition to reading problems with their children. Sorting out causation is one of the big problems that science tries to address, but it is seldom possible to do so from observational data: experiments are usually needed.
  2. Cherry-picking. Suppose I did a study of the effect of fish oil on children’s attention, and I found that those who received the intervention did much better than other children on a measure of concentration. It sounds exciting, but we would be less excited if we knew that the study took twenty different measures of attention and only one of them was statistically significant, and some went in the other direction. Even scientists get this wrong and fail to grasp the difference in meaning between an effect that was predicted in advance and one that emerges from a mass of exploratory tests [see the short simulation after this list].
  3. Sensational headlines. These often accompany a more reasonable article. I’ve found that if you complain to a journalist they’ll argue that they weren’t responsible for the headline. I’ve even suffered from this myself when pieces I’ve written have appeared in the press: they virtually never use a title the author supplies, and the substituted headline can subtly distort the meaning. Historically, this is because in newspapers the headline has to be both catchy and fit into a specific space – and is often written late in the process of assembling the paper, after the journalist has gone home. But it is so often a problem that I reckon newspapers need to rethink this and at least give authors the right to approve the headline before going to press.
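To make the cherry-picking point concrete, here is a minimal Python sketch (our illustration, not part of the interview): it simulates a trial in which the intervention does nothing at all, yet twenty separate attention measures are tested. The number of measures, the sample size and the simple z-test are assumptions chosen purely for illustration.

```python
# Minimal sketch (ours, not from the interview) of the cherry-picking problem:
# a trial with NO real effect, but twenty separate attention measures.
# All numbers below are illustrative assumptions.
import random

random.seed(1)

def one_null_trial(n_measures=20, n_children=40):
    """Simulate one trial with no true effect; count measures that look 'significant'."""
    significant = 0
    for _ in range(n_measures):
        # Treated and control scores come from the same distribution,
        # so any group difference is pure chance.
        treated = [random.gauss(0, 1) for _ in range(n_children)]
        control = [random.gauss(0, 1) for _ in range(n_children)]
        diff = sum(treated) / n_children - sum(control) / n_children
        # Two-sided z-test at the conventional 5% level (variance known to be 1).
        se = (2 / n_children) ** 0.5
        if abs(diff) > 1.96 * se:
            significant += 1
    return significant

n_trials = 2000
hits = sum(one_null_trial() > 0 for _ in range(n_trials))
print(f"Simulated trials with at least one 'significant' measure: {hits / n_trials:.0%}")
# Roughly 1 - 0.95**20, i.e. about 64%, even though nothing works.
```

With twenty independent tests at the 5% level, roughly two-thirds of these null trials still throw up at least one “significant” measure, which is why a single hit out of twenty tells us very little.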

Have you noticed any trend in science reporting – is it getting any better… or worse?

I think the standard of reporting in the UK quality newspapers is now generally good, whereas in the past it was more patchy – often excellent but sometimes diabolical. We still have problems with the tabloids, who seem to think their readers won’t be interested in anything that is nuanced or complicated, and so simplify and sensationalise stories. Even there, though, I think we are now seeing occasional very good pieces. For instance, this Daily Mail coverage of the Philae landing seemed good to me, whereas pieces like ‘Plants can talk, say scientists’ represent the sillier side of the newspaper.

Who holds responsibility for bad reporting: journalists, scientists, or institutions like universities?

I’ve found it quite an eye-opener blogging about this. When I first got interested I thought the fault was mostly with the journalists. One of my first blogposts years ago was to announce the "Orwellian Prize for Journalistic Misrepresentation", in an attempt to name and shame the worst offenders. I did award the prize for two years running, but even by the second year I was having doubts, for two reasons. First, there was growing evidence that university press offices were often putting out over-hyped accounts of research in an attempt to get media attention, and furthermore they were often aided and abetted by the researchers themselves. I had been naïve in assuming this was a rarity, and indeed a recent article in the BMJ has confirmed it is common [and you can read our Research the Headlines summary of the findings here]. Second, journalists at some papers were under pressure from editors to slant stories in particular ways. For the last article where the Orwellian prize was awarded, I decided it should go to the editor, Paul Dacre, rather than to the journalist who wrote the story. The explanation is on my blog here.

Do you think the increasing demand on researchers to produce “high impact” rather than basic research might be linked, at least in some cases, to publicising studies that in reality are still inconclusive?

No. Both are problematic issues, but I doubt that they are linked. People doing basic research have always had the problem that it is seldom newsworthy. You could, for instance, make a dramatic breakthrough in understanding the mechanism of how genes affect neural migration – to take an example close to your heart. But imagine trying to explain this to a journalist – I’m sure you have had this experience! They would have difficulty seeing any interest in it. In fact, for any research that is remotely linked to human development or disorders, the journalist will try to turn it into a story about either improving diagnosis or curing disease – even though this is a very distant prospect.

The ‘impact agenda’ has various interpretations, but mostly refers to the REF2014 evaluation of universities, where it had a very specific definition: you had to demonstrate how a particular piece of research had influenced non-academics, be they policymakers, patients, or the general public. Furthermore, you had to come up with hard evidence for the effect: hand-waving was not enough. This was quite different from ‘public engagement’ or getting one’s research into the media. Indeed, time spent on those activities would not be recognised as impact in REF2014.

Blogs, like yours, are used to help interpret findings of studies that might have generated hype in the media. However, blogs are sometimes labelled as unreliable tools because they’re not controlled by peer-review processes. What is your response?

I see blogging as a very different kind of medium from conventional publication, with its own pros and cons. But overall blogging is a very positive method of post-publication peer review – and again, I have blogged about this. Of course, anyone can say anything on a blog, which means it could be used to disseminate nonsense, but, provided comments are enabled, you can also get instant reaction and debate. I’ve found a blogpost is a good way to check if there are flaws in my argument, because if there are, you can guarantee someone will pick them up. The traditional journal article is hopeless for debate: the most one ever sees is the occasional position piece with a commentary and response, and the whole process is typically very slow and involves just one or two people. We do need the option of more informal and rapid interactions to discuss issues of interest in a public forum, and a blog is probably the best way of doing this.

You have a "celebrity scientists/quackery" category of posts on your blog, which covers the reporting of claims – sometimes quite strong ones, and sometimes involving financial interests – that are not supported by scientific evidence. What is the role of the media in promoting this phenomenon?

The media do love someone who will be outspoken, especially on controversial topics, and the people I have cited as celebrity scientists do just that. They also seek out opportunities to promote themselves, whereas many scientists run a mile from the media. So I can understand how this happens, but the media must take a lot of the blame for failing to do sufficient background checks on such people. They tend to take self-proclaimed expertise at face value, as in this case. We do now have the Science Media Centre, which can help journalists distinguish genuine researchers from those who have no real credentials.

And finally, what is the best tip you would give to members of the public who want to judge how far they can trust claims of sensational discoveries?

First, if somebody says A causes B, consider whether the association between A and B might have another explanation. Maybe B causes A. Maybe A and B are both the result of C. Second, join Twitter and look for mentions of the work. The Twitterati are often the first to note when research has been overhyped. Of course, you can’t believe everything that is said on Twitter either, but if respectable academic commentators are querying the claims, see what reasons they give. Finally, if it sounds too good to be true, it probably is.

Dorothy, thank you so much for your time and above all for your enlightening blog, which also provides a useful guide on how to use Twitter for academics!

This is the first in our new Talking Headlines series. Follow us on Twitter or sign up for our email alerts to make sure you hear more from our interviews with researchers and journalists about their experiences and views on how the media portrays research.
