
Talking Headlines: with Mo Costandi

2015/05/22

Today we talk with someone who writes the headlines.

Mo Costandi is a freelance science writer with a background in neuroscience. He writes the popular and influential Neurophilosophy blog, hosted by The Guardian, and is the author of 50 Human Brain Ideas You Really Need to Know, published by Quercus in 2013.

Mo, can you take us through the process that turns a press release into a news story?

I see hundreds of press releases every week, from various sources – especially websites for members of the press, such as EurekAlert!, and the press offices of the scientific journals, which send out releases about forthcoming papers and also send journalists their tables of contents in advance.

If I find one that looks interesting, I’ll click through to find out a bit more and then look at the paper. I might then pitch a story about it to one of the news editors I work with (at Nature, New Scientist, or Science). If they like the pitch, then I’ll go ahead and write the story.

Not all my news stories start out this way. I imagine that most newspaper reporters rely on press releases to find and report on the week’s biggest science stories. I don’t work on a science desk, so I have more time to find cool papers in slightly more obscure journals, which might not otherwise be reported at all. (See, for example, this story I wrote back in August 2012, which got picked up by the Daily Mail.)

I don’t rely on press releases for scientific information. My information comes from the paper, and from interviews with the researchers who did the work and others who know the field. I won’t write about a study without reading the paper first.

When working to tight deadlines, how much fact checking can be done?

Every story should be fact-checked properly before it’s published, but that’s the news editor’s responsibility, so it’s up to them to ensure there’s enough time for it.

How easy is it to balance newsworthy stories with actual scientific merit?

It’s not really a problem. There’s a lot of research being published, so it’s not too hard to find robust studies that are designed well.

And, of course, the scientific merit of research papers, or rather lack thereof, is now big news itself. I’m talking about the on-going debates about the reliability and validity of neuroscience and psychology research. I think it’s important for journalists to report on this. As well as highlighting the limitations of research, it informs readers about the scientific process, and reminds them that science is a human endeavour, done by people who are just as prone to biases and error as everyone else. (See, for example, my recent article about the methodological and interpretative problems inherent in functional neuroimaging research.)

Would you say demand for science stories is growing and people are more interested in science?

No. Public interest in neuroscience seems to have grown a lot in recent years, overall, but I don’t think this translates to a growing interest in science more generally. Most newspapers have dramatically cut back their science coverage, and some have even stopped publishing their dedicated science sections altogether.

Social media can be quick to react to hype in science stories. Does this make science writers more cautious about the tone of a piece?

I’ve always tried to make my writing as accurate as possible, so the growth of social media hasn’t really changed how I work. Scientific studies can be very complex, often producing results that are difficult to interpret and/or inconclusive, and I think it’s important to convey these nuances when writing about science.

Social media such as blogs and Twitter also leave scientists open to criticism, as they are increasingly being used for post-publication peer review. Notable examples of this are the “arsenic life” paper, which was debunked by Rosie Redfield on her blog, and last year’s STAP cell papers, which were immediately scrutinised on social media. In that case, scientist bloggers like Paul Knoepfler crowd-sourced and published failed attempts to replicate the findings; as a result, both papers were subsequently shown to be fraudulent and were retracted.

Here at Research The Headlines we quite often report on sensationalism and hype in news stories not supported by the science. A recent study suggested that in many cases the hype originates in the original press release prepared by the scientists. Do you agree?

I don’t doubt the results. Scientists have always been under pressure to publish their work in prestigious journals, in order to advance their careers and secure funding for their research, but academia appears to have become increasingly competitive in recent years, due to funding cuts, fewer permanent positions, and various other reasons that I’m sure you’re familiar with.

They also have the added pressure of trying to promote their work in the mass media, because many university departments now emphasise the importance of what they call “public engagement,” and also because publicity draws the attention of funding bodies. It could be argued that this situation gives some scientists an incentive to hype up their results in order to make them more newsworthy. I daresay these same issues contribute to the recent perceived increase in scientific fraud.

Having said that, I believe there are other reasons for sensationalist reporting. Departmental press officers sometimes over-hype new research findings to make them more attractive to reporters and, of course, journalists themselves sometimes exaggerate the implications of the research.

On the other hand, scientists are often labelled as poor communicators. Would you like to end our chat with a few tips on how scientists can talk about their findings more effectively and engage with the public?

Members of the general public don’t understand technical scientific language, so I think the best advice is to use plain English instead of jargon. Say that something “goes up and down” instead of “fluctuates”, say that a finding is “unexpected” rather than “anomalous”, and say “caused by humans” instead of “anthropogenic”. Instead of blinding them with science, use clear and simple language to help them to see.

I’ll remember that! Next time I need to say a gene is repressed or over-expressed I’ll say it is switched off or on. Mo, thank you very much for chatting with us.

Sumner, P., et al. (2014). The association between exaggeration in health related science news and academic press releases: retrospective observational study. BMJ. DOI: 10.1136/bmj.g7015.
