
Talking Headlines: with Dr Suzi Gage

2015/09/01

Dr Suzi Gage is a postdoctoral research associate in the MRC Integrative Epidemiology Unit at the University of Bristol, looking at associations between substance use and mental health. She also blogs for the Guardian science network, where she writes about topics related to her research area and epidemiology more broadly. In 2012 her blog, Sifting the Evidence, won the first UK Science Blog Prize, awarded by the Good Thinking Society.

Hi Suzi, what got you into blogging initially?

The main reason I started was that I was at the beginning of a PhD – I knew that at the end of it all I had to write a thesis, so I thought writing practice would be helpful. Two other PhD students wanted to start a blog too, so initially we set one up together.

Is there a particular type of science reporting that prompts you to blog?

If research is published in my field, I’m always keen to write about it. For example, I recently covered a study looking at the link between psychosis and cannabis. I am even more interested if a study gets a lot of media coverage and gets hugely overblown; with a blog you can often delve a bit deeper into where the new evidence sits in the literature as a whole. Like the “skinny jeans” story from a few months ago, where a single case study was wildly extrapolated in the press. Those are quite often fun ones to write!

Ah, we mentioned your post when discussing the “skinny jeans” story!

Where do you think the problem of poor science reporting lies and who holds responsibility? Scientists, institutions or journals?

There’s some cool research that’s been done by Chris Chambers at Cardiff University, which seems to suggest that hype gets added all the way along the chain: over-egged abstracts, hyped press releases and then sensationalised articles. So I think everyone holds responsibility, really.

In your blogs you are particularly careful in explaining the stats behind the science; are there common mistakes/problems linked to statistical interpretation at the basis of bad science reporting?

I’m a bit of a stats geek I suppose – but it is a bit infuriating when stats are manipulated, deliberately or not, and presented in a way that makes the evidence seem much stronger than it is. The particular culprit is relative versus absolute risk – a 20% increase in risk might seem massive, but if the absolute risk is only 0.0001%, then a 20% relative increase only takes it to 0.00012%, which is barely noticeable.
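To make the arithmetic concrete, here is a minimal sketch in Python using the illustrative figures from the answer above (the numbers are hypothetical, not drawn from any particular study):

```python
# Illustrative sketch of relative vs absolute risk.
# Figures are the hypothetical ones from the answer above,
# not from any particular study.

baseline_risk = 0.0001 / 100   # an absolute risk of 0.0001%, as a proportion
relative_increase = 0.20       # a headline-grabbing "20% increase" in risk

new_risk = baseline_risk * (1 + relative_increase)
absolute_change = new_risk - baseline_risk

print(f"Baseline risk:      {baseline_risk:.8%}")    # 0.00010000%
print(f"Risk after +20%:    {new_risk:.8%}")         # 0.00012000%
print(f"Absolute increase:  {absolute_change:.8%}")  # 0.00002000%
```

In absolute terms the change here is about two in ten million – exactly the kind of detail a “20% increase” headline leaves out.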

Yes! That is something we see too often, as in the claim that C-sections increase autism risk.

As a young scientist, have you ever felt your blogs could expose you too much and might not be entirely welcomed by potential colleagues/reviewers? Is it a risk worth taking?

I think about this quite a bit, but I try and be fair, and never personal in my blog, because after all it’s the evidence I’m interested in. I hope that it wouldn’t be damaging to my career, but I think that if people did have a problem with what I was writing, then I probably wouldn’t want to work with them anyway! Equally, I’m always pleased when people respond and disagree with what I’ve written and we can have a discussion about it – I make mistakes sometimes and it’s really good to get them corrected so I learn from them.

Do you think the rise of science blogging and social media makes press releases and headlines more cautious? Or is this a trend yet to be seen?

Hmm, I don’t think there’s a difference yet. Sadly I think press-release churnalism is still very much a problem. Though there are also really brilliant science journalists working in the media who do a great and critical job of providing us with science news stories. See, for example, Hannah Devlin (Guardian), Tom Chivers (Buzzfeed), Tom Whipple (Times) or Ian Sample (Guardian).

Finally, what is the best tip you can give our readers to keep in mind the next time they come across a sensational headline? How can they distinguish a real breakthrough from a bad article?

Check to see what isn’t there? Quite often key information is glossed over to make a story work better. In particular, if a change in risk is mentioned, think about what the absolute risk might be. Also, what type of study was it? Evidence from one patient is obviously much weaker than a large randomised trial or a meta-analysis. And if in doubt, find the original research paper and see for yourself! Sometimes journals will have lay summaries of their research, and even if not, it’s worth having a look anyway and seeing what you make of the research paper.

This is exactly what we would like to encourage students and school children to do with our just-launched competition! Thank you Suzi, we’ll be looking forward to your next blog post.
