
The science of science in the media

2017/03/07

Our blog typically covers how science headlines in news outlets favour sensationalism over accuracy in reporting scientific studies. This time I am taking the opposite approach and covering a scientific paper about media reporting that itself received relatively little media coverage. The study, conducted by a team at the University of Bordeaux, looked at more than 5000 articles about risk factors for different categories of disease and how their results were covered in the media, distinguishing between coverage of the initial results and coverage of subsequent follow-up studies. The key message of this French study is not really surprising, yet quite revealing, even for those of us at Research the Headlines who are used to dealing with this phenomenon.

The analysis found that media coverage increased with the prestige of the journal that published the article, especially when the article was accompanied by a glossy press release. The most consistent trend was the high level of coverage given to studies reporting positive associations, that is, factors found to increase or reduce the risk of disease. In contrast, studies reporting no association attracted essentially no interest. The same was true of follow-up analyses that failed to replicate the positive associations reported in the original studies: these were only rarely covered in the media. In fact, of the 156 studies reported by newspapers that initially described positive associations, only about 50% were supported by subsequent studies.

This means that news outlets selectively choose to run stories about curing diseases with diet and lifestyle, and to tell us about the catastrophic effects of eating bacon and burnt toast, but stay silent when the same effects find essentially no support in independent studies. The paper recommends that journalists make a greater effort to report null findings, particularly when they invalidate their original stories. It also tells us that, in about half of cases, the findings behind newspaper reports on biomedical research are not confirmed by later studies. Before getting too excited about new research findings, then, and perhaps before running stories about research, journalists could wait for independent validation or at the very least note that none is available at present.

Quite disappointingly, and in line with its own findings, this paper, which describes essentially negative results (i.e. a lack of coverage for null effects), did not receive much media attention itself. According to Altmetric, the study received good Twitter attention (more than 350 shares) but was covered by only four news outlets.

But journalists are not the only culprits. Scientists should be more cautious when preparing their press releases and should aim to indicate how reliable their findings are. Indeed, the study's final conclusion urges scientists to take responsibility for communicating their own work. In the words of the authors:

they [scientists] are responsible for the accuracy of the press releases covering their work … This is not only a moral duty but also in the interest of science. Indeed, contrary to common beliefs, reception studies show that scientists are viewed by the public as more trustworthy when news coverage of health research acknowledges its uncertainty, especially if this is attributed to scientific authors.

 

“Poor replication validity of biomedical association studies reported by newspapers”, Estelle Dumas-Mallet, Andy Smith, Thomas Boraud and François Gonon. PLoS ONE, published February 21, 2017. http://dx.doi.org/10.1371/journal.pone.0172650
