Talking Headlines: with Kevin Mitchell
Kevin Mitchell is an Associate Professor in the Smurfit Institute of Genetics at Trinity College Dublin. His research aims to understand how genes contribute to neurodevelopment and how they are involved in psychiatric and neurological disorders. Kevin runs a popular blog on genetics and neuroscience topics, which has been cited as an influential and authoritative source for post-publication peer review. He is listed among the top 100 neuroscientists on Twitter and has been named an influential scientist in the Twitterverse.
Hi Kevin, what got you into blogging and who is your main audience?
I started the Wiring the Brain blog after organising an interdisciplinary conference of the same name, which ran first in Ireland in 2009 and 2011 and has run every two years since at Cold Spring Harbor Laboratory. The goal of the blog was to provide an academic forum as a follow-up to the conference, for discussion of issues spanning genetics, neuroscience, neural development, psychology and psychiatry, as a means of bridging the gaps between these traditionally disparate disciplines. The main audience was initially intended to be other scientists, but the blog has changed quite a bit over the years and I now write as much for the general public as for scientists. It turns out, in fact, that these two things are not so different from each other – scientists are so specialised these days that for many topics outside our chosen fields, we might as well be members of the general public.
You are also very active on Twitter where you can get quite vocal about poor science reporting. Can you give a recent example of bad reporting that caught your attention?
A recent example of bad reporting was the coverage of a study that claimed to show that C-sections were associated with an increased risk of autism. Headlines in Irish and UK papers trumpeted a 23% increased risk of autism in babies born by C-section! This is an alarming statistic that would certainly worry any parents who had or were scheduled to have a baby by C-section. As is often the case, however, this increase of 23% was not in absolute but in relative risk – i.e., it was an increase of 23% of the baseline risk of about 1%. So the risk went from 1% to 1.23%, not from 1% to 24%, as the headlines seemed to suggest! I wrote a blogpost exploring this study in detail as an example of how poorly epidemiological data are reported and how misleading this can be. As it happens, the data were not particularly compelling to begin with and any correlation that does exist certainly does not imply causation.
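To make that distinction concrete, here is a minimal sketch of the arithmetic involved (the 1% baseline and 23% relative increase are the approximate figures discussed above):

```python
# Converting a reported *relative* risk increase into *absolute* risk.
# Figures are the approximate ones from the discussion above.

baseline_risk = 0.01        # ~1% baseline risk of autism
relative_increase = 0.23    # the "23% increased risk" from the headlines

absolute_risk = baseline_risk * (1 + relative_increase)
print(f"Risk goes from {baseline_risk:.2%} to {absolute_risk:.2%}")
# -> Risk goes from 1.00% to 1.23% (not from 1% to 24%)
```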
Yes! This story caught our attention too, and C-sections were in the media again for their reported effects on brain development.
And an example of good reporting?
The recent WHO announcement that processed meats are definitively carcinogenic and can be considered in the same category of risks as cigarette smoking gave ample opportunity for both good and bad reporting. The bad reporting typically did not distinguish between the statistical strength of the evidence for some association with cancer (which is quite compelling) and the importance or size of that effect (which, unlike the case for cigarette smoking, is very small). Dick Ahlstrom wrote an excellent piece for The Irish Times making these distinctions, which I was pleased to comment on. Ted Underwood (@Ted_Underwood) captured the reaction to the coverage of this WHO announcement by tweeting that: “A stubborn love of bacon just taught more Americans the difference between p values and effect size than 100 stats courses could”.
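That distinction between strength of evidence and size of effect can be illustrated with a small simulation (a hypothetical sketch using numpy and scipy; the numbers are invented for illustration, not taken from the WHO analysis): with a large enough sample, even a negligible effect produces a vanishingly small p-value.

```python
# Illustrative only: strong statistical evidence (a tiny p-value)
# does not imply a large or important effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000                                       # very large sample
control = rng.normal(loc=0.00, scale=1.0, size=n)   # baseline group
exposed = rng.normal(loc=0.01, scale=1.0, size=n)   # tiny true effect (~0.01 SD)

t_stat, p_value = stats.ttest_ind(exposed, control)
effect = exposed.mean() - control.mean()
print(f"p = {p_value:.1e}, effect ~ {effect:.3f} standard deviations")
# The p-value is minuscule, yet the effect itself is negligible.
```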
[We could not ignore the bacon story either!]
More generally, I think an important aspect of good science reporting is deciding what not to write about. Journalists are inundated with press releases hyping all kinds of studies, some much worthier of attention than others. I am always pleased to chat with journalists looking for an opinion on a study, to help them decide whether to write a piece on it or not (often, the answer is no).
Why do you think autism is a regular feature of poor media reporting? What do we really know about what causes autism?
Autism seems to attract far more than its share of misinformation, with various people claiming it is caused by vaccines, fluoride, GMOs, iPads (yes, really), cold parenting (yes, still), C-sections (as above), and, inevitably, gluten. I think it attracts these kinds of random ideas for several reasons. First, it is an extremely emotive issue; parents are understandably keen to know the cause of their child’s distress and, hopefully, to find some way to correct it. Second, the scientific and medical communities had until recently been able to provide very little in the way of answers. And so, finally, that void has often been filled by people arguing for various environmental triggers, usually without any scientific data to back up those claims. These are often presented as the real science that the (evil) medical establishment doesn’t want you to know; in fact, they are commonly being pushed by opportunists looking to profit by selling alternative treatments to vulnerable parents.
The irony is that, of all the disorders for which one might suggest an environmental cause, autism is probably the worst choice, as it is one of the most heritable conditions we know of. By far the biggest risk factor for autism is being related to someone with autism. If you wanted to express that in the kind of terms used in Daily Mail headlines (i.e., an increase in relative risk), you would say that having the same genotype as someone with autism increases risk by >8000%! In short, autism is genetic, and the field is making rapid progress in identifying specific genetic causes in an ever-growing proportion of autism cases.
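As a rough worked example of where a headline figure like that could come from (the prevalence and twin-concordance values below are ballpark assumptions for illustration, not results from any specific study):

```python
# Illustrative arithmetic behind a ">8000%" relative-risk figure,
# using ballpark assumptions rather than data from a specific study.

baseline_risk = 0.01          # ~1% population prevalence of autism
identical_twin_risk = 0.81    # assumed risk given an identical twin with autism

relative_risk = identical_twin_risk / baseline_risk
increase_pct = (relative_risk - 1) * 100
print(f"{relative_risk:.0f}-fold risk, i.e. a {increase_pct:.0f}% increase")
# -> 81-fold risk, i.e. an 8000% increase in "Daily Mail" terms
```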
But it is not unusual to see “Gene for autism found” in the headlines. This applies to other disorders and traits, ranging from diabetes and happiness to homosexuality. Why are these headlines misleading?
This is another common problem in reports of genetics research and one that is promulgated by researchers themselves. It stems from a conflation of the two meanings of the word “gene”: either as a unit of heredity – some element that is passed from parent to offspring that affects a trait – or as a section of DNA that codes for a particular protein. When we say a “gene for autism”, what we really mean is a genetic difference (a mutation) that causes autism. But that does not mean that the function or purpose of the gene, as a piece of DNA, is to cause autism (nor is its function to prevent autism). The function of the gene is to encode a protein. When that protein is disrupted, autism may result, though there may be no direct link at all between what the protein does in cells and the emergent phenotype that we recognise psychologically as autism.
The other misleading aspect of that phrase, a “gene for X”, is that it makes it sound like the gene and the trait it causes must exist for a reason, like there must be some selective advantage associated with it or else natural selection would have purged it from the population. If you word it instead as a “mutation that causes X”, then you avoid that conceptual trap. In fact, the continued existence of a genetic disease like autism does not imply that it brings some kind of advantage that counteracts the negative effects on evolutionary fitness. Natural selection works extremely efficiently to remove mutations that cause severe disease from the population; it is the continued production of new mutations in a very large set of potential “disease genes” that explains the continued prevalence.
Do you think the increasing pressure on researchers to demonstrate the importance of their research is among the factors behind poor press releases from researchers themselves?
Yes, there is clear evidence that a lot of poor science reportage stems initially from hyped and sensational press releases. And yes, the temptation to indulge in this kind of hyperbole stems in part from funding agencies demanding more and more evidence of “impact”. Extensive press coverage of some discovery is one metric that can be used to illustrate such impact, regardless of whether the study justified that coverage or of the accuracy of the reporting.
You have recently published a piece in The Irish Times advocating for the support of basic research. Do you think that media reporting and public perception might somehow influence how research funds are distributed?
The piece I wrote for The Irish Times was presenting the evidence for the economic value of investment in basic research and higher education. As scientists funded with taxpayers’ money, we have a duty to show that some value accrues from that investment. Unfortunately, we have done a poor job of communicating the true nature of scientific progress and have allowed the debate to be framed around short-term deliverables from individual projects. The misperception of how science works is accentuated by media coverage, which focuses on new, one-off discoveries, rather than the incremental, steady progress made through small advances, each building on a massive framework of prior knowledge, as part of a collective, international enterprise. That’s perfectly understandable actually – discoveries are news, incremental advances are not. But it does give basic research a public relations problem. In my view, this can best be countered by consistent outreach by scientists who can present scientific content in a way that doesn’t have to be framed as “news” – that’s one reason why blogging is such an important complement to professional science journalism.
And finally, what would be your tip to our readers on how to get it right when they come across some sensational headlines?
Readers will always be somewhat at the mercy of the journalists and headline writers, of course, but there are ways to spot sensationalised science that is being misrepresented or that is less likely to hold up. For epidemiological studies, you can ask what kind of sample size was involved – the bigger the better, obviously! And how is the risk being reported – is it relative or absolute? Does the experiment actually prove something, or does it only show a correlation? Have other experts been asked for comment, or has the journalist only spoken to the authors themselves? Or, even worse, is the piece just a reworked press release? (These often have no byline attached.) More generally, do the findings of the experiment stand alone or are they supported by a wider framework? Free-floating experiments that are not moored to a constraining body of work (like many in social psychology, for example) make great stories because they require no background knowledge. But that also means they have no supporting foundation and are thus more likely to be wrong. That’s why science and news don’t always go together – the most newsworthy results today may be the ones that fail to replicate tomorrow.
Kevin, thank you so much. These tips will be very useful for the final push of our Rewrite the Headlines competition for students, and they complement our own How to “Research the Headlines” series. You gave us lots to think about, not only for our lay audience but also about the value of outreach in establishing a link between scientists and the public.