
Beware of the magnets?

2015/10/23

“Could your views on God and immigration be changed by using MAGNETS?” asks the Mail Online. Well, that’s certainly an attention-grabbing headline. The coverage then goes on to state that “Psychologists have discovered it’s possible to significantly change a person’s beliefs simply by targeting their brain with magnets” and that “People who were subjected to this treatment reported that their belief in God dropped by a third following the stimulation, while there was an increase in positive feelings towards immigrants”. So how accurately does this reflect the published article? Actually, it’s not too far off base. The scientific article, published in the prestigious journal Social Cognitive and Affective Neuroscience, describes a study in which a group of politically moderate U.S. university students was recruited and then divided into two groups.

At this point, alarm bells should probably be ringing – this study used what is known as a between-subjects design, where the behaviour of one group is compared with the behaviour of another group. Here, one group of participants was given the critical treatment (Transcranial Magnetic Stimulation, or TMS, to their medial prefrontal cortex), whereas the other group was given a control treatment (TMS to the same region, but at a much lower strength, known as sham stimulation). Then, both groups rated their agreement with some politically inflammatory essays on immigration and completed a questionnaire on their religious beliefs. The group that received the full-strength TMS had lower levels of supernatural belief and lower levels of anti-immigrant sentiment than the control group.

Now, why is the between-subjects design of this study my big issue? There is nothing wrong with this type of design per se – it’s certainly used in many psychological studies where it isn’t appropriate to have all of the individuals take part in all of the conditions (the alternative, where they do, is known as a within-subjects design). But you do need to be careful when making inferences about whether the manipulation can change a person’s behaviour (as the snippets quoted above imply) – strictly speaking, a between-subjects study only shows a difference between groups, not a change within individuals. Usually this isn’t a terribly dangerous inference to make but, critically, you need some good measures indicating that the groups were well matched before the manipulation took place or, at the very least, a sample large enough that you can assume both groups would be pretty well matched by the law of averages. A quick sketch of the difference between the two designs follows.
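To make the distinction concrete, here is a minimal sketch in Python (entirely invented ratings on an arbitrary scale – only the group size of 19 is taken from the paper). A between-subjects comparison tests two separate groups of people against each other with an independent-samples t-test, whereas a within-subjects comparison tests the same people before and after the manipulation with a paired t-test, which strips out much of the person-to-person variability.

# Hypothetical illustration – invented numbers, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Between-subjects: two separate groups of 19, each person measured once.
treatment = rng.normal(loc=4.5, scale=2.0, size=19)  # received full-strength TMS
control = rng.normal(loc=5.5, scale=2.0, size=19)    # received sham TMS
t_btw, p_btw = stats.ttest_ind(treatment, control)   # independent-samples t-test

# Within-subjects: the same 19 people measured before and after treatment.
before = rng.normal(loc=5.5, scale=2.0, size=19)
after = before - rng.normal(loc=1.0, scale=0.5, size=19)  # each person shifts a little
t_wit, p_wit = stats.ttest_rel(before, after)             # paired t-test

print(f"between-subjects: t = {t_btw:.2f}, p = {p_btw:.3f}")
print(f"within-subjects:  t = {t_wit:.2f}, p = {p_wit:.3f}")

The within-subjects version uses each participant as their own baseline, which is exactly the kind of before-and-after evidence the news coverage implies but the study does not provide.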

Unfortunately, this study fulfilled neither of these criteria. No comparison of the treatment and control groups’ religious or political ideologies prior to treatment is given in the paper, and each group contained only 19 individuals. With numbers that small, it would only take one or two ‘extreme’ individuals being randomly allocated to the treatment group to produce such a difference. But how do my concerns square with the (accurate) claim in the news article that religious beliefs dropped by a third? Well, this touches on another common misconception: that the size of an effect (a one-third difference, for example) tells you about the reliability of an effect (whether it is ‘statistically significant’). Because the average size of an effect can be swayed to a large degree by one or two extreme cases (called outliers), a scientific study will typically use statistical tests to examine the consistency of a particular effect, and it is these tests that serve as the conventional benchmark. In all fairness, the journal article does pass these tests at the conventional thresholds, but only just (you can see graphs from the paper in the Mail Online article, and decide for yourself how convinced you are by the magnitude of the effect – the differences between the heights of the bars – and the variability – the size of the error bars on each bar). Examining differences before and after the treatment would have been a far more convincing way to show any effects. The toy simulation below gives a feel for how little it takes, with 19 people per group, for a couple of outliers to move the numbers.
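Here is that simulation (again, made-up numbers, assuming NumPy and SciPy): both groups are drawn from exactly the same population, but two unusually low scorers happen to be allocated to the ‘treatment’ group, which is enough to shift that group’s mean noticeably when there are only 19 people in it.

# Hypothetical simulation – both groups come from the SAME population;
# only the group size of 19 matches the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

control = rng.normal(loc=5.0, scale=1.5, size=19)
treatment = rng.normal(loc=5.0, scale=1.5, size=19)
treatment[:2] = [0.5, 1.0]  # two 'extreme' individuals land in this group by chance

t, p = stats.ttest_ind(treatment, control)
drop = 1 - treatment.mean() / control.mean()
print(f"apparent drop in mean rating: {drop:.0%}")
print(f"independent-samples t-test: t = {t:.2f}, p = {p:.3f}")
# With samples this small, the group mean (and sometimes the p-value)
# can move substantially on the back of just a couple of individuals.

The point is not that this is what happened in the study, only that with groups this small, and no pre-treatment comparison reported, nothing in the published analysis rules it out.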

But perhaps I’m holding the Mail Online to an overly high set of standards here. The article itself, beyond the inference that this manipulation would turn a person into someone with a different belief structure (which is a bit of a stretch), isn’t over the top: it includes plenty of quotes from the authors and photographs of the TMS technique, in addition to the graph from the journal article itself. Of course, there’s no actual link to the article (when is there ever?), but there is at least a link to the journal, so it’s not difficult to track down. The Independent also provides a good overview of the study, with accurate reporting of the experimental details. At the other end of the spectrum, however, is the coverage in the Express, which states: “A bizarre experiment claims to be able to make Christians no longer believe in God and make Britons open their arms to migrants in experiments some may find a threat to their values.” As far as I’m concerned, such overly editorialised nonsense has no place in science reporting: it diminishes public trust in basic science and generates unnecessary hysteria.

Holbrook, C. et al. (2015). Neuromodulation of group prejudice and religious belief. Social Cognitive and Affective Neuroscience. DOI: 10.1093/scan/nsv107

 
