Evidence for brain training effectiveness found lacking
Is it possible to maintain or even improve your thinking skills by completing relatively simple, repetitive games and tasks? That question provides the foundation for research on cognitive training, often referred to as brain training. Not only is this a very active research area, it also represents a multi-million dollar industry. Those marketing brain training games or apps often claim their products are designed according to the latest research findings. However, the most comprehensive review of the evidence for the effects of brain training has just been published. And it doesn’t make a compelling case for the potential benefits often reported.
What did the research say?
The review, led by Daniel Simons from the University of Illinois at Urbana-Champaign and published in the journal Psychological Science in the Public Interest, was the product of a two-year study to collate and summarise the evidence for brain training. The need for the study arose from competing claims being made by groups of leading researchers regarding the likely benefits of brain training (we previously reported one of those open letters). The researchers leading the review noted that the two groups seemed to be coming to quite different conclusions but were essentially drawing on the same body of literature. They therefore set out to systematically review the evidence, not just looking at the number of studies providing evidence for or against certain claims but also assessing the quality of their designs and methodology, an important part of how confident we can be in the claims being made.
The review is, to give it the fully technical term, an absolute whopper. It runs to 84 pages so I’ll give the briefest of overviews here. The researchers predominantly focused on the studies being cited by the companies developing or marketing brain training games or the researchers who wrote in general support of the benefits of brain training. Their reasoning for that focus was that
“These citations presumably represent the evidence that best supports the claims of effectiveness.”
The papers also had to be published and peer-reviewed; that is, they had to have passed through the normal process for a scientific paper, in which independent experts provide comments and feedback before the paper can be made available to the wider community.
In total, about 374 papers were reviewed, just over 130 of which came from the citations offered by researchers in support of the effectiveness of brain training. Of those, 86 were randomised controlled trials, the kind of design that gives us the most confidence about whether an intervention did or did not have a specific benefit. Of the randomised trials, about 15 papers came from the almost 10 years of follow-up of one of the largest cognitive training studies yet undertaken. The review goes through the various papers in detail, but here’s how they begin their conclusion:
“The evidence cited by Cognitive Training Data and by brain-training companies, together with evidence from working-memory training and video-game-training studies, provides little compelling evidence for broad transfer of training.”
Very broadly, people would generally show improvements over time in the game or task they were completing, and in some cases these improvements might transfer to very closely related tasks (though often only for very similar laboratory-based measures). Within this research area, that is referred to as near transfer, and while it is of interest, it’s not what we would ultimately want to achieve. Ideally, any training paradigm would show “broad transfer” beyond the specific aspects being trained. That was not supported by the review; in particular, there was little evidence that “real-world” outcomes were affected (though few studies assessed those aspects). The review goes on, however, noting that
“The limited evidence in the literature for transfer from brain-training interventions to real-world outcomes stands in stark contrast to the marketing claims of many brain-training companies.”
Indeed, the researchers discuss how, even if these training tasks had been shown to be effective, people would need to carefully balance the time (and cost) of doing them against the likely outcomes. The review notes that consumers ultimately cannot compare different brain training products directly, as the evidence doesn’t exist; similarly, it’s not clear how long any benefit might actually last (that is, would you need to keep playing the task or game forever?!).
The review comes to a close by noting that if the aim of using any of these products is enjoyment, or perhaps to help with motivation more generally, then people are free to make those choices. However, if they are expecting these products to reduce or delay age-related cognitive decline (or to help with improvements in an educational context, another area in which brain training products are increasingly visible), that isn’t something strongly supported by the evidence reviewed.
What did the media say?
The publication of the review received attention in many of the world’s leading media outlets, and you may well have seen something in your own preferred news source. I do, though, want to highlight a piece in The Atlantic by Ed Yong, partly because it does such an excellent job of distilling the review, but also because it has some revealing quotes from people on the different sides of the debate.
From Simons, who led the review, we hear “If you want to remember which drugs you have to take, or your schedule for the day, you’re better off training those instead,” while researchers not involved in the review praise the effort as “a tour de force” and “a service for the whole field”, though, as Yong points out, they are signatories of the open letter criticising the field which kick-started this whole process. Others seem to have taken the review in a positive fashion, with George Rebok, for example, noting “The review is very timely, and will help us raise the bar on the science of brain-training”.
Some of the best quotes, as you might expect, come from those on the opposing side of the argument. Henry Mahncke, a neuroscientist and CEO of one of the brain training companies referred to in the review, goes as far as to call the authors “moral monsters” for the examples they use to show that, even if benefits were observed, the amount of time needed to see them probably wouldn’t be worth anyone’s effort. Luckily, there are a few contributions to help the public think about what they might choose to do instead, including this from one of the UK’s leading neuroscientists, Dorothy Bishop:
“[brain training games] encourage people to engage in solitary activities at home, when they could be getting out and doing something that would not only stimulate the brain, but also be fun and sociable, such as learning a foreign language.”
The bottom line
This area isn’t just one of niche academic interest; it potentially affects the choices people make to maintain their brain health, particularly as they grow older. Indeed, the review highlighted how one of the largest companies developing and selling brain training games, Lumosity, agreed to pay a $2 million fine (the actual judgement from the US Federal Trade Commission was $50 million) for misleading claims about the effectiveness of their products. That judgement made specific reference to how those claims
“preyed on consumers’ fears about age-related cognitive decline, suggesting their games could stave off memory loss, dementia, and even Alzheimer’s disease.”
While most of us are probably keen to ensure we maintain our thinking skills as long as possible, for now, the evidence for brain training as a route to that end is less than compelling.