Here’s a story from so far back that I’m no longer worried about calling out the people in question.
I sat in a ballroom packed with fellow conference-goers, hearing the latest science on learning and the brain, brought to us by a compelling, charismatic, and utterly self-assured speaker.
Amazingly, he told us, people’s underlying neurocognitive profile reveals itself through one simple test – whether your thumbs cross left or right when you clasp hands. Delighted chatter spread through the audience as we all tried it ourselves.
Other remarkable facts were to follow, including this one guaranteed to inspire an audience of educators: When you engage in challenging mental activity, like the new crop of tech-based brain games, you can actually prevent cognitive decline in old age. This includes staving off dreaded diseases such as Alzheimer’s.
So uplifting. So well presented. So wonderfully relevant to teaching and learning in the contemporary age.
And almost certainly wrong, down to the last word.
I’m a fan of conferences. There is nothing like a good keynote, or workshop, or even a crackling Twitter backchannel to reconnect you with like-minded people, get you thinking differently, and set you on a new course in your work.
At their best, conferences can be wildly energizing, and attending them is a privilege, one that I wish were more broadly accessible within academia. But there’s a real problem brewing within the education conference circuit, one that I don’t think we talk about enough but really need to.
It’s the problem of decorative neuroscience.
You probably know what I mean, because you have probably seen it yourself. It’s a superficial sprinkling of brain research, presented without depth or nuance, and not really intended to convey the main findings or the remaining questions. Decorative neuroscience isn’t there to do that. It’s there to lend an air of novelty, inevitability, and credibility to the speaker’s message.
This is not just a problem with conferences. It happens in lots of places: on blogs, in marketing materials, in books aimed at audiences of non-scientists. I’m not the only person who’s expressed frustration with the practice of borrowing neuroscience concepts, with varying degrees of accuracy, to prop up claims way beyond what the science itself actually warrants. But conferences today seem to be ground zero for wild claims, over-extension, and mistranslation.
I don’t want to come down on the whole idea of bringing science of all kinds to conversations about learning, or chill productive discussion by policing what is said and by whom. I love that more people every day are becoming excited about the science of mind and brain, and that they see it as relevant to the enterprise of teaching and learning. It’s what I’ve hoped and waited for my whole career, and the last thing I want to do is get in the way of that.
I also don’t want to make things harder for people who are doing the talking. It’s tough to be up at the keynote podium, and doing so requires the elision of many details that one would never skate over in, say, a piece of scholarly writing. We can’t knock speakers for trying to pack big concepts into one short speech, or for trying to make complex ideas engaging and broadly accessible. That is, after all, what the good ones get paid the big bucks to do.
But I do want the people who put up their precious money, time, and energy to hear these messages to get really good, scientifically grounded ones.
As I talked about in an address that I gave at the 2019 POD Network conference, it’s also extremely easy to over-interpret, over-apply, or just flat misrepresent the science when it fits into someone’s already polarized stance on an issue. Reputable-sounding research is a great whetstone against which to grind our axes, but doing so simply sets back the whole enterprise of thoughtful implementation of the best, most solidly established findings from all the learning sciences.
So how do you know if you may be encountering decorative neuroscience? Here are a few red flags.
- There are neuroscience-related slides – or diagrams, or other materials – that aren’t discussed or even explained, but rather thrown up and taken down as the speaker speeds towards a conclusion.
- These slides/diagrams/videos contain terms and illustrations that are probably unfamiliar or totally incomprehensible to the audience, and this seems like an intentional choice rather than a simple misapprehension.
- It’s unclear whether “findings” from “research” refer to converging evidence from multiple studies, or just a single isolated one.
- There are glaring examples of common misconceptions, such as neuromyths or conflating correlation with causation (cognitive activity prevents brain disease, ADHD is skyrocketing due to the spread of technology, and so on).
- It’s unclear which specific studies, books, or articles are the source of the claims the speaker is making.
- The speaker lacks an academic background, publishing history, or any other identifiable credentials that are directly relevant to the field of neuroscience.
That last one in particular pains me to write, so I want to say more about it.
I’m not ready to say that people with degrees in – or world-class research programs in, or big-name recognition within – neuroscience should be the only ones who get to talk about it. There is plenty of good translational science communication by non-scientists out there (see here, here, and here for just a few examples). We should treasure the amplification of evidence-based thinking about teaching and learning that occurs as a result.
But I do think that especially in this area, we need to be informed, discerning, and choosy consumers. It should be acceptable – common, even – to ask whether the person discussing neuroscience is an actual expert in the subject. There are different reasonable levels of that expertise – everything from having a strong self-taught background, to holding relevant degrees, all the way up to being a bona-fide world-class contributor of original research in the field. There’s flexibility in how much of that background we’d consider to be a bare minimum of credibility. Asking the question, however, should be non-negotiable.
It should also be non-negotiable to have some way of tracing back to the sources of the speaker’s claims. Talks are no place to include in-depth literature reviews, or even standard bibliographies, but mention can be made of key sources within the flow of the talk.
Better yet, speakers can put together a handout, web site, heck even a Pinterest board or Padlet with their source materials. This gives people who want to do a deep dive later a way to do that. It also helps nip in the bud any evidence-free nonsense claims (such as this one about drastically shrinking attention spans or the debunked “Cone of Experience”). Even the most solidly constructed studies occasionally have to be revisited when replications don’t pan out or new information comes along, so it’s good to know where the speaker is coming from for that reason as well.
Neuroscience is fascinating, trendy, at an early stage of development, and above all, extraordinarily complicated. All of these things create a perfect storm in which misinformation, mythology, and hype can thrive, even among highly educated people. This is why we need to vet our expert speakers, and the higher-profile the platform and the louder the mike, the stricter the vetting should be.
Science can be engaging, approachable, and accurate all at the same time. We know this. It’s time for our speaker lineups to reflect it as well.