Why is a widely used concussion test failing to protect athletes?

One of sports' most dangerous injuries needs better diagnostic tools

It’s no secret that football has a head injury problem. Earlier this year, the NFL reached a $765 million settlement with thousands of players who accused the league of whitewashing the long-term dangers of concussions and pushing injured players back onto the field too soon. And last week, a PBS investigation indicted the NFL for being slow to address the problem. One major barrier to addressing the crisis? The league’s testing tools don’t necessarily work.

Many NFL, NHL, college, and high school teams look for concussions with a computerized diagnostic tool called the ImPACT. The ImPACT tests for memory problems and impaired cognitive processing by asking potential concussion victims to do things like remember the locations of objects on a screen or quickly pick the right answers to a series of simple questions. It’s simple enough to be taken on the field after an injury, and when combined with a survey of common physical symptoms, the test can quickly determine whether a player has suffered a serious head injury.

"There's no gold standard."

Or, at least, it can in theory. In recent years, researchers have questioned whether the ImPACT and other concussion-detecting systems are actually accurate. A 2007 review found a nearly 40 percent false-positive rate for the ImPACT, and smaller but still significant false-positive rates for other tests. A 2013 study, led by UT Arlington professor Jacob Resch, determined that the ImPACT classified healthy subjects as concussion victims between 22 and 46 percent of the time. Since these studies generally don’t include people with actual head injuries, it’s hard to say how often the reverse error occurs: a concussed player passing as healthy. Now, Resch has followed up his earlier work with a new study, this one exploring what might be impairing the ImPACT’s reliability.

The false positives in computerized testing, Resch says, reflect the overall difficulty of detecting concussions. "There's not a gold standard," he notes. One person’s concussion might show symptoms that are absent in another’s, and effective physical tests can be just as tough to create as cognitive ones: "We have one [concussion detection] unit, a balance unit, that costs approximately $100,000, and it [has a] 60 percent sensitivity to concussions." Currently, the most highly recommended approach is a holistic one that combines a clinical examination with tests like the ImPACT. "All it takes is one leg of the stool to show that the athlete may still be recovering from their concussion," he says. Running a full barrage of tests, though, isn’t always feasible. And improving the ImPACT would improve the state of concussion detection in general. So what’s wrong with it?

Resch’s research suggests that different versions of the ImPACT test don’t work well together. The ImPACT isn’t a single set of questions taken over and over; it’s a series of "forms" that tweak the problems to stop people from learning them and skewing the results. Uninjured athletes generally take Form 1 to determine a baseline, then switch to one of several other forms after being injured and during recovery.

To test the ImPACT, Resch gathered 108 college-age, concussion-free volunteers and started administering the forms. Every participant took Form 1 as a baseline, then came back 45 and 50 days later to take one of several other versions. Ideally, they’d get the same scores across several areas — verbal memory, reaction time, and other skills — as long as no participant suffered a head injury or was in the process of recovering from one. And in some cases, that’s what happened. But in others, subjects performed markedly better on the follow-up tests than on the initial form, or performed inconsistently across different forms.

"It’s important to note that athletes do get better — can get better — at taking the test."

The differences, while statistically significant, wouldn’t necessarily change how an athlete’s results are interpreted. "Ultimately, with the ImPACT, you are allowed a confidence interval, or a range of scores around your baseline performance," says Resch. These variations didn’t fall outside those boundaries. They do, however, shed light on what might be hurting the ImPACT’s overall success rate: some forms may simply be easier or harder than others, and Resch suggests that a practice effect, improvement that comes from taking a test repeatedly, may have kicked in, though this research can’t prove it. "It’s important to note that athletes do get better — can get better — at taking the test," he says.
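Resch doesn’t spell out here how that interval is computed. One standard neuropsychology approach is a reliable change index, which widens the band of scores that count as "no real change" as a test’s test-retest reliability drops. The Python sketch below shows that idea only; the function name and every number in it (baseline, standard deviation, reliability) are illustrative assumptions, not ImPACT’s actual scoring parameters.

```python
import math

def reliable_change_band(baseline, sd, test_retest_r, z=1.96):
    """Range of follow-up scores consistent with 'no real change'.

    baseline      -- the athlete's preseason score on one composite
    sd            -- standard deviation of that composite in a normative group
    test_retest_r -- test-retest reliability of the composite (0 to 1)
    z             -- 1.96 gives a 95 percent band

    All values passed in below are made up for illustration; they are
    not ImPACT's real parameters.
    """
    sem = sd * math.sqrt(1 - test_retest_r)   # standard error of measurement
    sdiff = math.sqrt(2) * sem                # standard error of a difference score
    return baseline - z * sdiff, baseline + z * sdiff

# Hypothetical verbal-memory composite scored 0-100:
low, high = reliable_change_band(baseline=88, sd=9, test_retest_r=0.7)
print(f"Scores between {low:.1f} and {high:.1f} count as 'no reliable change'")
```

The shape of the check is what matters: any follow-up score inside that band is treated as unchanged, so form-to-form differences or practice gains that stay within the band, like the ones Resch measured, never trip the alarm.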

Resch stresses that the ImPACT is a valuable part of concussion testing. "We use the ImPACT. We absolutely do. And as research shows, it’s one of the best tests on the market." But armed with his research, doctors can compensate for the ImPACT’s weaknesses by knowing which components of the test skew one way or the other. On the test-makers’ side, this paper could point the way forward for future revisions. And for the NFL and other leagues, it’s potentially a step toward a better, faster detection system — even if that’s just one part of cracking a complicated cultural and technological conundrum.