Michael LaCour was a promising young social scientist until his eye-catching study about swaying public opinion on gay marriage, published last year in one of the world’s leading journals, turned out to have been built on data that can’t be found.
Anil Potti was a rising star at Duke whose studies of cancer genetics drew heaps of praise — and research dollars — until his academic career crumbled under questions about his résumé, and the integrity of his findings.
The stories of Potti and LaCour are mirror images of each other. That’s a good thing — it says a lot about how the scientific community is changing its approach to correcting its mistakes.
The paper quickly raised red flags
First, a primer on the case of Anil Potti — who, along with Duke, settled a number of lawsuits related to his research earlier this month. Potti’s rise began in 2006, when he and his mentor, Joseph Nevins, published a paper claiming to show that genetic "signatures" could predict how patients with cancer would respond to chemotherapy. That paper quickly raised red flags for biostatisticians at M.D. Anderson, a prominent cancer research center, who worried that the data were flawed.
But none of the journals that published Potti’s supposedly breakthrough findings would run the critiques. Instead, the criticism ended up in a statistics journal, unknown to most cancer researchers. Duke did investigate, but said the findings gave it enough confidence to continue clinical trials based on Potti’s work.
Then, in 2010, The Cancer Letter reported that Potti had faked a Rhodes scholarship. After that, Duke halted the trials based on his research, and Potti resigned from his post. People began to look at his data with more skepticism, and the validity of Potti’s research quickly unraveled; his retractions now number in the double digits. Today, he practices oncology in North Dakota.
When the statisticians drew first blood, the community listened
But with LaCour, when the statisticians drew first blood, the community listened. Two graduate students at the University of California-Berkeley were excited enough about his original study, published in Science in December, to try to extend the work. Upon close examination, though, they quickly found problems with the data and contacted another author on the paper. He took their concerns seriously, and LaCour was forced to admit that he had faked details of how the surveys were conducted and of how the study had been funded.
The ensuing media scrutiny of his public persona dug up a fabricated résumé. Using a browser extension he had cleverly installed "to notify me when his website changed," Jesse Singal of New York Magazine’s The Science of Us has been keeping a tick-tock of LaCour’s CV misrepresentations, including a nonexistent award (sound familiar?). LaCour’s paper was formally retracted on May 28th, and he offered a response to the allegations, which many found wanting, on May 29th. As is often the case in such situations, another of his papers is already facing scrutiny.
The résumé fibs may turn out to be an afterthought in the LaCour case
The résumé fibs, damning as they are, may turn out to be an afterthought in the LaCour case. That’s the opposite of what happened to Potti, whose CV misdeeds were what finally forced people to think critically about his data: lying on his résumé, not falsifying results, ultimately brought down his research career. So what’s changed over the past five years?
One important factor is the growing recognition among science journals that the tools of statistics represent an effective defense against fraud. Consider, for example, the current record holder for scientific retractions, Yoshitaka Fujii. Thanks to an intense statistical analysis, Fujii, a Japanese anesthesiologist, was found to have made up data in 172 studies. Add in papers for which he did not appear to have obtained the proper ethical approvals, and his total now stands at 183 retractions, all in the past four years.
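To give a flavor of how such statistical screens work, here is a minimal sketch in Python. It is not the method used in the Fujii investigation (that analysis, by the anesthesiologist John Carlisle, pooled many variables across Fujii’s entire body of work), and the numbers below are invented purely for illustration. The core idea: in an honestly randomized trial, p-values comparing baseline variables between the two arms should be roughly uniform between 0 and 1, so results that are consistently "too balanced" are a red flag.

```python
# Toy illustration of a statistical fraud screen. NOT the actual analysis
# from the Fujii case; the summary statistics below are invented.
from scipy import stats

# Hypothetical (mean, sd, n) for one baseline variable, two trial arms
# per study, across several studies by the same author.
studies = [
    ((50.1, 8.0, 30), (50.2, 8.1, 30)),
    ((63.4, 5.5, 25), (63.5, 5.4, 25)),
    ((71.0, 9.9, 40), (71.1, 9.8, 40)),
    ((55.2, 7.2, 35), (55.2, 7.3, 35)),
]

# For each study, a two-sample t-test computed from the reported summaries.
pvals = [
    stats.ttest_ind_from_stats(m1, s1, n1, m2, s2, n2).pvalue
    for (m1, s1, n1), (m2, s2, n2) in studies
]

# Under honest randomization, baseline p-values should look uniform on
# [0, 1]; a Kolmogorov-Smirnov test flags a suspicious pile-up near 1
# (groups that are "too similar" to be real).
result = stats.kstest(pvals, "uniform")
print("baseline p-values:", [round(p, 3) for p in pvals])
print(f"KS vs. uniform: statistic={result.statistic:.3f}, p={result.pvalue:.4f}")
```

No single p-value proves anything, of course; the power of screens like this comes from aggregating many variables across a researcher’s entire output.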
It would be a hell of a lot better if journals applied such analyses to all studies prior to publication; the vaunted peer review system is supposed to be a quality filter. But given the sheer volume of papers released each year — at least 2 million and rising — that’s not realistic. And the quantity of papers likely won’t change anytime soon, given that academic career advancement depends almost entirely on publishing in peer-reviewed journals.
If you want to see the self-correcting nature of science in action, check out PubPeer.com
Fortunately, a growing movement called post-publication peer review offers a reasonable compromise. If you want to see the self-correcting nature of science in action — and by that we mean robust critiques, a number of which have led to corrections and retractions — check out PubPeer.com. The site, launched in 2012, allows commenters to discuss virtually any peer-reviewed paper that exists online. Post-publication peer review has been around as long as papers have been published, but it usually happens inside labs. PubPeer allows it to happen in public.
The sophisticated Photoshop forensics that commenters use to uncover potentially manipulated images can make most of the discussions quite technical, as you’d expect given the site’s audience of scientists. And not everyone likes PubPeer — especially some of the researchers whose work is being questioned.
But PubPeer and other efforts, such as the Center for Open Science and work by the University of Pennsylvania’s Uri Simonsohn and colleagues, show that free and open critiques, powered by the internet, could dramatically speed up science’s self-correction process. Even in the decade before any of them became active, the number of retractions grew dramatically, from about 40 to 400 a year, mostly because of better detection.
Fake résumé scandals will still cripple lots of careers — and rest assured, we’ll cover those stories. But relatively simple data analysis is a much more robust way to weed out fraud. Bring on the geeks.
Adam Marcus, the managing editor of Gastroenterology and Endoscopy News, and Ivan Oransky, the global editorial director of MedPage Today, are co-founders of Retraction Watch, a MacArthur Foundation-funded blog that tracks scientific errors.
Correction: An earlier version of this article referred to Jesse Singal as female; he’s not. We regret the error.