Scientists can draw very different meanings from the same data, study shows

Crowdsourcing research 'gives space to dissenting opinions'

Giving the same information to multiple scientific teams can lead to very different conclusions, a report published today in Nature shows. And that's exactly why two researchers think scientists should share their data with others — well before they publish.

In this experiment, 29 scientific teams were given the same information about soccer games. They were asked to answer the question "Are dark-skinned players more likely to be given red cards than light-skinned ones?" Some scientists found that there was no significant difference between light-skinned and dark-skinned players, whereas others found a very strong trend toward giving more red cards to dark-skinned players. So, even though a pooled result showed that dark-skinned players were 30 percent more likely than light-skinned players to receive red cards, the final conclusion drawn from this exercise — that a bias exists — was a lot more nuanced than it likely would have been if only one team had conducted the analysis.

Scientists are notoriously guarded about their data

Scientists are notoriously guarded about their data. Part of that attitude has to do with the desire to prevent other research teams from publishing similar results before they have the opportunity. But some scientists also worry about having their work scrutinized because others might find an error; very few people like being double-checked. That's a problem because replicating results from studies that have already been published — notably in psychology — can be very tricky, if not impossible. A lot of the grunt work of verifying past conclusions happens slowly, through papers that are often published years later. That's why some researchers think that opening up the data analysis process before a study gets published could make a difference. That approach might lead to less flashy conclusions — but those conclusions would also probably be more robust.

"Once a finding has been published in a journal, it becomes difficult to challenge. Ideas become entrenched too quickly, and uprooting them is more disruptive than it ought to be," the study's author's write in Nature. The crowdsourcing approach is beneficial because it "gives space to dissenting opinions."

Today's study shows that researchers' results are influenced by the choices they make when they analyze raw data, says Raphael Silberzahn, a psychology and management researcher at The University of Navarra in Spain and a co-author of the study. But crowdsourcing results could correct for any individual lab’s bias.

Results are influenced by the choices scientists make when they analyze raw data

For example, in today's study, the researchers had to decide what information about the various soccer teams to include in the analysis — and how to count that information. Some of the researchers initially decided not to account for the fact that different players were evaluated by the same referees, Silberzahn says. Because referees are likely to display the same bias in different games, treating each of a referee's decisions as an independent data point can alter the result. So, after discussing their approach with other teams, some scientists changed their minds and counted the information differently. In other cases, researchers altered which variables they wanted to include in the analysis. Without that kind of pre-publication peer review, those changes might not have happened.
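To make that kind of analytic choice concrete, here is a minimal, hypothetical sketch in Python (using the statsmodels library) of how two teams might model the same player-referee data differently. It is not the study's code, and the data and column names (referee_id, skin_tone, red_card) are illustrative assumptions: one version treats every appearance as independent, while the other clusters the standard errors by referee so that a single referee's repeated decisions are not counted as independent evidence.

```python
# Hypothetical illustration only: synthetic data and illustrative column names,
# not the Nature study's dataset or code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic data: 50 referees, each officiating 200 player appearances,
# with a per-referee tendency that repeats across that referee's rows.
n_referees, appearances = 50, 200
referee_id = np.repeat(np.arange(n_referees), appearances)
skin_tone = rng.uniform(0, 1, referee_id.size)              # 0 = light, 1 = dark
referee_bias = rng.normal(0, 0.5, n_referees)[referee_id]   # shared within referee
log_odds = -3.0 + 0.3 * skin_tone + referee_bias
red_card = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

df = pd.DataFrame({"referee_id": referee_id,
                   "skin_tone": skin_tone,
                   "red_card": red_card})

# Choice 1: treat every row as independent, ignoring repeated referees.
naive = smf.logit("red_card ~ skin_tone", data=df).fit(disp=False)

# Choice 2: same model, but cluster the standard errors by referee.
clustered = smf.logit("red_card ~ skin_tone", data=df).fit(
    disp=False, cov_type="cluster", cov_kwds={"groups": df["referee_id"]}
)

# The estimated coefficient is identical; the uncertainty around it is not.
print("naive std. error:    ", round(naive.bse["skin_tone"], 3))
print("clustered std. error:", round(clustered.bse["skin_tone"], 3))
```

Whether to adjust for repeated referees, and how, is exactly the kind of judgment call that can push one team's result past the threshold of statistical significance while leaving another's short of it.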

Published findings are "difficult to challenge."

Convincing scientists to use this approach won't be easy, though. Not only would having multiple teams work on the same dataset be expensive, it would also be time-consuming. That’s why Silberzahn doesn’t recommend crowdsourcing for every question — just the ones with important policy implications. "The uncertainty of scientific conclusions about, for example, the effects of the minimum wage on unemployment, and the consequences of economic austerity policies should be investigated by crowds of researchers rather than left to single teams of analysts," he says.

But there's another incentive to conduct research this way. The crowdsourcing framework can "provide researchers with a safe space" in which scientists can vet their approaches, explore doubts, and get a second, third, or fourth opinion, Silberzahn says. That, in turn, might help researchers publish results with more confidence.

That may be true, but researchers who are interested in this approach will have to work hard to create "safe spaces" like the one that Silberzahn envisions. Right now, the culture of science doesn't encourage transparency; scientists guard their results right up until publication. So shifting away from that will take time. Still, some researchers really do want to see this happen, the authors note. "Scientists around the world are hungry for more reliable ways to discover knowledge and eager to forge new kinds of collaborations to do so."

To prove their point, the authors mention that many of the research teams involved in this exercise were recruited using just a Facebook post — and two tweets.