Dumb robots that make mistakes actually help humans solve problems

The mistake-making bots jolt humans out of their set patterns of behavior

Illustration by James Bareham

When humans work together with not-very-smart robots, they’re better at solving problems than when they work only with other people, new research says. The finding could help boost productivity in digital workplaces. And it’s a step toward understanding an automated future where bots might actually help people make better decisions. Of course, that’s probably what the creators of Skynet thought, too.

Welcome to our automated future

Our automated future has already arrived — at least, to some extent. Algorithms and smart devices help people choose what to buy, what to read, what to watch, and which roads to take. But social scientists Nicholas Christakis and Hirokazu Shirado at Yale University wanted to understand what this means on a bigger scale: how do machines change how humans interact with each other? And can robotic colleagues help humans work together more effectively?

To find out, Christakis and Shirado challenged people to solve a puzzle working in human-only groups, or in groups that included bots masquerading as human players. That’s when the researchers made a paradoxical discovery. The mixed human-bot teams solved the challenge faster than the human-only ones, but only when the bots occasionally made completely random decisions. Basically, the dumb bot’s erratic behavior shook the humans out of their ruts and coaxed them toward more creative solutions, according to the study published today in the journal Nature. (It wasn’t painless — people in the mixed groups liked winning, but found their unpredictable robot teammates annoying.)

“What’s particularly exciting is that in our present world and certainly going into the future, people and algorithms are going to be making decisions together,” says Iain Couzin, who studies collective behavior at the Max Planck Institute and was not involved in this study. “There’s a real push for trying to understand these systems in a more quantitative way — let’s understand from a scientific perspective how collective decisions are made.”

Christakis and Shirado recruited 4,000 people via Amazon Mechanical Turk, an online platform where users are paid to complete online tasks. They divided the participants into 230 groups of about 20 people each. Each player controlled the color of one dot inside an array of connected dots, and each dot could be green, orange, or purple.

This is what the color challenge looks like to individual players.
Image by Nicholas Christakis and Hirokazu Shirado

Here’s the task: the group of 20 players has to ensure that every dot in their network is a different color from its neighbors. If five minutes elapse and there are still, say, two purples next to each other, the entire team loses. And they lose their bonus: each player gets $2 just for showing up, plus a $3 bonus that drops in value the longer the group takes to solve the problem. The challenge is that players can only see their own dot and their immediate neighbors’ — they can’t zoom out and see the entire array. It’s a gamified metaphor for people at work: we can see our individual tasks, and those of the people sitting next to us — but it’s harder to see the bigger picture.
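In graph-coloring terms, the group wins the moment its network reaches a proper coloring: no link connects two same-colored dots. Here’s a minimal sketch of that win condition in Python (the toy network and variable names are illustrative, not from the study):

```python
COLORS = ["green", "orange", "purple"]

# A toy network: each player controls one node, and edges link
# neighbors who can see each other's colors.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]
colors = {"a": "green", "b": "orange", "c": "purple", "d": "green"}

def is_solved(edges, colors):
    """The team wins only when no edge joins two same-colored dots."""
    return all(colors[u] != colors[v] for u, v in edges)

print(is_solved(edges, colors))  # True: every neighboring pair differs
```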

Some of the groups were composed entirely of people. But some of them included three bots in place of three humans. The bots were programmed to pick the color least likely to conflict with their immediate neighbors. But in some cases, the researchers added a little “noise,” or random variation, to the bots’ programming. So occasionally — 10 percent of the time for the “low-noise” bots, 30 percent of the time for the “high-noise” bots — a bot would change its dot’s color at random, for no reason at all.
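That description amounts to a greedy local rule with occasional random resets. Here’s a minimal sketch of how such a bot might decide, assuming the noise rates quoted in the article (the function name and interface are illustrative):

```python
import random

COLORS = ["green", "orange", "purple"]

def bot_choose_color(neighbor_colors, noise=0.1):
    """Greedy rule with noise: usually pick the color that conflicts with
    the fewest visible neighbors; a `noise` fraction of the time, pick at
    random -- the 'dumb' move that shakes humans out of their ruts."""
    if random.random() < noise:
        return random.choice(COLORS)
    return min(COLORS, key=lambda c: neighbor_colors.count(c))

# A low-noise (10 percent) bot whose visible neighbors are two purples
# and one green will usually pick orange, the conflict-free choice.
print(bot_choose_color(["purple", "purple", "green"], noise=0.1))
```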

Why would adding random variation boost a team’s performance?

Sometimes these bots replaced human players on the edge of the array, where they didn’t have many neighbors, and sometimes they were put in the middle — where they had a lot. Other times, they were placed randomly within the network. It turns out that teams with centrally located, low-noise bots solved the puzzle much faster than all-human teams or teams with the high-noise bots. It seems like a bizarre result. Why would adding random variation at a few key positions in a network change a team’s performance for the better?

The idea that adding errors or illogical behavior to a system can be beneficial actually has some precedent in the natural world. If you had perfect reproduction from generation to generation, for example, then you wouldn’t get evolution, Christakis says. “So you need a little bit of noise in reproduction, a little mutation to allow an organism to adapt to new environments,” he says.

Human players are depicted in circles, and bots are depicted in squares. Red connections indicate when the colors conflict.
GIF by Nicholas Christakis and Hirokazu Shirado

Similarly, if the human players inflexibly stick to the perfect color for their neighborhood in the coloring challenge, they might never solve the puzzle. “Everyone is thinking, ‘Well I’ve done my job, it’s some other jerk somewhere else that hasn’t done their job!’ But meanwhile that guy’s thinking, ‘I’ve done my job!’” Christakis says. “So what’s happening in that situation is that everybody is locked into what seems to be the best they could do locally, but globally they are not doing what’s best.”

To reach that global solution, the players had to tolerate making mistakes: picking a color that put them in conflict with their neighbors. Even just three bots behaving erratically taught humans far away in the network that this strategy could unlock solutions that hadn’t been obvious.

“The AI doesn’t need to be that smart, because humans are already smart.”

The corollary is that the push for super smart AI like Watson and AlphaGo might be misguided. Even “unsophisticated” AI — Christakis doesn’t pull any punches; he calls his bots dumb — can influence human behavior and, in this case, boost human performance. “The AI doesn’t need to be that smart, because humans are already smart — but we need some help,” Shirado says. “Maybe AI or robots can help people to help themselves.”

The findings are extremely thought-provoking, Couzin says. “They showed that these robots, or bots, within the network could actually dramatically change the ability for humans to come to decisions — even if they don’t know that they’re there,” he says. And it’s already happening — algorithms influence the decisions we make every day. “That’s something that we as humans need to begin to face, because if we don’t approach this from a scientific, ethical perspective, then we don’t understand it. And if we don’t understand it, then it can be abused.” That’s why studies like this one are so important, he says.

Christakis, for one, isn’t afraid of a future where unsophisticated bots coax humans toward making better decisions. “I would certainly feel more secure with dumb AI than with super smart AI,” he jokes. “I feel much safer about these bots than I would about Skynet.”