
Children are susceptible to peer pressure from robots


‘Psst, kid, you wanna try some logistic regression?’


A Nao robot, the same type used in the experiments.
Photo by Jens Schlueter / Getty Images

“If your friends told you to jump off a bridge, would you?”

It’s a warning you probably heard in childhood, a hypothetical example of the dangers of groupthink. And it likely inspired more impulsive behavior than it prevented. But in the not-too-distant future, parents may have to update this little adage:

“If a robot told you to jump off a bridge, would you?”

Because, as it turns out, quite a few probably would.

In a study published today in the journal Science Robotics, researchers from Germany and the UK demonstrated that children are susceptible to peer pressure from robots. The findings, say the researchers, show that, as robots and AIs become integrated into social spaces, we need to be careful about the influence they wield, especially on the young.

Robots and AI could use social influence to change our behavior

The paper’s authors ask, “For example, if robots recommend products, services, or preferences, will compliance [...] be higher than with more traditional advertising methods?” They note that robots are being introduced to plenty of other domains where social influence could be important, including health care, education, and security.

The study in question is actually a reimagining of perhaps the best-known and most influential demonstration of social conformity: the Asch experiment. This series of tests, first carried out in 1951 by Polish-American psychologist Solomon Asch, illustrates how humans can be influenced by groupthink to the point where we will deny even the most obvious facts.

One of the cards used in the original Asch test. Participants had to say which line on the right was closest in length to the line on the left.
Image: Creative Commons

In his experiments, Asch invited 50 male college students to take part in a “vision test.” The students were seated around a table and shown a line on a chart next to a group of three other lines of varying lengths, labeled A, B, and C. They were then asked, one at a time, to say which of the three lines was closest in length to the first. The answer was obvious, but what participants didn’t know was that all but one of the students were actors. And when the ringers were called upon to give their answer, they all gave the same, incorrect response.

When it came to the turn of the real test subject (who always went last), roughly one-third caved to social pressure and gave the same, incorrect answer as their peers. Across the 12 such trials that Asch conducted, roughly 75 percent of participants conformed in this way at least once, while only a quarter never conformed.

“It’s such an elegant little experiment that we just thought: let’s do it again, but with robots,” says Tony Belpaeme, a professor of robotics at the University of Plymouth and co-author of the paper. And that’s exactly what he and his colleagues did, adding the extra twist of testing first groups of adults and then groups of children.

The results showed that, while the adults did not feel the need to follow the robots’ example, the children frequently did. “When the kids were alone in the room, they were quite good at the task, but when the robots took part and gave wrong answers, they just followed the robots,” says Belpaeme.

Images showing the robot used (A); the setup of the experiment (B and C); and the “vision test” as shown to participants (D).
Photo by Anna-Lisa Vollmer, Robin Read, Dries Trippas, and Tony Belpaeme

Although it’s the susceptibility of the children that leaps out in this experiment, the fact that the adults were not swayed by the bots is also significant. That’s because it goes against an established theory in sociology known as “computers are social actors,” or CASA. This theory, which was first outlined in a 1996 book, states that humans tend to interact with computers as if they were fellow humans. The results of this study show that there are limits to this theory, although Belpaeme says he and his colleagues were not surprised by this.

“The results with the adults were what we expected,” he says. “The robots we used don’t have enough presence to be influential. They’re too small, too toylike.” Adult participants quizzed after the test told the researchers just as much, saying that they assumed the robots were malfunctioning or weren’t advanced enough to get the question right. Belpaeme suggests that if they tried again with more impressive-looking robots (“Like if we said ‘This is Google’s latest AI’”), then the results might be different.

Although the CASA theory was not validated in this particular test, it’s still a good predictor of human behavior when it comes to robots and computers. Past studies have found that we’re more likely to enjoy interacting with bots that we perceive as having the same personality as us, just as with humans, and we readily stereotype robots based on their perceived gender (which is a topic that’s become particularly relevant in the age of the virtual assistant).

These social instincts can also affect our behavior. We find it harder to turn off robots if they’re begging us not to, for example. Another study published today in Science Robotics found we’re better at paying attention if we’re being watched by a robot we perceive as “mean.”

All this means that although it’s children who seem to give in more easily to robotic peer pressure, adults aren’t exactly immune. Researchers say this is a dynamic we need to pay attention to, especially as robots and AI get more sophisticated. Think about how the sort of personal data that got shared during the Cambridge Analytica scandal could be used to influence us when combined with social AI. “There’s no question about it,” says Belpaeme. “This technology will be used as a channel to persuade us, probably for advertising.”

This robot peer pressure could be used for good as well as evil. For example, AI systems in educational settings can teach children good learning habits, and there’s evidence that robots can help develop social skills in autistic children. In other words, although humans can be influenced by robots, it’s still humans who get to decide how.