
New study finds it’s harder to turn off a robot when it’s begging for its life

The robot told test subjects it was scared of the dark and pleaded ‘No! Please do not switch me off!’

A Nao robot, the same model used by the researchers in their experiment.
Photo by Alexander Koerner/Getty Images

Robots designed to interact socially with humans are slowly becoming more and more common. They’re appearing as receptionists, tour guides, security guards, and porters. But how good are we at treating these robots as robots? A growing body of evidence suggests not good at all. Studies have repeatedly shown we’re extremely susceptible to social cues coming from machines, and a recent experiment by German researchers demonstrates that people will even refuse to turn a robot off — if it begs for its life.

In the study, published in the open access journal PLOS One, 89 volunteers were recruited to complete a pair of tasks with the help of Nao, a small humanoid robot. The participants were told that the tasks (answering a series of either/or questions, like “Do you prefer pasta or pizza?”, and organizing a weekly schedule) would improve Nao’s learning algorithms. But this was just a cover story: the real test came after the tasks were completed, when the scientists asked participants to turn off the robot.

A photo of the experiment’s setup. Participants had to complete a series of tasks with the Nao robot before being asked to turn the machine off.
Credit: Aike Horstmann et al

In roughly half of the trials, the robot protested, telling participants it was afraid of the dark and even begging: “No! Please do not switch me off!” When this happened, volunteers hesitated or refused to turn the bot off. Of the 43 volunteers who heard Nao’s pleas, 13 refused outright, and the remaining 30 took, on average, twice as long to comply as those who did not hear the desperate cries at all. (Just imagine that scene from The Good Place for reference.)

When quizzed about their actions, participants who refused to turn the robot off gave a number of reasons for doing so. Some said they were surprised by the pleas; others, that they were scared they were doing something wrong. But the most common response was simply that the robot said it didn’t want to be switched off, so who were they to disagree?

As the study’s authors write: “Triggered by the objection, people tend to treat the robot rather as a real person than just a machine by following or at least considering to follow its request to stay switched on.”

A selection of reasons participants gave for not turning off the robot in the study.
Credit: Aike Horstmann et al

This finding, they say, builds on a larger theory known as “the media equation.” This was first established in a 1996 book of the same name by two Stanford researchers: Byron Reeves and Clifford Nass. Reeves and Nass theorized that humans tend to treat non-human media (which includes TV, film, computers, and robots) as if they are human. We talk to machines, reason with our radios, and console our computers, said Reeves and Nass.

Various studies since have shown how this principle affects our behavior, especially when it comes to interactions with robots. We’re more likely to enjoy interacting with a bot that we perceive as having the same personality type as us, for example, and we’ll happily associate machines with gender stereotypes. We observe what’s known as the “rule of reciprocity” when interacting with robots (meaning we tend to be nice to them when they’re nice to us) and will even take orders from one if it’s presented as an authority figure.

“Now and in future,” wrote a group of scholars on the topic in 2006, “there will be more similarities between human-human and human-machine interactions than differences.”

And this isn’t the first time we’ve tested the “begging computer does not want to die” scenario. Similar research was carried out in 2007 with a robot resembling a cat that also pleaded for its life. The observing scientists instructed participants to turn it off, and all of them eventually did, but not before going through a serious moral struggle.

In a video clip of the experiment, you can see the robot asking a volunteer: “You’re not really going to switch me off, are you?” The human says: “Yes I will!” — while failing to do so.

The new study, which was published July 31st, builds on this earlier work by using a greater number of participants. It also tested whether it made a difference if the robot was shown to have social skills before it asked not to be turned off. In some of the trials, Nao expressed opinions to the human volunteers, told jokes, and shared personal information. Surprisingly, this social behavior did not have a huge effect on whether the volunteers “spared” Nao.

So what does all this mean for our machine-filled future? Are we destined to be manipulated by socially sophisticated bots that know how to push our buttons? It’s certainly something to be aware of, says Aike Horstmann, a PhD student at the University of Duisburg-Essen who led the new study. But, she says, it’s not a huge threat.

“I hear this worry a lot,” Horstmann tells The Verge. “But I think it’s just something we have to get used to. The media equation theory suggests we react to [robots] socially because for hundreds of thousands of years, we were the only social beings on the planet. Now we’re not, and we have to adapt to it. It’s an unconscious reaction, but it can change.”

In other words: get used to turning off machines, even if they don’t appear to like it. They’re silicon and electricity, not flesh and blood.
