Google's AI researchers say these are the five key problems for robot safety

The robots are perfectly safe; it's you Google's worried about


Google is worried about artificial intelligence. No, not that it will become sentient and take over the world, but that, say, a helpful house robot might accidentally skewer its owner with a knife. The company's latest AI research paper delves into this issue under the title "Concrete Problems in AI Safety." Really, though, that's just a fancy way of saying "How Are We Going To Stop These Terror-Bots Killing Us All In Our Sleep."

Five key problems for future robot manufacturers

To answer this brain-tickler, Google's computer scientists have landed on five "practical research problems" — key issues that programmers will need to consider before they start creating the next Johnny Five. For PR reasons the paper frames these problems as they might relate to a hypothetical cleaning robot, but really, they're meant to apply to any real-world AI agent controlling a robot that interacts with humans.

The problems are as follows:

  • Avoiding Negative Side Effects: how do you stop a robot from knocking over a bookcase in its zealous quest to hoover the floor?
  • Avoiding Reward Hacking: if a robot is programmed to enjoy cleaning your room, how do you stop it from messing up the place just so it can feel the pleasure of cleaning it again?
  • Scalable Oversight: how much decision making do you give to the robot? Does it need to ask you every time it moves an object to clean your room, or only if it's moving that special vase you keep under the bed and never put flowers in for some reason?
  • Safe Exploration: how do you teach a robot the limits of its curiosity? Google's researchers give the example of a robot that's learning where it's allowed to mop. How do you let it know that mopping new floors is fine, but that it shouldn't stick the mop in an electrical socket?
  • Robustness to Distributional Shift: how do you make sure robots respect the space they're in? A cleaning robot let loose in your bedroom will act differently than one that is sweeping up in a factory, but how is it supposed to know the difference?
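The reward-hacking problem is easier to see in code than in prose. Here's a toy sketch (my illustration, not from the paper) of how a reward defined as "mess removed" can be gamed by an agent that is also capable of creating mess:

```python
# Toy illustration (not from the paper): a reward defined as "mess removed"
# can be gamed by an agent that is also able to create mess.

def mess_removed_reward(mess_before: int, mess_after: int) -> int:
    """Naive reward: one point per unit of mess cleaned."""
    return mess_before - mess_after

# An honest cleaner earns a bounded reward for a finite amount of mess...
honest_total = mess_removed_reward(mess_before=5, mess_after=0)

# ...but a reward hacker can knock over a plant (mess +5), clean it up,
# and repeat, earning unbounded reward while the room is never cleaner.
hacker_total = 0
mess = 0
for _ in range(3):
    mess += 5                      # deliberately create mess
    hacker_total += mess_removed_reward(mess, 0)
    mess = 0                       # clean it back up

print(honest_total)  # 5
print(hacker_total)  # 15, and growing with every loop
```

The fix isn't obvious: any reward you can write down as a simple formula is a proxy for what you actually want, and a sufficiently capable optimizer will find the gap.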

So, these aren't quite as simple as Isaac Asimov's Three Laws of Robotics, but we shouldn't expect that. After all, these are questions, not answers.

Some of these problems seem quite straightforward. With the last one, for example, you might just want to program a bot that has a number of preset modes. When it finds itself in an industrial setting (and it'll know because you'll tell it), it can just switch to Factory Mode and maybe go a bit harder with the sweeping.
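That preset-mode idea is simple enough to sketch in a few lines (all names here are hypothetical illustrations, not Google's design):

```python
# Minimal sketch of the preset-mode idea: the operator tells the robot
# which environment it's in, and its behavior parameters switch accordingly.
# All names are hypothetical illustrations, not from the paper.

MODES = {
    "home":    {"sweep_force": 0.3, "ask_before_moving_objects": True},
    "factory": {"sweep_force": 0.9, "ask_before_moving_objects": False},
}

class CleaningBot:
    def __init__(self, mode: str = "home"):
        self.set_mode(mode)

    def set_mode(self, mode: str) -> None:
        if mode not in MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode
        self.params = MODES[mode]

bot = CleaningBot()          # starts gentle, in "home" mode
bot.set_mode("factory")      # operator flips the switch
print(bot.params["sweep_force"])  # 0.9
```

The catch, of course, is that hard-coded modes only cover environments you anticipated in advance — which is precisely what makes distributional shift a hard problem rather than a settings menu.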

But other issues are so context-heavy that it's all but impossible to preprogram responses for every scenario. Consider the "safe exploration" problem, which the paper glosses as "taking actions that don't seem ideal [...] but which help the agent learn about its environment." Robotic agents will certainly need to undertake actions outside their frame of reference to learn about the world, but how do you mitigate potential harm when they're doing so?


The paper suggests a range of methods, including creating simulations that robot agents can explore before hitting the real world; "bounded exploration" rules that limit a robot's movements to a predefined space, one that has perhaps been robo-proofed for possible mistakes; and good, old-fashioned human oversight — getting a bot to check with its handler before taking actions outside its sphere of reference.
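Two of those mitigations — a bounded region for free exploration, plus a human check for anything outside it — could be sketched like this (a hypothetical illustration, not the paper's implementation):

```python
# Hypothetical sketch of two mitigations the paper mentions:
# a "robo-proofed" region the robot may explore freely, and a
# human-approval gate for any action outside that region.

SAFE_ZONE = {(x, y) for x in range(10) for y in range(10)}  # predefined space

def human_approves(action) -> bool:
    """Stand-in for asking the handler; a real system would block on input."""
    return False  # conservative default: unapproved actions are refused

def try_move(position, target):
    if target in SAFE_ZONE:
        return target              # free exploration inside the bounds
    if human_approves(("move", target)):
        return target              # handler signed off on the excursion
    return position                # refuse and stay put

pos = (0, 0)
pos = try_move(pos, (3, 4))    # inside the safe zone -> moves
pos = try_move(pos, (50, 50))  # outside -> needs approval, refused
print(pos)  # (3, 4)
```

Even this crude version shows the trade-off: the tighter the bounds and the more often you ask a human, the safer the robot — and the less it can actually learn on its own.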

As you can imagine, each of these approaches has its own benefits and drawbacks, and Google's paper isn't about suggesting a breakthrough solution — it's just outlining the wider issues. But as the company notes in its research blog, it's practical challenges like these that are the real robot safety risks.

Worry about sentient AI if you like, but problems like these are more immediate

Although it's fine for figures like Elon Musk and Stephen Hawking to raise awareness about the dangers of artificial intelligence, the majority of computer scientists agree that these problems are far, far away. Before we worry about AI becoming murderously sentient, we need to make sure robots that might work in factories and homes are clever enough not to accidentally kill or maim humans. (It's happened already, and it will definitely happen again.)

Google's stake in all this is interesting too. The company might be offloading Boston Dynamics, the ambitious robotic hardware maker it purchased in 2013, but it continues to pour money and resources into all sorts of AI projects. It's this work, along with research from universities and rival companies, that will provide the foundation for the software-brains that will animate physical robots. It's not a trivial task to make sure these brains are thinking straight.