If artificial intelligence goes off the rails, as many philosophers and tech entrepreneurs seem to think it will, it could result in rampant activity beyond human control. So some researchers think it is important to develop ways to "interrupt" AI programs, and to ensure the AI can't learn to prevent those interruptions. A study conducted in 2016 by Google-owned AI lab DeepMind and the University of Oxford sought to create a framework for handing control of AI programs over to human beings: in other words, a "big red button" to keep the software in check.
"If such an agent is operating in real-time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions — harmful either for the agent or for the environment — and lead the agent into a safer situation," reads the team's paper, titled "Safely Interruptible Agents" and published online with the Machine Intelligence Research Institute. A common example is a factory robot that must be overridden before it injures a worker or damages the machine.
"However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions, for example by disabling the red button — which is an undesirable outcome," the paper adds. Calling an AI that disables its own shutdown mechanism an "undesirable outcome" is putting it lightly. The paper goes into considerable mathematical detail about how such an interruption scheme might work. The researchers suggest it can be achieved by adjusting the reward signals used to train self-learning agents, so that an agent never gains anything by resisting or avoiding interruption.
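The intuition can be sketched in code. The toy example below is an illustration, not the paper's formal construction: the five-state environment, the 10% interruption rate, and all hyperparameters are invented for demonstration. It uses tabular Q-learning, whose update bootstraps from the best possible next action regardless of which action was actually taken, so an operator forcing a safe action now and then does not distort the values the agent learns — roughly the property the paper identifies in off-policy learners.

```python
import random

random.seed(0)  # deterministic run for reproducibility

N_STATES = 5
ACTIONS = [0, 1]             # 0 = stay put (safe no-op), 1 = advance
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

# Q-table: estimated discounted return for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy chain environment: advancing moves right; the last state pays 1."""
    next_state = min(state + action, N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for _episode in range(2000):
    state = 0
    for _t in range(20):
        intended = choose_action(state)
        # The operator presses the "big red button" 10% of the time,
        # overriding the agent's choice with the safe no-op action.
        interrupted = random.random() < 0.10
        taken = 0 if interrupted else intended
        next_state, reward = step(state, taken)
        # Off-policy Q-learning update: it bootstraps from
        # max_a Q(next_state, a) no matter which action was forced,
        # so interruptions do not bias the learned values and the
        # agent has no incentive to learn to avoid the button.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, taken)] += ALPHA * (reward + GAMMA * best_next - Q[(state, taken)])
        state = next_state
```

An on-policy learner fed the interrupted trajectories directly would, by contrast, fold the operator's overrides into its value estimates, which is the kind of feedback loop the paper warns can teach an agent to fight its off-switch.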
As more tech companies get involved with artificial intelligence, breakthroughs in AI have begun occurring at a faster clip. DeepMind, whose research scientist Laurent Orseau co-authored the above paper, is responsible for developing AlphaGo. That AI system plays Go, the ancient Chinese board game, at a level exceeding that of the game's most skilled human players. Meanwhile, every major tech company with substantial investments in cloud computing — including Facebook, Amazon, Google, and Microsoft — is working to develop AI in various capacities.
Researchers are banding together to prevent AI missteps
Amid the growing popularity of the technology, numerous organizations and non-profits have sprung up to study its effects and ensure AI has a positive impact. Those include the Machine Intelligence Research Institute and philosopher Nick Bostrom's Future of Humanity Institute. Even Tesla and SpaceX CEO Elon Musk has heeded the warnings about AI. Musk last year co-founded OpenAI, a non-profit dedicated to preventing malevolent software and ensuring the technology benefits humanity. At Recode's Code Conference this week, Musk insinuated that one tech company, Google, worried him more than any other when it came to self-learning software.