Isaac Asimov's Three Laws of Robotics (technically four) have been a staple of science fiction for decades. With recent advances in artificial intelligence, though, computer scientists and tech companies are beginning to seriously consider the rules we actually need to protect ourselves from future robots and AI. Last week, researchers from Google published a scientific paper outlining five key challenges for making robots safe to work with, and yesterday, in an article for Slate, Microsoft CEO Satya Nadella laid out the six "principles and goals" he believes AI research must follow to keep society safe.
Nadella is worried about the social impact of robots
Nadella's goals aren't a direct analog of Asimov's Laws. The latter are rules that robots must obey, while the Microsoft chief is principally speaking to the industry — to the computer scientists who are building AI systems and working with machine learning. As such, his rules are more about the potential social impact of artificial intelligence than about stopping a robot coming at you. For that reason they may sound more boring than Asimov's (less dramatic, certainly), but they're much more important.
Here are Nadella's principles:
- AI must be designed to assist humanity. Nadella says that machines that work alongside humans should do "dangerous work like mining" but still "respect human autonomy."
- AI must be transparent. "We want not just intelligent machines but intelligible machines," says Nadella. "People should have an understanding of how the technology sees and analyzes the world."
- AI must maximize efficiencies without destroying the dignity of people. "We need broader, deeper, and more diverse engagement of populations in the design of these systems. The tech industry should not dictate the values and virtues of this future."
- AI must be designed for intelligent privacy. Nadella asks for "sophisticated protections that secure personal and group information."
- AI must have algorithmic accountability. So that "humans can undo unintended harm."
- AI must guard against bias. "Proper and representative research" should be used to make sure AI doesn't discriminate against people (like humans do).
Fittingly, Nadella's goals are as full of ambiguity as Asimov's own Three Laws. But while the loopholes in the latter were there to add intrigue to short stories (a lot of Asimov's fiction is concerned with the logical technicalities of how a particular robot bypassed the Laws and knocked off its owner), the vagueness of Nadella's principles reflects the messy business of building robots and AI that deeply affect people's lives.
Consider the numerous mentions of "bias" and "diversity," for one. It's a truism that the tech world often struggles to consider diverse viewpoints (those that are non-white and / or non-male), and with machine learning systems that make decisions on behalf of humans, there's a good chance that engineers' ignorance — or even prejudice — will end up hard-coded into the system. To make sure this doesn't happen, computer scientists need to guard against bias (Nadella's sixth principle), but the systems they make should also be easy enough to understand so others can look for bias (his second principle) and then undo any harm (the fifth principle).
Avoiding bias is one big problem; stopping robots from wrecking the economy is another
Another theme in Nadella's six goals (and his accompanying essay) is the effect of AI on the economy — otherwise known as "what happens when a robot takes your job?" Nadella's statement that AI must "maximize efficiencies without destroying the dignity of people" seems targeted at this, and later on in his op-ed he asks: "Will automation lead to greater or lesser equality?" There's no clear answer, he suggests, but there is an urgent need to think about the future.
Basically, there's a lot of overlap between Nadella's different rules, but the overarching theme is simple: let's not fuck people over. In our rush to build autonomous machines and systems, we shouldn't forget that these systems are supposed to help us, not hurt us. Just like Asimov said.