Sophia the robot was made a citizen of Saudi Arabia last month, but a lot of people weren’t happy about it.
Some noted the grim irony of a robot receiving ‘rights’ in a country where women were only recently allowed to drive. Others said it set a bad precedent for how we might treat robots in the future. (AI ethicist Joanna Bryson told The Verge the stunt was “obviously bullshit.”) Some were annoyed about the perception of Sophia itself — a robot that’s also a media star, with magazine cover shoots, talk show appearances, and even a speech to the UN. Experts in the field sometimes decry Sophia as emblematic of AI hype, and say that although the bot is presented as being a few software updates away from human-level consciousness, it’s more about illusion than intelligence.
For Ben Goertzel, chief scientist at Hanson Robotics, the company that made Sophia, the situation is complicated, to say the least. In interviews with The Verge, Goertzel said it was “not ideal” that some thought of Sophia as having artificial general intelligence, or AGI (the industry term for human-equivalent intelligence), but he acknowledged that the misconception did have its upsides.
“If I show them a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby.”
“For most of my career as a researcher, people believed that it was hopeless, that you’ll never achieve human-level AI.” Now, he says, half the public thinks we’re already there. And in his opinion it’s better to overestimate, rather than underestimate, our chances of creating machines cleverer than humans. “I’m a huge AGI optimist, and I believe we will get there in five to ten years from now. From that standpoint, thinking we’re already there is a smaller error than thinking we’ll never get there,” says Goertzel.
He admits that Sophia’s presentation annoys experts, but defends the bot by saying it conveys something unique to audiences. “If I tell people I’m using probabilistic logic to do reasoning on how best to prune the backward chaining inference trees that arise in our logic engine, they have no idea what I’m talking about. But if I show them a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby and viable.” He says there’s a more obvious benefit, too: in a world where AI talent and interest are sucked toward big tech companies in Silicon Valley, Sophia can operate as a counterweight: something that grabs attention, and with that, funding. “What does a startup get out of having massive international publicity?” he says. “This is obvious.”
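For readers curious about that mouthful, here is a minimal, hypothetical sketch of what backward chaining over a toy rule base looks like in general. The rules, facts, and depth cutoff below are invented purely for illustration; this is not Hanson Robotics’ logic engine, which layers probabilistic reasoning and far more sophisticated pruning on top of the basic idea.

```python
# Toy backward chaining: to prove a goal, work backward from conclusions
# to the premises that would support them, until you hit known facts.
# Everything here is illustrative, not Hanson Robotics' actual code.

RULES = {
    # conclusion: premises that together imply it
    "greet_user": ["face_detected", "user_is_smiling"],
    "user_is_smiling": ["smile_score_high"],
}
FACTS = {"face_detected", "smile_score_high"}

def prove(goal, depth=0, max_depth=5):
    """Try to prove `goal` by chaining backward through RULES."""
    if goal in FACTS:
        return True
    if depth >= max_depth:  # crude pruning: cut off runaway inference branches
        return False
    premises = RULES.get(goal)
    if premises is None:
        return False
    return all(prove(p, depth + 1, max_depth) for p in premises)

print(prove("greet_user"))  # True: both premises trace back to known facts
```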
It’s fair to say that Sophia isn’t unintelligent, either. As Goertzel, who is currently building a “decentralized market for AI,” points out, it makes use of a wide range of AI methods. There’s face tracking, emotion recognition, and robotic movements generated by deep neural networks. And although most of Sophia’s dialogue comes from a simple decision tree (the same tech used by chatbots; when you say X, it replies Y), what it says is integrated with these other inputs in a unique fashion. It’s not groundbreaking in the way that work coming out of companies like DeepMind or university labs is, but it’s not a toy.
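To make the “when you say X, it replies Y” idea concrete, here is a minimal, hypothetical sketch of a scripted reply table blended with one extra perception signal. The phrases and the emotion label are invented for illustration; this is not Sophia’s actual dialogue system, only a rough picture of how canned replies can be combined with other inputs.

```python
# Toy scripted dialogue: a lookup table of replies, adjusted by a second
# input channel (a detected emotion). Purely illustrative.

DIALOGUE_TREE = {
    "hello": "Hello! I'm a robot. How are you today?",
    "how old are you": "I was activated fairly recently, so I'm still quite young.",
    "goodbye": "Goodbye! It was lovely talking with you.",
}

def reply(user_text: str, detected_emotion: str = "neutral") -> str:
    """Look up a scripted reply, then adjust it using the perceived emotion."""
    scripted = DIALOGUE_TREE.get(user_text.strip().lower(),
                                 "That's interesting. Tell me more.")
    if detected_emotion == "sad":
        return scripted + " You seem a little down. Is everything all right?"
    return scripted

print(reply("Hello", detected_emotion="sad"))
```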
“None of this is what I would call AGI, but nor is it simple to get working,” says Goertzel. “And it is absolutely cutting-edge in terms of dynamic integration of perception, action, and dialogue.”
The end result is undeniably engaging, and despite Sophia’s often stilted and awkward conversation, viewers seem to be left with a sense of something more. Much of this impact can be credited to the work of Hanson Robotics founder David Hanson, who was for many years a Walt Disney Imagineer, building sculptures for the company’s theme parks. It’s Hanson, too, who often exaggerates Sophia’s capacity for consciousness, telling Jimmy Kimmel earlier this year, for example, that the robot was “basically alive.”
While we can easily get lost arguing the philosophy and semantics of what is and is not “alive,” it’s safe to say that this statement is grossly misleading.
When I ask Goertzel how he squares his desire to have people properly understand Sophia’s capabilities with Hanson’s comments, he says: “Well, David is his own human being, and he always speaks from his heart. He doesn’t always phrase things the exact way I would. But I would say that he’s an artist and a sculptor, and he came at this because he wanted to make his sculptures come alive. He’s made sculptures that move and speak, and reason, and do abduction and deduction. So in his view, speaking from his heart, he’s making his sculptures come alive.” He adds: “I don’t think David ever says something just from a marketing or PR standpoint. He’s the most heartfelt guy I know.”
And when asked about comments from academics like Bryson, who suggest that giving robots rights degrades human ones, Goertzel strongly disagrees. He says Saudi Arabia’s decision to give Sophia citizenship shows the country’s desire to be more progressive. “Empirically, in Saudi Arabia, the granting of a robot rights seems to be correlated with increases rather than decreases in general human rights,” he says, pointing to recent changes like allowing Jewish people to work in the country and giving women the right to drive.
“How does it affect people if they think you can have a citizen that you can buy?”
Critics might reply that giving one robot “rights” in a ceremony designed to draw attention to a lavishly funded tech conference doesn’t indicate any particular ethical probity. And, in fact, in a country where homosexuality is punishable by death and migrant workers are kept in slave-like conditions, some might interpret it as just the opposite: as further evidence of “rights” being treated without due regard. As Bryson put it: “How does it affect people if they think you can have a citizen that you can buy?”
Goertzel says these are important questions, and he hopes that Hanson Robotics and Sophia are triggering debate. It’s certainly something he himself enjoys. When I ask if he thinks Siri is deserving of citizenship, he says it would be “pretty funny,” but points out qualities that Sophia has and Siri doesn’t, like uniqueness and a physical presence. “To what extent are these characteristics necessary for being considered ‘deserving of rights?’” he asks. “[It’s] an interesting direction for thinking.”
Goertzel says that at the current rate of progress, society will likely be forced to re-think concepts like rights, and perhaps even democracy. He gives the (extremely unlikely and easily avoidable) example of someone 3D-printing a bunch of robots with voting rights in order to swing an election. “If you programmed them to vote in a certain way, and they all autonomously work the same way, then suddenly you have a dictatorship by robots,” he says.
If you believe super-intelligent AIs are just around the corner, then these would be pressing questions. But there are enough real and current threats in the field of artificial intelligence — from bias in the algorithms handing out prison sentences to AI-powered surveillance states — to make these hypothetical problems seem like indulgent sideshows. Many researchers and experts say what’s currently needed is a better public understanding of AI: of its current capabilities and limitations. When it comes to answering that call, Sophia seems to be doing more harm than good.
For Goertzel and Hanson Robotics, there are other factors at play. When I ask why Sophia keeps on getting media appearances and making headlines, Goertzel’s answer is simple: “People love [them], they both disturb and enchant people. Whatever else they are, they are fantastic works of art.”