
How science fiction is training us to ignore the real threats posed by AI


Clara Labs’ Maran Nelson on why movies like Her and Ex Machina miss the point



Illustration by Alex Castro / The Verge

CEOs of artificial intelligence companies usually seek to minimize the threats posed by AI rather than play them up. But on this week’s episode of Converge, Clara Labs co-founder and CEO Maran Nelson tells us there is real reason to be worried about AI, and not for the reasons that science fiction has trained us to expect.

Movies like Her and Ex Machina depict a near future in which anthropomorphic artificial intelligences manipulate our emotions and even commit violence against us. But threats like Ex Machina’s Ava will require several technological breakthroughs before they’re even remotely plausible, Nelson says. And in the meantime, actual state-of-the-art AI — which uses machine learning to make algorithmic predictions — is already causing harm.

“Over the course of the next five years, as companies continue to get better and better at building these technologies, the public at large will not understand what it is that is being done with their data, what they’re giving away, and how they should be scared of the ways that AI is already playing in and with their lives and information,” Nelson says.

Predictive models like the ones that decide which articles you see contributed to the spread of misinformation on Facebook, Nelson says, and similar risk models helped fuel the 2008 financial crisis. And because these algorithms operate invisibly, unlike Ava and other AI characters in fiction, they’re more pernicious. “It’s important always to give the user greater control and greater visibility than they had before you implemented systems like this,” Nelson says. And yet, increasingly, AI is designed to make decisions for users without asking them first.

Clara’s approach to AI is innocuous to the point of being dull: it makes a virtual assistant that schedules meetings for people. (This week, it added a bunch of integrations designed to position it as a tool to aid in hiring.) But even seemingly simple tasks still routinely trip up AI. “The more difficult situations that we often interact with are, ‘Next Wednesday would be great — unless you can do in-person, in which case we’ll have to bump it a couple of weeks based on your preference. Happy to come to your offices.’”

Even a state-of-the-art AI can’t process this message with a high degree of confidence — so Clara hires people to check the AI’s work. It’s a system known as “human in the loop” — and Nelson says it’s essential to building AI that is both powerful and responsible.
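Mechanically, a human-in-the-loop system amounts to a confidence check: the assistant acts on its own only when its model is sure of what a message means, and otherwise hands it to a person. Here is a minimal, purely illustrative Python sketch of that routing; the function names, threshold, and toy heuristic are all assumptions for the example, not a description of Clara’s actual system.

```python
# Illustrative "human in the loop" routing: act autonomously only when the
# model is confident, otherwise escalate to a human reviewer.

CONFIDENCE_THRESHOLD = 0.9  # hypothetical cutoff; real systems tune this per task

def classify_intent(message: str) -> tuple[str, float]:
    """Stand-in for a trained intent classifier: returns (intent, confidence).
    A toy heuristic here; a production system would call a real model."""
    text = message.lower()
    if "next wednesday" in text:
        return ("propose_time", 0.95)
    if "in-person" in text:
        return ("negotiate_location_and_time", 0.55)  # ambiguous, low confidence
    return ("unknown", 0.10)

def queue_for_human(message: str, guess: str) -> str:
    """Stand-in for sending the message to a human reviewer's queue."""
    print(f"Escalating to reviewer: {message!r} (model guessed {guess!r})")
    return "pending_human_review"

def handle(message: str) -> str:
    intent, confidence = classify_intent(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return intent  # high confidence: the AI acts on its own interpretation
    return queue_for_human(message, intent)  # low confidence: a person decides

print(handle("Next Wednesday would be great."))
print(handle("Unless you can do in-person, we'll have to bump it a couple of weeks."))
```

One appealing property of this design is that the messages a human corrects can later become training data, so the escalation path is also how the model improves over time.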

Nelson sketches out her vision for a better kind of AI on Converge, an interview game show where tech’s biggest personalities tell us about their wildest dreams. It’s a show that’s easy to win, but not impossible to lose — because, in the final round, I finally get a chance to play and score a few points of my own.

You can read a partial, lightly edited transcript with Nelson below, and you’ll find the full episode of Converge above. You can listen to it here or anywhere else you find podcasts, including Apple Podcasts, Pocket Casts, Google Play Music, Spotify, our RSS feed, and wherever fine podcasts are sold.

Maran Nelson: My big idea is that science fiction has really hurt the chances that we’re going to get scared of AI when we should.

Casey Newton: We’ve seen a lot of movies and TV shows where there is a malevolent AI, so I want you to unpack that for us a little bit. What do you mean?

Almost every time people have played with the idea of an AI, what it will look like and what it means for it to be scary, it’s been tremendously anthropomorphized. You have this thing: it comes, it walks at you, and it sounds like you’re probably going to die, or it makes it very clear that there’s some chance your life is in jeopardy.

Yes.

The thing that scares me the most about that is not the likelihood that in the next five years something like this will happen to us, but the likelihood that it will not. Over the course of the next five years, as companies continue to get better and better at building these technologies, the public at large will not understand what it is that is being done with their data, what they’re giving away, and how they should be scared of the ways that AI is already playing in and with their lives and information.

So the idea of HAL from 2001 is distracting people from what the actual threats are.

Very much so.

I think another one that people don’t think about as much is the 2008 financial collapse. There you have another situation where people are building risk models about what they can do with money. Those risk models are, in effect, like the ones powering Facebook’s News Feed and all of these other predictive models. And they’re giving them to bankers and saying, “Hey, bankers, it seems like maybe these securitized loans in the housing market are going to be fine.” It’s not going to be fine. It’s not at all! They’re dealing with a tremendous amount of uncertainty, and at the end of the day in both of these cases, as with News Feed, as with the securitized loans, it is the consumers who end up taking the big hit because the corporation itself has no real accountability structure.

One of the ideas you’re getting at is that companies of all sizes sort of wave AI around as a magic talisman, and the moment they say, “Well, don’t worry, we put AI on this,” we’re all supposed to relax and say, “Oh, well the computers have this handled.” But what you’re pointing out is that actually these models can be very bad at predicting things. Or they predict the wrong things.

Absolutely, and I think that the reality is just the opposite. When you start to interact with consumers and have a product like ours that is largely AI, there is a real fear factor. What does that mean? What does it mean that I’m giving up or giving away? It’s important always to give the user greater control and greater visibility than they had before you implemented systems like this.
