Without the fanfare backing efforts like Jurassic World or the big Marvel Studios tentpoles, Ex Machina might be the most important sci-fi film of the year. Making its stateside debut at SXSW this week, the movie, directed by 28 Days Later screenwriter Alex Garland, is a heady psychological thriller that takes our anxieties about present-day technology — namely, clandestine data collection and killer robots — and tells us we’re not ready for what comes next. But it’s not because of government overreach or the imminent singularity. Here, it’s because our collective assumptions about technology and how we relate to it as human beings could betray us in the end — and we won’t see it coming.
The action in Ex Machina takes place in the not-so-distant future, where a company named BlueBook has achieved a kind of technological ubiquity the likes of which Google can only dream of. From there, the film homes in on three characters: Caleb, played by Domhnall Gleeson, is a BlueBook programmer and everyman, making him the ostensible protagonist of the narrative. Nathan, played by Oscar Isaac, is the eccentric CEO of BlueBook, who invites Caleb — like some sinister Willy Wonka — to his forest home to help him prove he’s made a breakthrough in artificial intelligence. The breakthrough is Ava, a gynoid played by Swedish actress Alicia Vikander who, save for her exposed mechanical internals, seems like a normal young woman, and thus the perfect candidate to pass the Turing test. Pretty soon, however, it’s clear that Nathan’s version of the test is much more than just base thinking and feeling, and things take a turn when Ava lets on that Nathan shouldn’t be trusted.
Garland makes AI — and, by extension, us — inscrutable, untrustworthy, and scary
In Ex Machina, Garland explores our fascination with and distrust of technology’s potential by making it human. In doing so, he makes it — and, by extension, us — inscrutable, untrustworthy, and scary, and he’s not afraid to admit he’s somewhat paranoid about our future. Still, it’s not for the reasons you might expect. I spoke with him earlier this week about the movie.
Mild spoilers ahead.
Starting broadly, Ex Machina blends a lot in terms of robotics, the ethical problems in Silicon Valley, and what I saw as commentary on men and women in both those areas. What are your feelings on where we are now and what’s not far off?
Well, I think probably where we are at the moment is in a slightly difficult stage, because I think we’re not entirely sure what our relationship with technology is, and we’re not comfortable with it. I suspect that’s why there’s a bunch of narratives around at the moment which kind of pry that anxiety out. I suspect, even though I’ve been working on a movie which is ostensibly concerned with stuff like strong artificial intelligence and things like that, in a way it’s got more to do with big tech companies than any sense that computers are gonna start thinking. It has to do with Edward Snowden, [questioning] exactly what information tech companies have about us, and what we’ve given up of ourselves without realizing.
I suppose there’s this tension between people wanting AI to be a good thing and our natural distrust of the companies creating it, the Googles and Facebooks of the world. Does that worry you?
Well, those companies are involved in trying to develop very complicated AI. And they definitely have an intention, which is to create machines that have similar qualities to us. So not just processing abilities, but feeling abilities. In terms of how close that is, it’s really, really hard to judge, except to say it’s not very close. It’s not imminent. It’s got something in common, I suspect, with a cure for cancer, inasmuch as you can make progress, but the progress tends to also reveal how far you are away from the end goal. So it’s not quite as simple as two steps forward and one back. It may even be two steps forward and two steps back. Whatever it is, it’s not something that’s just around the corner. Unless there is someone somewhere working on something that I and the people I’ve spoken to don’t know anything about.
Let’s talk about Nathan, the CEO of BlueBook. He’s basically any tech industry CEO, but taken to a logical extreme. Does he resemble anyone particular in your mind?
No, he wasn’t aimed at, like, Mark Zuckerberg or anything like that. It was more to do with a set of games that were being set up and played with the audience in terms of where their allegiances lie and why their allegiances lie in different places. And the intention is that things shift around. The basic idea of that character is that we would automatically distrust him, because we do distrust the CEOs of large companies. On top of that, he gives us quite a lot of reasons to distrust him. The question is if [that distrust] interferes with our ability to hear what it is he’s actually saying, and whether what he’s saying is right or wrong.
There’s a conversation that happens right in the middle of the film which goes specifically to this point, where [Nathan] sort of teases Caleb by sounding illiberal, [pushing] Caleb into thinking that he’s racist. And I think he’s playing that game with the audience, too, and I think there’s supposed to be the secondary question of, to what extent are you seeing what this guy is actually like and to what extent is he presenting a version of himself, in order to be something that Caleb feels he must rescue this apparently female robot from. So, I was more concerned with those kinds of games than specifically aiming this at any human CEO or something.
That kind of manipulative back-and-forth pervades the entire movie. Would you say we’re complicit in a similar kind of manipulative game with tech companies today?
We are, although it’s a complicated thing. For example, take the case of Snowden and the NSA revelations — [which is] an incredibly important thing. As a check and balance, at least one way we can reassure ourselves about what Snowden revealed to us is that the big organizations he is blowing the whistle on are government organizations, and in governments within Western democracy, there is actually the ability to vote them out. So if the citizens get sufficiently angry with the government, they can get rid of them. With the big tech companies, I’m not actually sure that’s true. To decide to not be complicit with them would mean not having a mobile phone, not having a credit card, not having a computer, not having a tablet, and so on. In some respects, I think we are complicit, and in others I think we’re helpless, because realistically we may not have a choice but to involve ourselves with these things.
Would you say that Caleb becomes something of a stand-in for Snowden in this film then, at least at face value?
Kind of, but I was playing a sort of reverse game with Caleb, which is that he kind of stands for us in some respects. What is happening to Caleb via Nathan and also via this robot is pretty much happening to the audience, and we should pretty much be keeping track with him in terms of where we feel our allegiances lie and where our fears lie. On top of that, there is the secondary thing — whereas Nathan often sounds unreasonable but if you take a cold, hard look at what he’s saying, it might actually be true; with Caleb I wanted him to be saying stuff that sounded fair and reasonable but if you took a cold, hard look at it, it might not be true. That’s part of the game that’s being played with the audience. Some of it has to do with tech stuff, some of it has to do with human relationships.
So what about Ava, then? Part of what makes Ava both fascinating and unnerving, at least on a conceptual level, is that she is a product of consumer data from BlueBook’s users. Do you think that’s where our consumer technology is headed?
Not entirely where Ava is concerned, because I don’t necessarily share the anxieties about AI that I know exist. I can see AI as potentially dangerous. Nuclear power is potentially dangerous, as well. But that doesn’t have to stop us from using nuclear power, and it wouldn’t have to stop us from creating strong AI.
What I’m trying to do is [question] a bunch of prejudices, and saying ‘Is this reasonable?’ Is it reasonable to feel suspicious of this man? Is it reasonable to feel suspicious of this robot? Is it reasonable to feel instinctively allied to this man?
For instance, one of the key questions of the film is "Is Ava like a very sophisticated chess computer that’s acting as if it wants to win a game of chess and acting as if it has a consciousness?" I’ve got an answer to that question, but I’m fully aware that viewers might read it a different way.
What about gender? Ava is created to be (or approach being at any rate) a thinking, feeling woman, and the action proceeds with us putting her in this Othered position. Is that you commenting on how the tech world views women in general?
I wouldn’t limit it to the tech world. I would say there is a gender narrative there. Specifically, the thing that I was interested in was just raising questions. In a way, they’re more fundamental than the way men represent or objectify women, because I think there are elements of that in there that are probably quite obvious. But part of it is, Does she have a gender? Is she a woman? Is it or is it not reasonable to say "She"? You could, for example, take Ava’s mind and move it to another body that Nathan had created that had the outward appearance of being masculine. Ava’s body is externally female, but internally, it’s just her mind. So if you changed it to a masculine body, would Ava then be a he? Some people would say no. Some people would say it’s genderless. And some people would say there actually is a gender. And that is exactly the question I’m seeking to ask, because if you’re saying that the body is irrelevant and the mind is all that matters, then are you saying that men and women, for example, have different minds? That a female consciousness is different from a male consciousness? In other words, if the body is interchangeable, then what is it that denotes gender? And, conversely, if male and female minds are exactly the same in fundamental ways, which seems like a perfectly reasonable thing to say to me, then in that case maybe Ava is female because she has the external characteristics of a female and we don’t denote gender based on consciousness.
Does the act of making Ava female, at least outwardly, create an expectation on Nathan’s part and on Caleb’s part that she would behave as female? Whereas we can only guess at how she sees herself.
She may or may not see herself in that way. What we know is that the young man sees her in that way. And one of the things that Nathan does in his setup here is he presents himself to this young guy as a kind of Bluebeard type figure, from whom this young woman needs to be rescued. That then allows [Caleb] to cast himself in the role of the rescuer, the proper hero of this little narrative. Now, whether Nathan is that Bluebeard figure or just presents himself as that is one of the questions that then is posed, but also: is Caleb reasonable in casting himself as the savior / knight figure? In doing that, does he make himself the "hero" of the story, without stopping to think what’s actually going on inside this machine’s head?
"My allegiances are with the machine, not with the humans in the story."
Would you say there’s also a Frankenstein element to her creation?
There’s always a Frankenstein element to these creation myths. It’s the text, I suppose, which all of these stories go back to. Also, it’s always the case that in looking at the creation, you inevitably end up looking at the creator. And I suppose it is like Frankenstein inasmuch as [the story itself] has sympathy with the monster. To me, definitely, my allegiances are with Ava. They’re with the machine, not with the humans in the story.