
    Franklin Foer on how Silicon Valley is threatening our humanity



    ‘The tech companies are destroying the possibility of contemplation’


    Fears about the “existential threat of big tech” usually focus on autonomous weapons and how to control superintelligence before it has the power to control us. That’s not so for Franklin Foer, The Atlantic staff writer and former New Republic editor-in-chief. His new book World Without Mind is out this week, and it’s about a different type of existential threat.

    He thinks that the big tech companies — Google, Apple, Facebook, Amazon — are “destroying the possibility of contemplation” and making us turn away from the intellectual work that, he says, makes us human.

    The Verge spoke with Foer about fake news, why there’s too much attention on Silicon Valley libertarianism, and how food can serve as a model for a cultural revolution.

    This interview has been lightly edited and condensed for clarity.

    What got you started thinking about the “threat” of Silicon Valley? I assume the book was written before the election. How much did the election change its message?

    The ultimate genesis of this book was when Amazon had its big fight with the Hachette publishing group in 2014. Amazon was trying to renegotiate its contract with e-book vendors, and it was really, really aggressive in trying to set the terms. There was really no way of resisting, because their dominance over the book industry was so pervasive.

    And then, things blew up at The New Republic with [Facebook co-founder] Chris Hughes. The confluence of those two things sent me down a road where I started thinking hard about Silicon Valley’s influence in the spheres of media, publishing, and culture.

    I think I handed in my book on November 1st, 2016, so only a couple days later did I awaken and realize I’d written a book about fake news and the rise of Donald Trump. So I went back and I made amendments to take note of that.

    The subtitle of the book, “The Existential Threat of Big Tech,” caught my eye. I’ve done some reporting on “existential risk,” and it usually refers to global catastrophic risk, like nuclear war or pandemic, not the types of threats you’re describing. What made you choose this subtitle?

    Of course, what I’m describing is not quite as apocalyptic and explosive as nuclear war, but there’s a threat that challenges our very existence as human beings. What I worry about ultimately is that when we’re stripped of our privacy, when we’re stripped of free will, when we start to merge with machines in a more robust way, at some point, we’ll cease to be identifiably human. And therefore, I think our humanity is in some ways the thing that’s under existential threat.

    There are people who love the idea of our humanity being augmented. They think it’s a good thing. More intelligent, stronger, and so on.

    Author Franklin Foer.
    Photo by Evy Mages

    They’re living in a science fiction fantasy world. The problem is that we’re not just merging with machines, we’re merging with the companies that make these machines.

    It might be one thing if we were gaining intellectual powers that we had full control over, but we don’t. Right now, four or five big companies control the machines we’re using. It doesn’t mean their tools aren’t useful, but the danger is that the companies influence us in really subtle ways. If you think of data as kind of an x-ray of our soul, it’s this window into our minds that the company has possessed. It’s a very, very powerful x-ray for them to hold because the more that you understand about somebody, the easier it is to manipulate them.

    And you think we’re being manipulated into giving up our privacy? The book mentions that Silicon Valley libertarianism gets all the attention, but you say that the “collapse of the individual” is actually the guiding ethos. How did you come to that?

    To be clear, “Silicon Valley” is a fairly glib and imprecise term, so when I use it, I am referring to its elites, and to its thought leaders, not to the average engineer.

    I started just watching every YouTube video I could get of a town hall meeting featuring Larry Page, Mark Zuckerberg, and so on. I started listening to what they were saying and it wasn’t a lot of screeds against government or celebrations of the heroic individual. What I found was this love of all things social. The network is the most fetishized concept in the valley, and as I listened, I began to think the real danger was the collectivism. They were so obsessed with achieving some sort of new global consciousness, and I found them to be completely immune to all reasonable anxieties about the state of the individual.

    If supposed libertarianism is getting too much attention, which attitude do you think we’re not looking at enough?

    Monopoly. When you listen to most people in Silicon Valley talk about the network, they talk about it as a winner-take-all system. The idea of the network is that you make a bet on the right company, and it captures the network and all the other market players disappear. I think that’s a very common way of thinking.

    If you listen to the way that people like Larry Page talk about competition, they abhor the idea of it. They think of it as something that’s almost beneath them. So rather than competing against Apple or Uber, they would much rather focus on their moonshot ideas and doing something truly transformational, and this replicates language that we’ve heard throughout history.

    Monopolists always defend their monopolies by arguing that competition is wasteful. When the railroad barons completed their monopoly, they argued it would be wasteful to have competing rail lines; AT&T said the same thing. But today, the size and scope of these monopolies is different. These companies aspire to encompass the totality of human existence, and you see that in the current race to become our personal assistants. They never really want to leave our side over the course of the day.

    And these are intellectual technologies, which is a little bit different. These aren’t transportation technologies, these aren’t industrial technologies, these are technologies that provide us with a filter for the world. There’s no care for authorship or intellectual property.

    I don’t think that all of Silicon Valley is anti-intellectual. You’re always seeing these lists of “books that Bill Gates reads.”

    In the epic war over Silicon Valley’s intellectual property, Bill Gates was on the side of licensing copyright and robust protections for intellectual property. He wasn’t on the side of the hackers, and he didn’t want information to be free. The idea that information wants to be free is really at the core of the problem, because that too is kind of a utopian countercultural ideal that sounds awesome in the abstract and has a lot of problems that come with it.

    And this has to do with the intellectual forebears of Silicon Valley? You link today’s attitudes fairly closely to the communes of the 1960s.

    One of the great coincidences in history is that the counterculture and the technology industry grew up side by side in the San Francisco mid-peninsula and the two rubbed up against one another and rubbed off on one another. A lot of the early champions of technology in Silicon Valley were hoping to replicate the commune, and in the late ‘60s and early ‘70s, hordes of people retreated from the cities and from conventional lives to live in communes. The idea was that you’d go back to the land and you’d get some sort of new consciousness that would show how everything relates to everything else and that living in this collective sort of existence would make us all much better human beings.

    If you look at the history of the network and the history of Silicon Valley, it’s really a way to try to capture all the wonderful things that were promised about the counterculture. The only problem with that vision is that all these countercultural concepts like “network” were soon captured by big firms who saw the biggest business opportunity in human history. The vision was less about a new consciousness than it was all about making money.

    Reading the book, one sentence in particular grabbed me, which is “the tech companies are destroying the possibility of contemplation.” I think it seems to sum up the main argument. Did I understand correctly?

    That’s exactly right, and I worry that when we’re always being watched we cease to feel comfortable thinking subversive, original thoughts. There’s a whole ecosystem of journalists and book publishers who are getting crushed in this new economy and it’s their words that are necessary to be contemplative human beings. We’re being dinged, notified, and clickbaited, which interrupts any sort of possibility for contemplation. To me, the destruction of contemplation is the existential threat to our humanity.

    You said that after the election you realized that, in a way, you’d written a book about fake news. How so?

    Facebook permits an ecosystem where we get the news and information that confirms our biases. We become less skeptical and more susceptible to charlatans who are trying to deliver us information that we’ll agree with. There’s a terrible feedback loop that Eli Pariser called the filter bubble, and that’s the thing that makes fake news possible. But to go even further, back to our discussion of contemplation, if we allow ourselves to exist in this haze where we subconsciously go from click to click, we don’t pause or slow down and think deeply. Then we’re all going to be less on guard against propaganda and fake news.

    What’s your big-picture suggestion for avoiding this?

    We need to have a bit of a cultural revolution to reset things. The most hopeful thing I can look at is food. For generations, we were fed processed crap, and only belatedly did we start to care about what we put in our mouth, and even then it was a very “elite” phenomenon. But it was significant because that’s a pretty good instance of people deciding that efficiency isn’t the most important thing and that we need to try to protect the people who actually produce the food we consume. The quality of what we consume in some ways is directly correlated to the way we treat the producers.

    We need to treat culture as something that is incredibly important and incredibly worthy and incredibly virtuous. We should think of ourselves as better human beings if we’re consuming the “right” intellectual goods. That’s a very elitist sort of attitude, but I think we need to have that sort of elitism in order to set the terms for the entire intellectual economy, and it also improves the culture. Our culture is only good if we have standards about what’s worthy and what isn’t, and paying for things is a pretty good sign of something being worthy.

    At the same time, it seems like people are blaming elitism and saying that we need to know “real Americans,” otherwise we wouldn’t have gotten in this position with Trump. Elitism is not very popular right now.

    I think a failure of elitism is the problem. People hate elites because elites have not just been asleep on the job — they’ve been championing the market in a really blind sort of way. People are right to resent them. I mean, this idea that people should get what they want, as I’ve said, is a very dangerous notion, and elites have let so much go in this country. They do deserve a lot of blame. But the solution isn’t to pander to the everyday person and to fetishize them. The solution is to have better elites.