James Bridle on why technology is creating a new dark age

Graphics by Michele Doying / The Verge

In 2005, Stanford University researcher John Ioannidis published an essay with the explosive title “Why Most Published Research Findings Are False.” Ioannidis alleged that many researchers were not running meaningful experiments; they were simply sifting through huge amounts of data to find any publishable results, using a technique known as p-hacking or data dredging. Technology made gathering information easier than ever, but the result here was not a deeper understanding of our world — it was greater confusion about it.
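
To make the mechanics concrete, here is a minimal simulation of data dredging. It's an illustrative sketch only, not drawn from Ioannidis's paper or Bridle's book, and the sample size and variable count are arbitrary assumptions. Testing enough unrelated variables against pure noise guarantees that some will cross the conventional p < 0.05 significance threshold by chance:

```python
# Illustrative sketch of p-hacking / data dredging (assumed parameters).
# No variable here has any real relationship to the outcome, yet testing
# enough of them yields "significant" results by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 100     # hypothetical study participants
n_variables = 200    # dredge through 200 unrelated measurements
alpha = 0.05         # conventional significance threshold

outcome = rng.normal(size=n_subjects)  # pure noise: no true effect exists

false_positives = 0
for _ in range(n_variables):
    predictor = rng.normal(size=n_subjects)     # also pure noise
    _, p_value = stats.pearsonr(predictor, outcome)
    if p_value < alpha:                         # a "publishable" finding
        false_positives += 1

# With alpha = 0.05, roughly 5% of the 200 tests (about 10) will look
# significant even though every true effect is zero.
print(f"{false_positives} spurious 'findings' out of {n_variables} tests")
```

Run enough unrelated tests and report only the hits, and the literature fills with results that look meaningful but reflect nothing; this is the confusion Ioannidis described.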

The p-hacking problem is one of many high-tech parables in James Bridle’s book New Dark Age, which will be released in the US tomorrow. Bridle is already well-known for his creative critiques of modern technology, including the 2012 drone-tracking project Dronestagram, a salt circle that traps self-driving cars, and last year’s influential essay about creepy YouTube kids’ videos. New Dark Age integrates these critiques into a larger argument about the dangers of trusting computers to explain (and, increasingly, run) the world. As Bridle writes, “We know more and more about the world, while being less and less able to do anything about it.”

But however grim a new dark age sounds, Bridle explains in an interview with The Verge that his vision isn’t a purely negative one, and his book is a call to study not what computers are telling us, but how and why they’re doing it.

This interview has been condensed and edited for clarity.

In your book, you talk about the notion that we know more about the world than ever, yet we have less and less agency to change it, and that we need to develop a kind of literacy around these computing systems. But it seems like we could develop that literacy and still not gain any real power over the systems.

Oh, absolutely. That's certainly possible. I don't think there's anything that will guarantee you some kind of magical power over things. In fact, the hope that you can do so is itself kind of dangerous. But it's one of the routes that I explore toward the possibility of gaining some kind of agency within these systems.

One of the ways that I approach these problems is through one particular form of systemic literacy that I've developed through my work and my studies, but I also think it's generalizable. I think anyone can get there from a background in any number of disciplines. And understanding that this literacy is transferable, and that we all have the capability to apply it to think clearly about subjects that seem difficult and complex, is one of the main thrusts of the book.

You’ve given examples in the past of ways that people could resist “inevitable” technological progress, like taxi drivers making salt traps for self-driving cars. What else could they do?

I did a whole bunch of projects around self-driving cars, which also included building my own — poorly, but in a way that helped me learn how it's done — so that I gained an understanding of those systems, and possibly as a result would be able to produce a different kind of self-driving car, essentially. It's the same for anyone who tries to work on these systems, build them themselves, and understand them: they have the possibility of shaping them in a totally different way.

The autonomous trap is another approach to some of the more threatening aspects of the self-driving car. It’s quite a sort of aggressive action to literally stop it. And I think working with and attempting to stop and resist are both super useful approaches, but they both depend on having some level of understanding of these systems.

These seem like individual solutions to some extent. How do you deal with situations like climate change, where you need really large-scale systemic change?

There are a couple of things I talk about regarding climate in the book, and one of them is to be really, really direct about the actual threat of it, which is horrific, so horrific that it's difficult for us to think about. Simply the act of articulating that, making it really clear, exploring some of the implications of it: that kind of realism is a super necessary act.

We’re still fighting this rear-guard action of, “Oh, it’s manageable,” “Oh, we can mitigate it,” or “It’s not really real.” We’re still, despite everything we know, everything people say, stuck in this ridiculous bind where we seem incapable of taking any kind of action. And, for me, that’s part and parcel of this continuous argument we have over numbers and facts and figures and the data and information that we’re gathering, as though this is some kind of argument that has to be won before we do anything. That excludes the possibility of doing anything concrete and powerful and present.

How does it feel to be a critic of these technologies for years and suddenly see people start agreeing with you?

I think there are a lot of people right now who find themselves in the position of saying, "Well yes, this is exactly what we meant," you know? I remember having conversations years ago with someone asking, "What's the worst that can happen with someone having all this data centralized?" And my answer was, "Well, the worst thing that can happen is that fascists take over and have control of that data." A few years ago, that felt like the worst possible thing, completely unimaginable. And here we are today, when fascism is alive and well in Europe and growing in certain ways in the US as well. So it's suddenly not so remote.

“The really important thing ... is to constantly frame this as a struggle.”

But at the same time, people who have been thinking about this for a while have also been building things that are capable of mitigating that. So while I argue against everything being magically fixed, putting this all out in the open in certain ways does start to make some kind of difference. The really important thing, I think, is to constantly frame this as a struggle. Which, again, we kind of don’t often do, particularly in the context of technology — where we see this stuff as a kind of ongoing, always upward unstoppable march.

Technology always walks this kind of weird knife edge. It becomes hard for us to understand and change — everything disappears behind glass, inside little black boxes. But at the same time, if you do manage to crack them open just a little bit, if you get some kind of understanding, everything suddenly becomes really quite starkly clear in ways that it wasn’t before. I’m kind of insisting on that moment being the moment of possibility — not some kind of weird imaginary future point where it all becomes clear, but just these moments of doubt and uncertainty and retelling of different stories.

Speaking of stories, you reference authors like H.P. Lovecraft and Iain M. Banks in New Dark Age. How is fiction shaping the way we deal with this future?

A lot of the way that we think of technology, and the internet in particular, has been really shaped by the ideas of it that came along before the thing itself arrived, right? Just as our ideas of space exploration are completely shaped by fantasies of space exploration from long before we actually got to space. The really interesting science fiction to me now happens in the next week, or the next year at most, because it's so obvious to us how little we can predict about long-term futures. That, for me, makes science fiction more a reflection of reality than reality is a reflection of science fiction.

I’m unsure about the value of stories to pull us in a particular direction. Most science fiction writers insist that all their fiction is really about the present, so they’re really just different ways of imagining that.

Jeff VanderMeer has also said that futuristic dystopias are a way of shifting real problems “over there” out of reality.

Yeah, exactly. There’s a whole genre of design fiction as well that posits these political things as design objects as a way to kind of pull those futures into being. But I always think there’s something very risky about that, because it also positions them as somewhere else, right? Not as tools that we have access to in the present. And VanderMeer’s fiction is pretty interesting, because while it’s obviously somewhat future-oriented, it’s also deeply about the weird and strange and difficult to understand.

I think that's better than what I said before, really. That is the most interesting current within science fiction right now: not imaginings of weird futures, utopian or dystopian, but ones that really home in on how little we understand about the world around us right now.

How do we critique the idea of inevitable, upward progress without overly romanticizing the past? In the US, criticism of automation gets tied up with calls to protect jobs that fit a stereotypical 20th century white, male vision of work.

There’s always that danger of romanticization, it’s true. It’s still being played out. That also comes about because of our really narrow view of history — that we have these quite small and very essentially made-up histories of things that we’re so acculturated to. So one of the things I try to do in the book is pull out these alternative histories of technology, and that’s another current that’s quite strong at the moment.

“We have these quite small and very essentially made-up histories.”

I just read Claire Evans' book Broad Band, about the number of women involved in the creation of the internet as we know it today. Many of the characters in her book, real people, aren't just engineers and programmers. They're also community moderators and communicators, people who shaped the internet just as much as those who wrote the lines of code.

And as soon as you dig up that history, you can't help but understand the internet as something very different in the present. And therefore you can understand the future as something else as well. So if we talk about automation, then part of the work we can do is not just to hark back to some kind of golden age, but to trouble that legacy as well: to talk about who worked then, and under what conditions, you know?

There's always been technological resistance. Take the Luddites, who are pretty well-known now. But the fact is that the Luddites weren't mere smashers of technology; they were a social movement, performing a very violent and direct form of critique of the destruction of their livelihoods, of what those machines were doing. And so now we have many, many other tools of critique for that. But by retelling these stories, by understanding them in different ways, it's possible to rethink what might be possible in the present.