How computers misunderstand the world

Meredith Broussard’s new book explores the fallacy of thinking the tech solution is the best one


Image: MIT Press

“There are lots of things on the internet that are popular but not good,” says Meredith Broussard, “like racism or ramen burgers.” Yet we are increasingly being offered the popular and not the good, and that’s just one negative effect of believing that the technological solution is always the better one.

Broussard has been a data journalist and software developer, and she is now the author of Artificial Unintelligence: How Computers Misunderstand the World, out now from MIT Press. Broussard has been programming since she was 11, but she has come to realize that the long-held utopian promises of technology haven’t been delivered. Now, she says, it’s time to be more critical, think about the limits of computers, and stop excluding people from the equation.

The Verge spoke to Broussard about technological fallacies, why paper can be better, and why we don’t need to reinvent the wheel.

This interview has been lightly edited for clarity.

One of the main arguments of your book is that we suffer from “technochauvinism,” or the belief that the technological solution is always the right one. How is technochauvinism different from other terms we have, like technological utopianism?

It’s very much related. In the book, I write about how our mainstream beliefs about the role of technology in society are very much influenced by the utopian visions of a very small and homogeneous group of people, like [the late MIT cognitive scientist] Marvin Minsky. Still, not every decision that we make around technology is specifically utopian. I don’t think that C-suite executives are really thinking about how they want to create a better and separate utopian world when they make decisions about using technology.

Meredith Broussard.
Photo by Lucy Baber

You’ve been a data journalist and a programmer. Do you remember a point in your career when you realized that we were falling prey to this fallacy?

I’ve been programming since I was 11, so I’ve been hearing this rhetoric for a very, very long time, and just none of the grand visions of the future have come to pass. We don’t have flying cars, and I don’t think we want flying cars because it’s a stupid idea.

The gap between what we imagine and what computers can actually do is really vast. Technology is terrific, and I’m very enthusiastic about forward progress, but it’s time to be more critical and realistic about what computers can and can’t do. Just because you can imagine something doesn’t mean that it’s true, and just because you can imagine a future doesn’t mean that it will come to be. Often, we talk about computers as being able to do anything, but that’s just rhetoric. Ultimately, they’re machines: they compute, they calculate, and so anything you can turn into math, a computer can do. But there are lots of things they can’t do, and they don’t always make things easier. Things like predictive policing are biased. But even take the example of registering for a summer camp. People think, “Oh, it’s definitely better if I do all this on the computer,” but it would be better if you had a computer sign-up and a paper sign-up.

Why is that? What are the advantages of the paper sign-up?

It’s more inclusive that way. There are plenty of people who still don’t have access to the kind of computing you’d need. Also, most computing systems are really, really poorly designed. People don’t put enough money and energy into user interface engineering, and so most paper forms are better designed. People imagine that there’s this vast capacity available digitally, so they can be sloppy, and that wastes everyone’s time.

I’m filling out a summer camp form right now, and you have to get a copy of your kids’ immunization records, scan and upload a copy, and then also go through and type in every single dose of the vaccines and the date on which it was given. It’s ridiculous. It’s going to take me three to four times as long as it would if I just got a copy of the immunization record and put it in an envelope with a stamp.

This reminds me of the chapter in your book where you talk about how Philadelphia’s school system was supposed to have a centralized digital system to track textbooks, but it fell apart because no one was doing the data collection.

Yes, that’s an illustration of what always happens in very, very complex systems. If you put in technology, and then you don’t have enough people to support the technology and fix it when it breaks, then the whole system is just going to break down, and people are just going to revert to what’s easier. You have to have somebody who’s paid to take care of the details in a technical system, and you have to have somebody who is just staying on top of the details and fixing things all the time. And often that’s far more expensive than anybody realizes.

At one point, you talk about the “unreasonable effectiveness of data,” which is a play on an article about the “unreasonable effectiveness of math.” But what does this idea mean when applied to data and technology?

It means that if I have a big enough dataset, I don’t actually need to be an expert on the topic that the data covers. I can draw conclusions simply from the vastness of the data, because people are really very similar when it comes right down to it. So if I have millions of examples of people asking, “Alexa, what’s the weather today?” in different variations, I don’t actually need to know that it’s a query about meteorological conditions. I can just take the phonetic sequence that sounds like “weather” and attach that to a program that calls up the temperature for the user’s geolocation. It’s a mathematical analysis of the data as opposed to a content analysis of the data.
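As a rough illustration of what Broussard is describing, here is a minimal sketch of routing queries on surface patterns alone, with no model of what “weather” means. The patterns, function names, and responses are all hypothetical, not Alexa’s actual pipeline:

```python
import re

# Surface patterns harvested from many example queries; the program never
# "knows" these are about meteorology, only that the strings co-occur with
# weather requests. All patterns here are made up for illustration.
WEATHER_PATTERNS = [
    r"\bweather\b",
    r"\bhow (hot|cold) is it\b",
    r"\bdo i need (an umbrella|a coat)\b",
]

def matches_weather_intent(utterance: str) -> bool:
    """True if the text matches any known weather pattern."""
    text = utterance.lower()
    return any(re.search(p, text) for p in WEATHER_PATTERNS)

def handle(utterance: str) -> str:
    if matches_weather_intent(utterance):
        # A real assistant would call a weather API with the device's
        # location; a placeholder response stands in for that here.
        return "Fetching the forecast for your location..."
    return "Sorry, I don't understand."

print(handle("Alexa, what's the weather today?"))  # pattern match, answered
print(handle("Will it rain on my parade?"))        # no pattern, no answer
```

The second query fails not because it is harder to understand but because no one has typed it often enough to show up in the data.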

Another example is autocomplete. If you type in “GA” and you’re in Georgia, you might be looking for Georgia peaches or Georgia football, and autocomplete can call up similar searches. But it wouldn’t do that if you were in New Jersey. You don’t need to have content knowledge when you have big data. This prioritizes the popular and not the good, and there are lots of things on the internet that are popular but not good — like racism or ramen burgers. When you don’t have any kind of quality filter, you’re going to simply give people better access to extremist content.
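Autocomplete can work the same way: rank past queries by popularity within a region, with no quality filter at all. Here is a toy sketch with made-up query logs; nothing about it reflects any search engine’s actual system:

```python
from collections import Counter

# Hypothetical query logs keyed by region. The ranking needs no knowledge
# of peaches or football, only of which strings were typed most often.
QUERY_LOG = {
    "Georgia": [
        "georgia peaches", "georgia football", "georgia peaches",
        "georgia bulldogs", "georgia football",
    ],
    "New Jersey": ["gas prices nj", "garden state parkway", "gas prices nj"],
}

def autocomplete(prefix, region, k=3):
    """Suggest the k most popular past queries in this region that
    start with the typed prefix -- popular, not necessarily good."""
    counts = Counter(
        q for q in QUERY_LOG.get(region, []) if q.startswith(prefix.lower())
    )
    return [query for query, _ in counts.most_common(k)]

print(autocomplete("ga", "Georgia"))     # peaches and football rank first
print(autocomplete("ga", "New Jersey"))  # same prefix, different region
```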

Are people starting to pay more attention to technochauvinism?

Yes. Facebook’s Cambridge Analytica scandal is one example. Twitter’s ongoing abuse problem is another.

It’s technochauvinism that a small group of people in California, like the people who run Twitter, believe it’s possible to have a computer administer society, and that it is better to use algorithms than to use people. You could have community managers in any system, and the community managers could use technological tools to help improve the conversations, but that’s expensive and time-consuming, and that’s not the kind of project that tech people really want to take on. But we can look at something like the Coral Project, which is using AI tools and also human moderation and common sense to try and make comment sections better.

You’re a fan of collaborating with computers. What would that look like?

I’m most optimistic about human-in-the-loop systems. Human-in-the-loop systems and autonomous systems are the two major models that computer scientists think about when they build systems. An autonomous system is one that operates totally on its own without any human intervention, and a human-in-the-loop system is one that has a human as an integral part of the system. So I don’t think that “autonomous cars” are a great idea, but I think human-in-the-loop systems around cars are. If we can use technology to make humans better drivers instead of replacing humans as drivers, I think that’s a win. I think if we start to design systems to accommodate humans, as opposed to designing systems that exclude humans, that’s a better path forward.
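The distinction can be stated in a few lines of code. The sketch below is a deliberately simplified illustration of the two models Broussard contrasts; the thresholds and function names are invented, not drawn from any real driver-assistance system:

```python
# Fully autonomous: the machine senses, decides, and acts alone.
def autonomous_brake(distance_m):
    return "BRAKE" if distance_m < 20.0 else "CRUISE"

# Human-in-the-loop: the system warns and assists, but the driver
# remains an integral part of every decision.
def assisted_brake(distance_m, driver_is_braking):
    if driver_is_braking:
        return "MONITOR"          # the human is already handling it
    if distance_m < 8.0:
        return "EMERGENCY BRAKE"  # last-resort assist when the human can't react
    if distance_m < 20.0:
        return "ALERT DRIVER"     # nudge the human before acting for them
    return "MONITOR"

print(autonomous_brake(15.0))       # BRAKE: the car decides by itself
print(assisted_brake(15.0, False))  # ALERT DRIVER: the human decides first
```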

So what is the opposite of technochauvinism?

Realism. I would say that the way we combat technochauvinism is by being honest with ourselves about how many of our promises about the bright future of technology are empty promises. And we don’t need to make empty promises to each other. We could use technology for very practical things, like going into every single one of our schools and making sure that kids have enough books and paper and pencils, and that there are plugs in every classroom for the various devices that students need. I don’t think that we need to try and reinvent the wheel. We can just use technology to make our world better with the systems that we already have.