Today’s headlines are filled with technological breakthroughs that promise an optimized future, from artificial intelligence that diagnoses disease to self-driving cars that will revolutionize transportation. One day, everything will be easier, faster, and better, we’re told.
It’s an appealing vision, but there’s a downside to all this efficiency, says scholar and writer Edward Tenner, author of The Efficiency Paradox: What Big Data Can’t Do (out next week from Knopf). “Trying to be ultimately efficient at all times will succeed in the short run,” he says. “But in the long run, you would be damaging your efficiency.” Tenner isn’t a Luddite, and his book doesn’t suggest renouncing efficiency and Big Data. He just advises us to use common sense and keep in mind that there are always trade-offs.
The Verge spoke with Tenner about how the efficiency paradox happens, its costs, and how to balance intuition and technology.
This interview has been lightly edited for clarity.
What is the efficiency paradox? And how did you become interested in the topic?
I saw that there was something really new and very exciting going on in the web. The rise of mobile computing and the increasing interest in artificial intelligence and big data were actually having as great an impact as, and in some ways an even greater one than, the initial web of the 1990s. This story kept growing on me, and gradually, I saw that there was a kind of unintended consequence to this: trying to be ultimately efficient at all times will succeed in the short run. But in the long run, you would be damaging your efficiency.
You define efficiency in the book’s preface as producing goods or providing services with a minimum of waste. Then you talk about “continuous-process efficiency” versus “platform efficiency.” What’s the difference between these two?
People in Elizabethan times and even in the Middle Ages didn’t have the concept of efficiency we do today. That really depended on the rise of thermodynamics in the 19th century and the need to get as much power as possible from water turbines and from steam engines. That efficiency of the 19th century is what I call “continuous-process efficiency,” and it’s when things that were once made piece by piece could now be made in a stream. For example, when paper was made in the 18th century, it was always in sheets. In the 19th century, entrepreneurs found a way to have paper coming off a mill continuously, and that is what made possible mass literacy, newspapers, and inexpensive books. It was in its way as important as the Gutenberg revolution of the 15th century had been.
Now, platform efficiency is really a whole other type. It’s something that’s really in the cloud, and it’s about bringing buyers and sellers together with a minimum cost and extremely rapidly. So it’s things like getting a ride or buying a ticket or paying rent or banking.
Platform efficiency is wonderful, and I’m not at all condemning it, but one of its unfortunate consequences is that it has tended to attract investment capital away from much harder things. It’s much easier to make a small fortune with a platform-based startup than it is, for example, to develop a more efficient battery. I came to believe that’s because these physical and chemical enterprises take so much longer, are so much more expensive, and are so much messier, making them less attractive to investors. That’s one negative side of platform efficiency.
Was there a time in American culture when we didn’t care as much about efficiency? To be clear, I’m talking about the general culture not caring, not specific subcultures or movements like the Luddites.
One of the interesting things about American culture is that even the subcultures that pretended to disdain efficiency — like Southern planters — ran on the principle of trying to squeeze as much profit as possible from enslaved labor and from the soil. So, there was this industrial regimentation in the South as well as in the factories of the North.
America, I think, has always been a pioneer of efficiency. Americans were admired by Europeans for their rigorous efficiency in doing everything, and the criticism of Americans was that they were so concerned with making money and with efficiency that they were losing out on the finer things in life. On the other hand, European observers were always coming over here and trying to copy American methods!
The huge Soviet-era industrial complexes were based on the Gary, Indiana, steel mills, and Lenin and the other Soviet leaders greatly admired Henry Ford.
Let’s talk about some of the examples of the downsides of efficiency. In one of the chapters, you talk about the effects on arts and culture.
By removing so much trial and error and so many productive mistakes, platform efficiency can lock us into existing patterns. For example, publishers or film producers can analyze data to see what genres have been most popular and what will attract viewers of a certain demographic, and this could indeed make publishing more predictable or production more profitable.
But so many of the big hits have been real surprises that have broken so many of the rules. AI is really great at finding hidden rules and applying them and optimizing everything according to hidden rules, but it’s really the rule-breaking events that have made life exciting for us.
I’m also interested in a study you mention about how popularity works and the cost of getting rid of gatekeepers of popular culture.
People have presented gatekeepers as a drag. They’re one level between the consumers and the producers. So, if you don’t have them, you are reducing transaction costs and making things more efficient. You can just find things yourself. In the mid-‘90s, Bill Gates and his co-authors wrote The Road Ahead about the friction-free economy of the future, where there wouldn’t be these middlemen.
But these gatekeepers did have a useful role. They could recognize talent that was not quite ready to go mainstream, but had something interesting and exciting there that was worthwhile to develop. If you eliminate the gatekeepers, it’s a little like sports without coaches.
For example, there was a study from Princeton that showed that when you statistically study what people — ordinary consumers, not an elite panel of critics — think of the quality of various works offered on the web, the ones that become very popular have only a small advantage in quality. It’s not really random, but it’s small. When you look at patterns of popularity on the web, there’s a small core interest that snowballs quickly. Without gatekeepers, so much of popularity depends on what happens to become popular first.
In your chapter on education, you talk about the “value of the inefficient medium,” like paper, for example. What are some examples where inefficiency makes us learn better?
I’ve read studies of reading and comprehension that psychologists have done over the years. Electronic reading and paper reading each have their own advantages. The electronic medium is better for recognizing details, but reading on paper gives you a better, holistic sense of what an author is trying to say. That’s a trade-off.
This is similar to what I say in my chapter on geography. The paper map is awkward in a lot of circumstances and inferior to the electronic map, which I use all the time. But on the other hand, the paper map gives you a sense of the broader terrain, and it’s very helpful in orienting yourself.
Medicine is an area with a lot of hope for AI and big data: precision medicine, AI diagnosis. What are some of the drawbacks here?
In medicine, there are warning signs, and these warnings, in turn, have to be addressed or ruled out by further tests. As more diagnostics develop, there’s a high possibility of false positives that make people go through more tests — and some of the further tests may actually have side effects of their own.
Recently, in The New York Times, there was a review of the new book by Barbara Ehrenreich, who is swearing off the medical system altogether. On the other hand, there are people who pay large amounts of money for so-called concierge medicine with doctors that are always monitoring them. There are different styles, and I’m not belittling the project of life extension, but I think quite a few critics of medicine have pointed out the advantage of a holistic approach to people’s health and the kind of understanding that the best old-fashioned doctors had.
You don’t want to rely completely on that because sometimes those wonderful old-fashioned doctors had old-fashioned ideas that have been contradicted by research data. So you need the big data, but there are many pitfalls in analyzing big data, and there’s some tension between academic statisticians and data analysts in the commercial sector about what constitutes good practice in using this data.
How should we think about these trade-offs? Who should be helping us determine which trade-offs are important enough for us to make?
It’s really for every individual to use electronic and analog materials in a way that suits their own lives. This is not a book on policy. It’s a book that’s telling people, “Don’t be afraid of your common sense.” I think everybody can recognize what works for them, and people will have very different styles.
The last chapter of your book talks about strategies for balancing algorithms and common sense. How did you come up with these strategies?
I tried to see which of the ideas applied across the chapters. For example, people are familiar with the idea of serendipity, so that didn’t need a lot of introduction. The point about serendipity is just that if you eliminate mistakes, then you’re going to be too dependent on immediate and recent experience and not open enough to productive surprises. The concept of “desirable difficulty,” on the other hand — where we can learn better if things are more difficult — is less familiar to people because it comes from studies, for example, of reading comprehension that show that something less legible might actually encourage people to concentrate more.
What else are we missing?
There are two factors that are underestimated by people and that are serious issues in the application of efficient technology. One of them is what’s called “local knowledge.” All of us know that there’s some route that might look really great on a map, but we know it’s a problem because we’ve traveled over it. For example, there’s an intersection that looks like the shortest way, but I know the traffic is tied up there and it’s quicker to take a longer way, which [traffic app] Waze doesn’t know. Every once in a while, Waze suggests a really crazy route, and if people don’t have common sense, sooner or later they will be very disappointed. Since I’ve come to recognize Waze is not infallible, I use it, and if I see there’s something that’s not right, I try to pull over, take a look at a printed map, and figure out what’s going wrong.
The other is tacit knowledge. The idea is that no matter how much information you feed into an intelligent system, there are many, many things that are tacit, meaning that they are not explicitly stated anywhere. You can’t find that information in an encyclopedia.
One example is how little children can understand the meaning of a proverb — like “a stitch in time saves nine” or “a rolling stone gathers no moss” — in a way that a computer can’t. There are many things that even little children can appreciate that the most advanced technologies of machine learning can’t, and I think that to me is one of the most exciting things about the mind and about being human.