First Click: Deep learning is creating computer systems we don't fully understand

July 12th, 2016

How does a computer make a decision? With normal software this is pretty easy to answer. Choices are outlined by a programmer; paths of options are created that are either followed or not, based on definable input. But, with the latest breed of artificially intelligent machines trained on deep learning methods, it's trickier to work out. Creating these programs often involves selecting the algorithms that give the right results, but not checking how these results are reached. If the program produces the right answer 99 percent of the time, who cares? But when that 1 percent of wrong answers matter the most — when it's a life or death situation — finding out how the computer thinks becomes very, very important.
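The "paths of options" in conventional software can be made concrete with a toy sketch. Every branch below is authored by a programmer, so the decision path is fully inspectable; the function name and the two-second threshold are invented for illustration, not drawn from any real autopilot code:

```python
def should_brake(distance_m, speed_mps):
    """Rule-based decision logic: each branch is written out by hand,
    so you can read off exactly why a given answer was produced.
    (A toy illustration only.)
    """
    stopping_distance = speed_mps * 2.0  # simplistic two-second rule
    if distance_m < stopping_distance:
        return "brake"       # obstacle inside stopping distance
    return "maintain_speed"  # the path taken, and why, is traceable

print(should_brake(10, 30))   # obstacle at 10 m, 30 m/s -> "brake"
print(should_brake(100, 10))  # obstacle at 100 m, 10 m/s -> "maintain_speed"
```

A deep-learning system offers no such readable branch structure, which is exactly the opacity the article goes on to describe.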

Take the recent (and sadly fatal) Tesla Model S crash, which happened while the vehicle's autopilot system was activated. Why exactly did the car's computer fail to recognize the truck in front of it? The reasons are vaguely known (it was a sunny day and the truck was painted white, making it difficult to pick out), but we don't know the exact decision-making process, or how the situation will be avoided in the future. "Right now, I'm sure the people at Tesla are asking the exact same question: why did it happen? Why did it not recognize that vehicle?" says professor Dhruv Batra, a specialist in machine perception at Virginia Tech. "And if you're relying on a black box there is no answer."

LIKE GOOD STUDENTS, COMPUTERS NEED TO BE ABLE TO SHOW THEIR WORKING

I spoke to Batra about this, and he compared the need to create transparent decision-making in AI to the importance of proper educational methods. A bad teacher asks students to memorize information and repeat it on demand; a good one checks their methodology, and asks them to explain how they reached a certain answer. Essentially, says Batra, we need computers that can show their working.

This problem was illustrated in a recent study led by two of Batra's students, Abhishek Das and Harsh Agrawal. The pair asked two humans and two neural networks specializing in object recognition questions about certain images, then tracked where each looked in those pictures to compare their decision-making processes. So, when a human and a computer are asked "What color are the man's shoes in this picture?" you might expect both to look at the bottom of the image, where you're most likely to see shoes. But this, Das and Agrawal found, was not always the case.

Two examples of attention heat maps from the study. The human answers are in the second column, the neural networks' answers are in the third and fourth.

To compare where the humans and machines looked, the researchers created "attention" heat maps that could be laid over one another. On a scale of 0 to 1, where 0 is no overlap at all and 1 is complete overlap, the researchers found that the attention maps from the humans lined up at a rate of 0.63. But when comparing humans to machines, this figure was just 0.26.
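One simple way to produce an overlap score on that 0-to-1 scale is histogram intersection: normalize each heat map so its values sum to 1, then sum the element-wise minimum of the two. This is a hedged stand-in for the study's own metric, which the article doesn't specify; the function below is illustrative, not the researchers' code:

```python
import numpy as np

def attention_overlap(map_a, map_b):
    """Histogram-intersection overlap between two attention heat maps.

    Each map is normalized to sum to 1, then the element-wise minimum
    is summed: 0 means the maps attend to completely disjoint regions,
    1 means they attend identically. (A simplified illustration of an
    overlap metric, not the study's actual measure.)
    """
    a = np.asarray(map_a, dtype=float)
    b = np.asarray(map_b, dtype=float)
    a = a / a.sum()
    b = b / b.sum()
    return float(np.minimum(a, b).sum())

# Two toy 2x2 "attention maps": one focused top-left, one bottom-right.
human = [[1.0, 0.0], [0.0, 0.0]]
machine = [[0.0, 0.0], [0.0, 1.0]]
print(attention_overlap(human, human))    # identical maps -> 1.0
print(attention_overlap(human, machine))  # disjoint maps  -> 0.0
```

Under a metric like this, the study's figures say human attention maps agreed with each other at 0.63, while human and machine maps agreed at only 0.26.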

Explaining this difference is tricky. In one question in the study, for example, the humans and neural networks were shown a picture of a bedroom and asked: "What is covering the windows?" (The answer: "Blinds.") The humans looked straight to the windows to answer this question, but the machines, for some reason, looked at the beds instead.

"They're picking [answers] based on biases in the data sets, rather than from facts about the world."

Batra speculates that this particular anomaly was caused by how the neural networks were trained. With questions involving object-recognition in bedrooms, for example, the bed might be the most important feature, so when the algorithms are confronted with a bedroom, they scan the bed first regardless of the question. "They're picking [answers] based on biases in the data sets, rather than from facts about the world," says Batra.

This sort of discrepancy might not matter for a task like sorting your photo collection based on location, but as machine learning algorithms take on greater responsibilities in the real world, it's important for researchers to know how they reach their decisions.

When Microsoft CEO Satya Nadella outlined six key aims for the development of responsible artificial intelligence, transparency was number two. "We want not just intelligent machines but intelligible machines," wrote Nadella last month. "People should have an understanding of how the technology sees and analyzes the world." The European Union agrees, and recently introduced legislation that will eventually grant citizens the "right to an explanation" — a license to demand that tech companies explain how their automated systems reach certain decisions. (The severity of the obligation placed on companies is debatable though.)

TECH LEADERS AND EUROPEAN GOVERNMENTS AGREE: WE NEED MORE TRANSPARENCY

Batra told me that these steps are necessary. "If we’re going to ship these things into the world and interact with humans, they’re going to have to communicate and be trusted," he says. "At some point, someone will ask ‘why did you say this?' and you need to produce a reason that humans will agree with." He says that tests like his own attention heat maps are an example of safety checks that could be added to deep learning systems. Computers would still be trained on large datasets, but their method could be compared to human results to look for any obvious (and possibly dangerous) differences.

Doing so would make training deep learning systems and neural networks more expensive and time-intensive, but Batra believes this cost would also deliver better results. "It would be a benefit of trust," he says, "of increased understanding. And you would hope that would lead to improved performance." When it comes to making human decisions, thinking like a machine isn't always best.

Five stories to start your day



  1. EU-US Privacy Shield agreement goes into effect

    The European Commission has formally adopted a new agreement governing the transfer of data between Europe and the United States, more than eight months after the longstanding "Safe Harbor"...

  2. Facebook faces $1 billion lawsuit for providing 'material support' to Hamas

    The families of five Americans who were killed or hurt by Palestinian attacks carried out in Israel have filed a $1 billion lawsuit against Facebook, alleging that the social network "knowingly...

  3. Pokémon Go developers promise to tweak Google account permissions after security concerns

    Niantic Labs, the developer behind the currently planet-dominating Pokémon Go, has responded to concerns over a potential security flaw in the app. Signing into the iOS version of Pokémon Go with a...

  4. PewDiePie and other YouTubers took money from Warner Bros. for positive game reviews

    The Federal Trade Commission has reached a settlement with Warner Bros. over claims that the publisher failed to disclose that it had paid prominent YouTubers for positive coverage of one of its...

  5. PC shipments return to growth in the US

    The PC industry has been in decline for more than two years, but last year there were early signs it was starting to stabilize. While we're still waiting for worldwide shipments to go positive,...

Responsible computer system of the day