Paul Allen and the Machines: teaching the next generation of artificial intelligence

Microsoft co-founder Paul Allen has been pondering artificial intelligence since he was a kid. In the late '60s, eerily intelligent computers were everywhere, whether it was 2001's HAL or Star Trek's omnipresent Enterprise computer. As Allen recalls in his memoir, "machines that behaved like people, even people gone mad, were all the rage back then." He would tag along to his father's job at the library, overwhelmed by the information, and daydream about "the sci-fi theme of a dying or threatened civilization that saves itself by finding a trove of knowledge." What if you could collect all the world's information in a single computer mind, one capable of intelligent thought and able to communicate in simple human language?

Forty years later, with nearly 9 billion dollars to Allen's name, that idea is beginning to seem like more than just fantasy. Much of the technology is already here. We talk to our phones and aren't surprised when they talk back. A web search can answer nearly any question, undergirded by a semantic understanding of the structure of online information. But while the tools are powerful, the processes behind them are still fairly basic. Siri only understands a small subset of questions, and she can't reason, or do anything you might call thinking. Even Watson, IBM's Jeopardy champ, can only handle simple questions with unambiguous phrasing. Already, Google is looking to the Star Trek computer as a guiding light for its voice search — but it's still a long way off. If technology is going to get there, we'll need computers that are better at talking and, more crucially, better at reasoning.


Give a machine a textbook…

It's a hard problem, but it's one Allen is eager to solve. After years of pondering these ideas abstractly, he's throwing his fortune into a new venture targeted entirely at solving the problems of machine intelligence, dubbed the Allen Institute for Artificial Intelligence, or AI2 for short. It’s ambitious, like Allen's earlier projects on space flight and brain-mapping, but the initial goal is deceptively simple. Led by University of Washington professor Oren Etzioni, AI2 wants to build a computer that can pass a high school biology course. The team feeds in a textbook and gives the computer a test. So far, it's failing those tests… but it's getting a little better each time.

Challenges for AI


Causality

Humans use new information to constantly update their mental pictures of the present or the past. That's a much more sophisticated kind of info management than Siri or Wolfram Alpha attempt, but experts say it's within reach.


Uncertain or Vague Knowledge

Traditional Boolean logic categorizes claims as "true" or "false," but human knowledge often deals in incomplete truths or generalizations like "large cars often get poor gas mileage." Future AI systems will have to deal with shades of certainty, and supercomputers like Watson have already switched to similar frameworks.
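
As a toy illustration (not anything AI2 has built), the contrast can be sketched in a few lines of Python: a Boolean store has to call a claim flatly true or false, while a graded store keeps a degree of belief that other rules can work with.

```python
# Toy contrast between a Boolean fact store and a graded one.
# The claims and the numbers are invented for illustration.

boolean_facts = {
    ("large car", "poor gas mileage"): True,   # forced to be flatly true or false
}

graded_facts = {
    ("large car", "poor gas mileage"): 0.8,    # "often" becomes a degree of belief
    ("hybrid car", "poor gas mileage"): 0.1,
}

def belief(subject, claim):
    """Return a degree of belief in (subject, claim), defaulting to 0.5 for 'unknown'."""
    return graded_facts.get((subject, claim), 0.5)

print(boolean_facts[("large car", "poor gas mileage")])  # True, with no room for "often"
print(belief("large car", "poor gas mileage"))           # 0.8: probably, not certainly
print(belief("minivan", "poor gas mileage"))             # 0.5: no evidence either way
```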

The key problem is knowledge representation: how to represent all the knowledge in the textbook in a way that allows the program to reason and apply that knowledge in other areas. Programs are good at running procedures (say, converting pounds to kilograms), and modern programs have gotten better at knowing when to run them (say, a Google search on "32 pounds to kilograms"), but they're still managing the information as fodder for algorithms rather than facts and rules that can be generalized across different situations.
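
To make that distinction concrete, here is a small hypothetical sketch (in Python, and not drawn from AI2's own code): the first function is a fixed procedure for converting pounds to kilograms, while the second fragment stores the same knowledge as an explicit rule a program can inspect, invert, or chain with other rules.

```python
# Hypothetical sketch of the difference between a procedure and a represented fact.

# 1. A procedure: useful, but opaque to any reasoning outside the function itself.
def pounds_to_kg(pounds):
    return pounds * 0.453592

# 2. The same knowledge as an explicit rule the program can reason over:
#    it can run the conversion in either direction or combine it with other rules.
conversion_rules = {
    ("pound", "kilogram"): 0.453592,
}

def convert(value, src, dst):
    if (src, dst) in conversion_rules:
        return value * conversion_rules[(src, dst)]
    if (dst, src) in conversion_rules:            # derive the inverse from the same fact
        return value / conversion_rules[(dst, src)]
    raise ValueError(f"no rule connecting {src} and {dst}")

print(pounds_to_kg(32))                      # 14.51...
print(convert(32, "pound", "kilogram"))      # same answer, but from a declared rule
print(convert(10, "kilogram", "pound"))      # the inverse falls out for free
```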

Having the computer study biology is a way of laying the groundwork for new kinds of learning and reasoning. "How do you build a representation of knowledge that does this?" Etzioni asks. "How do you understand more and more sophisticated language that describes more and more sophisticated things? Can we generalize from biology to chemistry to mathematics?"

“How do you understand more and more sophisticated language that describes more and more sophisticated things?”

That also means getting a grip on the complexity of language itself. Most language doesn't offer discrete pieces of information for computers to piece through; it's full of ambiguity and implied logic. Instead of simple text commands, Etzioni envisions a world where you can ask Siri something like, "Can I carry that TV home, or should I call a cab?" That means a weight calculation, sure — but it also means calculating distance and using spatial reasoning to approximate bulkiness. Siri would have to proactively ask whether the television can fit in the trunk of a cab. Siri would have to know "that TV" refers to the television you were just looking at online, and that carrying it home means a walking trip from the affiliated store to your home. Even worse, Siri would have to know that "can I" refers to a question of advisability, and not whether the trip is illegal or physically impossible.
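
None of today's assistants can do that, but the shape of the question can be caricatured in code. The sketch below is entirely hypothetical and assumes the hard parts are already done: the coreference ("that TV"), the weight, the bulkiness, and the walking distance have somehow been resolved, leaving only the final judgment of advisability.

```python
# Hypothetical caricature of the "can I carry that TV home?" question.
# All thresholds and inputs are invented; resolving them from language and
# context is the actual research problem.

def advise_trip(weight_kg, longest_side_cm, distance_km, cab_trunk_cm=100):
    # Advisability, not possibility: heavy or bulky over a long walk means "call a cab".
    too_heavy_to_carry = weight_kg > 15 and distance_km > 0.5
    too_bulky_to_carry = longest_side_cm > 80
    if too_heavy_to_carry or too_bulky_to_carry:
        if longest_side_cm > cab_trunk_cm:
            return "call a cab, but ask whether the TV fits in the trunk"
        return "call a cab"
    return "you can carry it home"

# "That TV" resolved (somehow) to a 55-inch set, 1.2 km from home:
print(advise_trip(weight_kg=17, longest_side_cm=124, distance_km=1.2))
```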

Making it all work could have huge implications. "What we're really talking about is, what is the user interface metaphor of the 21st century?" Etzioni says. "Speech interaction is very natural. We just need to build the back-end capabilities to power it." If we're going to move into a world of voice commands, we'll need to confront thorny problems of language processing and knowledge representation, problems that we'll be lucky to solve within the decade. But if we can work out an answer, it could power a new generation of smart, screenless tech.


Beyond the Turing test

The side effects of solving these problems could be even more interesting. The same problems of speech, reasoning, and creative thought are what have traditionally kept us from thinking of computers as human. Both Allen and Etzioni are singularity skeptics, but they still see AI2’s biology exams as fundamentally a problem of artificial intelligence.


Contradictions

Pulling data from the web means lots of sources that disagree with each other. Humans know how to combine the sources into a broader understanding, but most existing knowledge architecture isn't built for that.
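
One naive way to picture the problem (not how AI2 or any production system handles it) is to tally the conflicting claims and keep a degree of belief in each answer instead of forcing a single winner:

```python
# Naive, invented example: three web sources disagree about a claim.
# A Boolean store has to pick one; a belief-tracking store keeps the disagreement.
from collections import Counter

source_claims = {
    "site_a": ("mitochondria per cell", "hundreds"),
    "site_b": ("mitochondria per cell", "thousands"),
    "site_c": ("mitochondria per cell", "hundreds"),
}

votes = Counter(answer for _, answer in source_claims.values())
total = sum(votes.values())

# Instead of discarding the minority source, keep every answer with its support.
beliefs = {answer: count / total for answer, count in votes.items()}
print(beliefs)   # {'hundreds': 0.66..., 'thousands': 0.33...}
```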


Implicit Knowledge

When a person reads "a teaspoon of sugar," they know implicitly that teaspoon is a measurement rather than an actual spoon made of sugar — but that kind of contextual knowledge is still hard for machine-learning algorithms to anticipate, especially as permutations grow more complex.

The biology test is AI2's version of the Turing test, a hard metric that defines success. For Turing, the test of machine intelligence was whether a computer could carry on a conversation and convince an impartial observer it was human. For years, it was the gold standard for artificial intelligence, but in recent years it has lost ground to more practical tests. In AI2’s case, the test is the same one we give high school students. If their computer can pass biology, that will mean it's actually reading the textbook and processing the knowledge. It knows biology — at least as well as any high schooler. It understands.

At least we think it does. You have to be careful throwing words like "understand" around artificial intelligence experts, as Etzioni quickly reminds me. "Technically, the word 'understanding' is a reference to your internal mental state, which in truth is not really knowable by me. So the Turing test is about judging whether external behavior clears a certain threshold." Science needs hard tests, not soft conjecture, so the best we can do are imperfect measures like the Turing test or a biology exam. But if a computer can process knowledge as well as a 10th grader, surely that means something about how it's thinking, and how well its circuits can approximate what's happening in my brain.

The machine mind that comes out of AI2 won't behave like a human mind

So… is it conscious? For Etzioni, that's beside the point. He compares the project of machine intelligence to aeronautics. "When they were trying to build machines that could fly, some people said, well, we already have things that fly. They're called birds. Let's build things that look like birds. And other people like the Wright Brothers said, birds have a very different weight ratio and physics and so on, so we're going to build a flying machine but on a very different design." Airplanes don't look or act like birds, but if all you want is a transatlantic flight, it doesn't matter.

Similarly, the machine mind that comes out of AI2 won't behave like a human mind, but it will be doing all the same things — processing knowledge, reasoning across claims, and answering questions. It will be thinking in ways a computer never has before. As long as the inputs and outputs are the same as a human brain's, AI2 isn’t worried about what’s going on inside the box. The team is content to leave that to the philosophers.


“Is that intelligent?”


Understanding metaphors

Sentences like "mitochondria are the power plants of the cell" are common in textbooks written for humans, but programs don't know which properties of a power plant apply in the metaphor, and it's notoriously hard to write a rule set that works consistently.

In the meantime, Etzioni will settle for passing biology. At the moment, the computer is still struggling with fourth-grade biology, and anything requiring a free-form answer is still a bridge too far. Diagrams are particularly difficult, requiring a kind of spatial-metaphorical thinking that's hard to recreate in code. He estimates it will take three to five years to build a reasoning framework strong enough to earn their machine a passing grade. After that, it will be another long wait before they can finish up implementation and start producing work that could be spun off into more practical technologies. For a technology company, that's a long wait — but to a researcher, it's right on the cusp of a breakthrough.

Intelligence never looks like much when you peek under the hood.

And while he's digging through code, it's hard to get caught up in philosophy. As Etzioni is quick to point out, intelligence never looks like much when you peek under the hood. "If I showed you some pulsating goop, and you didn't know this was somebody's brain, and I said to you, 'Look, is that intelligent? Can that possibly be intelligent?'" He laughs, imagining the brain in front of him. "You'd say, 'No, this looks like soup from somewhere I don't want to eat.'"