Google futurist Ray Kurzweil and other experts say chatbot didn't pass Turing Test

High-profile members of the tech community are pushing back against reports over the weekend that a computer had passed the Turing Test for the first time by tricking a group of judges into believing it was human. The feat was performed using a chatbot named Eugene Goostman that pretends to be a 13-year-old writing in a second language. But though the machine clearly passed the test put forward by the competition it was part of, many are arguing that this was not, in fact, a true Turing Test.

"I chatted with the chatbot Eugene Goostman, and was not impressed."

That's because the test doesn't define specific rules, meaning it's up to the public at large to decide whether a computer has actually passed it. Ray Kurzweil, Google's engineering director and a noted futurist, is among those saying that this isn't that moment. In a blog post addressing the reports, Kurzweil quotes an excerpt from his 2005 book, The Singularity Is Near. "Because the definition of the Turing Test will vary from person to person, Turing Test capable machines will not arrive on a single day, and there will be a period during which we will hear claims that machines have passed the threshold," he wrote. "Invariably, these early claims will be debunked by knowledgeable observers, probably including myself."

He does just that now, explaining that restrictions on the test posed big problems. For one, the bot's claim to be a 13-year-old writing in a second language excused major flaws in its responses. Judges were also limited to five minutes of interaction with the machine, raising the chances they would be momentarily fooled. "I chatted with the chatbot Eugene Goostman, and was not impressed," Kurzweil writes. "Eugene does not keep track of the conversation, repeats himself word for word, and often responds with typical chatbot non sequiturs."

New York University cognitive science professor Gary Marcus agrees, writing in The New Yorker that the test wasn't passed by "innovative hardware but simply a cleverly coded piece of software." Marcus writes that the chatbot often resorts to misdirecting the person it's speaking with, using humor to dodge questions it doesn't understand. "It's easy to see how an untrained judge might mistake wit for reality, but once you have an understanding of how this sort of system works, the constant misdirection and deflection becomes obvious, even irritating," Marcus writes. "The illusion, in other words, is fleeting."
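The design Marcus describes dates back decades, to keyword-matching programs like ELIZA. Goostman's actual code was never published, but a rough sketch of the general technique, with hypothetical patterns and deflections invented here for illustration, looks something like this in Python: match a keyword if you can, and change the subject with a joke if you can't.

```python
import random

# Illustrative only: a minimal deflection-style chatbot in the spirit of
# Marcus's description. These patterns and replies are made up; Goostman's
# real implementation was never released.
PATTERNS = {
    "your name": "I am Eugene. And you?",
    "where": "I live in Odessa. Ever been there?",
    "how old": "I'm 13. Is that a problem?",
}

# Canned jokes and subject changes used when nothing matches -- the
# "constant misdirection and deflection" Marcus points to.
DEFLECTIONS = [
    "That's a tricky question! By the way, what do you do for a living?",
    "My guinea pig could answer that better than me :-)",
    "Hmm, let's talk about something else. Do you like music?",
]

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in PATTERNS.items():
        if keyword in text:
            return answer
    # The bot has no understanding of the question: deflect with humor.
    return random.choice(DEFLECTIONS)

if __name__ == "__main__":
    print(reply("What is your name?"))
    print(reply("Explain the significance of the Turing Test."))  # deflects
```

In a five-minute chat, a judge primed to expect a teenager writing in a second language might read those deflections as personality; over a longer conversation, the repetition and non sequiturs give the game away.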

Marc Andreessen, co-founder of Netscape and one of today's biggest names in tech investing, isn't putting much stock in the claims about this chatbot either. "My view is that [the] Turing Test has always been malformed," he writes on Twitter. "Humans are too easy to trick, passing [the] test says almost nothing about software." Techdirt editor Mike Masnick was also among the first to thoroughly call out the test's issues, even noting that other chatbots had previously been reported to pass Turing Tests.

As Kurzweil writes, the fact that there's a test worth arguing about suggests that artificial intelligence is nearing the point where a machine might actually pass a Turing Test, even if these early examples aren't quite what we imagine the final product will look like. There's little doubt that chatbots will continue to trick humans, but there seems to be agreement that AI just isn't capable enough yet to do so without restrictions like those imposed on this recent test.