Even the most advanced chatbots can’t hold a decent conversation, but AI systems are definitely getting better at generating the written word. A new web app provides ample proof, letting anyone enter a text prompt to which AI software will automatically respond.
Enter the start of a made-up news article, and it’ll finish it for you. Ask it a question (by formatting your input like this: “Q: What should I do today?”), and it’ll happily respond.
The site is called TalkToTransformer.com, and it’s the creation of Canadian engineer Adam King. King made the site, but the underlying technology comes from research lab OpenAI. Earlier this year, OpenAI unveiled its new AI language system, GPT-2, and TalkToTransformer is a slimmed-down, accessible version of that same technology, which OpenAI has so far shared only with select scientists and journalists. (The name “transformer” refers to the type of neural network used by GPT-2 and other systems.)
If you want to learn about AI language generation, there’s no better way to understand its huge potential and serious limitations than by playing around with TalkToTransformer.
On the plus side, the model is incredibly flexible. It’s able to recognize a huge variety of inputs, from news articles and stories to song lyrics, poems, recipes, code, and HTML. It can even identify familiar characters from franchises like Harry Potter and The Lord of the Rings.
At the same time, you’ll soon see that, at a fundamental level, the system doesn’t understand language or the world at large. The text it generates has surface-level coherence but no long-term structure. When it writes stories, for example, characters appear and disappear at random, with no consistency in their needs or actions. When it generates dialogue, conversations drift aimlessly from topic to topic. And when it answers a question correctly, the result feels more like luck than skill.
Still, as The Verge explained in our original coverage of GPT-2, this system is hugely impressive. Remember: this is a single algorithm that has learned to generate text by studying a huge dataset scraped from the web and other sources. It learned by looking for patterns in this information, and the result is a surprisingly multitalented system.
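That pattern-learning idea can be sketched in miniature. The snippet below is a toy, not GPT-2: it uses a simple table of word-to-word transitions (a bigram model) as a deliberately tiny stand-in for a transformer network, and a made-up corpus in place of text scraped from the web. But the basic loop is the same: learn from examples which words tend to follow which, then generate by repeatedly sampling a plausible next word.

```python
import random

# Build a toy "language model": for each word in the corpus, record
# which words were observed to follow it. (The corpus is invented for
# illustration; GPT-2 trained on millions of web pages.)
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat saw the dog . the dog saw the cat ."
).split()

follows = {}
for current, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(current, []).append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling an observed successor word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no known successor; stop generating
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

The crucial difference is context: this toy looks only one word back, while GPT-2’s transformer attends to hundreds of preceding tokens at once, which is why it can imitate the shape of a news article, a recipe, or a block of HTML rather than just producing locally plausible word pairs.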
It may not be hard to find gaps in GPT-2’s knowledge, but it’s impossible to know whether you’ve explored the limits of what it can do.