One of the biggest trends in machine learning right now is text generation. These AI systems learn by absorbing billions of words scraped from the internet, then generate text in response to a wide variety of prompts. It sounds simple, but these machines can be put to a wide array of tasks — from creating fiction, to writing bad code, to letting you chat with historical figures.
The best-known AI text-generator is OpenAI’s GPT-3, which the company recently announced is now being used in more than 300 different apps, by “tens of thousands” of developers, and producing 4.5 billion words per day. That’s a lot of robot verbiage. This may be an arbitrary milestone for OpenAI to celebrate, but it’s also a useful indicator of the growing scale, impact, and commercial potential of AI text generation.
OpenAI started life as a nonprofit, but for the last few years it has been trying to make money, with GPT-3 as its first salable product. The company has an exclusivity deal with Microsoft that gives the tech giant unique access to the program’s underlying code, but any firm can apply for access to GPT-3’s general API and build services on top of it.
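To give a sense of how low the barrier is, here is a minimal sketch of what building on the API involves. It assembles the HTTP request a GPT-3 completion call would send; the endpoint and parameter names reflect OpenAI’s public completions API at the time, and the prompt, key, and settings are illustrative — an API key from OpenAI is assumed.

```python
import json
import urllib.request

# OpenAI's completions endpoint (illustrative; check current API docs)
API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, api_key, model="davinci", max_tokens=64):
    """Assemble the HTTP request for a GPT-3 text completion call."""
    payload = {
        "model": model,          # which GPT-3 model variant to use
        "prompt": prompt,        # the text the model continues
        "max_tokens": max_tokens,
        "temperature": 0.7,      # higher values produce more varied output
    }
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )

# Actually sending the request requires a real API key:
# req = build_request("Summarize this customer review: ...", "sk-...")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```

A startup’s entire “AI layer” can amount to little more than requests like this plus some prompt engineering — which is exactly the point the skeptics below are making.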
As OpenAI is keen to advertise, hundreds of companies are now doing exactly this. One startup named Viable is using GPT-3 to analyze customer feedback, identifying “themes, emotions, and sentiment from surveys, help desk tickets, live chat logs, reviews, and more”; Fable Studio is using the program to create dialogue for VR experiences; and Algolia is using it to improve its web search products which it, in turn, sells on to other customers.
All this is good news for OpenAI (and Microsoft, whose Azure cloud computing platform powers OpenAI’s tech), but not everyone in startup-land is keen. Many analysts have noted the folly of building a company on technology you don’t actually own. Using GPT-3 to create a startup is ludicrously simple, but it’ll be ludicrously simple for your competitors, too. And though there are ways to differentiate your GPT startup through branding and UI, no firm stands to gain as much from the use of the technology as OpenAI itself.
Another worry about the rise of text-generating systems relates to issues of output quality. Like many algorithms, text generators have the capacity to absorb and amplify harmful biases. They’re also often astoundingly dumb. In tests of a medical chatbot built using GPT-3, the model responded to a “suicidal” patient by encouraging them to kill themselves. These problems aren’t insurmountable, but they’re certainly worth flagging in a world where algorithms have already contributed to mistaken arrests, unfair school grades, and biased medical billing.
As OpenAI’s latest milestone suggests, though, GPT-3 is only going to keep on talking, and we need to be ready for a world filled with robot-generated chatter.