A recent decision by research lab OpenAI to limit the release of a new algorithm has caused controversy in the AI community.
The nonprofit said it decided not to share the full version of the program, a text-generation algorithm named GPT-2, due to concerns over “malicious applications.” But many AI researchers have criticized the decision, accusing the lab of exaggerating the danger posed by the work and inadvertently stoking “mass hysteria” about AI in the process.
Researchers joked that their latest amazing work was just too dangerous to share
The debate has been wide-ranging and sometimes contentious. It even turned into a bit of a meme among AI researchers, who joked that they’ve had an amazing breakthrough in the lab, but the results were too dangerous to share at the moment. More importantly, it’s highlighted a number of challenges for the community as a whole, including the difficulty of communicating new technologies to the press, and the problem of balancing openness with responsible disclosure.
The program at the center of all the fuss is relatively straightforward. GPT-2 is the latest example of a new class of text-generation algorithms, which are expected to have a big impact in the future. When fed a prompt like a headline or the first line of a story, GPT-2 produces text that matches the input. The results are varied but often surprisingly coherent. Fabricated news stories, for example, closely mimic the tone and structure of real articles, complete with invented statistics and quotations from made-up sources.
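GPT-2 itself is a large neural network whose full weights were withheld, but the basic idea of prompt-conditioned text generation — predict a plausible next word, append it, repeat — can be illustrated with a toy word-level Markov chain. This is a minimal sketch for illustration only; the corpus, function names, and sampling scheme are invented here and bear no resemblance to OpenAI’s actual model:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, prompt, length=10, seed=0):
    """Continue a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# A tiny made-up "training corpus" — real systems train on millions of documents.
corpus = ("the model writes text . the model reads text . "
          "the text looks coherent .")
model = build_bigram_model(corpus)
print(generate(model, "the model"))
```

The output is locally plausible but carries no understanding — the same gap, at a vastly smaller scale, that separates GPT-2’s fluent text from genuine comprehension.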
It is, in many ways, a fun tool with the power to delight and surprise. But it doesn’t have anywhere near the ability of humans to comprehend and produce text. It generates text but doesn’t understand it. OpenAI and outside experts agree that it’s not a breakthrough per se, but rather a brilliantly executed example of what cutting-edge text generation can do.
OpenAI’s reasons for restricting the release include the potential for programs like GPT-2 to create “misleading news articles” as well as automate spam and abuse. For this reason, while the researchers published a paper describing the work, along with a “much smaller” version of the program, they withheld the training data and full model. In the usually open-by-default world of AI research, where code, data, and models are shared and discussed widely, the move and OpenAI’s reasoning have attracted a lot of attention.
Some examples of GPT-2 responding to text prompts.
The arguments against OpenAI’s decision
Criticism has revolved around a few key points. First, by withholding the model, OpenAI is stopping other researchers from replicating their work. Second, the model itself doesn’t pose as great a threat as OpenAI says. And third, OpenAI didn’t do enough to counteract the media’s tendency to hype and distort this sort of AI news.
The first point is pretty straightforward. Although machine learning is a relatively democratic field in which lone researchers can deliver surprising breakthroughs, recent years have seen an increasing emphasis on resource-intensive research. Algorithms like GPT-2 are created using huge amounts of computing power and big datasets, both of which are expensive. The argument goes that if well-funded labs like OpenAI don’t share their results, it impoverishes the rest of the community.
Academics can’t compete if big labs withhold their work
“It’s put academics at a big disadvantage,” Anima Anandkumar, an AI professor at Caltech and director of machine learning research at Nvidia, told The Verge. In a blog post, Anandkumar said OpenAI was effectively using its clout to “make ML research more closed and inaccessible.” (And in a tweet responding to OpenAI’s announcement, she was even more candid, calling the decision “Malicious BS.”)
Others in the field echo this criticism, arguing that, in the case of potentially harmful research, open publication is even more important, as other researchers can look for faults in the work and come up with countermeasures.
Speaking to The Verge, OpenAI research scientist Miles Brundage, who works on the societal impact of artificial intelligence, said the lab was “acutely aware” of this sort of trade-off. He said via email that the lab was considering ways to “alleviate” the problem of limited access, by inviting more individuals to test the model, for example.
Anandkumar, who stressed that she was speaking in a personal capacity, also said that OpenAI’s rationale for withholding the model didn’t add up. Although the computing power needed to re-create the work is beyond the reach of most academics, it would be relatively easy for any determined or well-funded group to get. This would include those who might benefit from abusing the algorithm, like nation states organizing online propaganda campaigns.
The threat of AI being used to automate the creation of spam and misinformation is real, says Anandkumar, “but I don’t think limiting access to this particular model will solve the problem.”
Delip Rao, an expert in text generation who’s worked on projects to detect fake news and misinformation using AI, agrees that the threats OpenAI describes are exaggerated. He notes that, with fake news, for example, the quality of the text is rarely a barrier, as much of this sort of misinformation is made by copying and pasting bits of other stories. “You don’t need fancy machine learning for that,” says Rao. And when it comes to evading spam filters, he says, most systems rely on a range of signals, including things like a user’s IP address and recent activity — not just checking to see if the spammer is writing cogently.
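Rao’s point about spam filters can be sketched as a toy scoring function: even a perfectly fluent message can still be flagged when its other signals look bad. The signal names, weights, and threshold below are entirely invented for illustration — real filters combine far more signals in far more sophisticated ways:

```python
def spam_score(text_quality, ip_reputation, burst_rate):
    """Combine several signals (each in [0, 1]) into a single spam score.

    Text quality is deliberately given the smallest weight, mirroring the
    point that fluency alone doesn't get a message past a filter.
    """
    return (0.2 * (1 - text_quality)    # how badly written the message is
            + 0.5 * (1 - ip_reputation) # how untrusted the sender's IP is
            + 0.3 * burst_rate)         # how bursty the recent send pattern is

def is_spam(score, threshold=0.5):
    """Flag anything whose combined score exceeds the threshold."""
    return score > threshold

# A perfectly fluent message from a bad IP with bursty sending still gets flagged.
print(is_spam(spam_score(text_quality=1.0, ip_reputation=0.0, burst_rate=1.0)))
```

Under this (made-up) weighting, flawless prose contributes nothing toward evading the filter, which is Rao’s argument for why better text generation alone does little for spammers.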
“The words ‘too dangerous’ were casually thrown out here.”
“I’m aware that models like [GPT-2] could be used for purposes that are unwholesome, but that could be said of any similar model that’s released so far,” says Rao, who also wrote a blog post on the topic. “The words ‘too dangerous’ were casually thrown out here without a lot of thought or experimentation. I don’t think [OpenAI] spent enough time proving it was actually dangerous.”
Brundage says the lab consulted with outside experts to gauge the risks, but he stressed that OpenAI was making a broader case about the dangers of increasingly sophisticated text-generation systems, not about GPT-2 specifically.
“We understand why some saw our announcement as exaggerated, though it’s important to distinguish what we said from what others said,” he wrote. “We tried to highlight both the current capabilities of GPT-2 as well as the risks of a broader class of systems, and we should have been more precise on that distinction.”
Brundage also notes that OpenAI wants to err on the side of caution, and he says that releasing the full models would be an “irreversible” move. In an interview with The Verge last week, OpenAI’s policy director compared the technology to the face-swapping algorithms used to create deepfakes. These were released as open-source projects and were soon swept up by individuals around the world for their own uses, including the creation of non-consensual pornography.
The difficulty of AI media hype
While debates over the dangers of text-generation models and academic access have no obvious conclusion, the problem of communicating new technologies to the public is even thornier, say researchers.
Did OpenAI’s approach stoke misinformed coverage?
Critics of OpenAI’s approach noted that the “too dangerous to release” angle became the focus of much coverage, providing a juicy headline that obscured the actual threat posed by the technology. Headlines like “Elon Musk’s OpenAI builds artificial intelligence so powerful it must be kept locked up for the good of humanity” were common. (Elon Musk’s association with OpenAI is a long-standing bugbear for the lab. He co-founded the organization in 2015 but reportedly had little direct involvement and resigned from its board last year.)
Although getting frustrated about bad coverage of their field is hardly a new experience for scientists, the stakes are particularly high when it comes to AI research. This is partly because public conceptions about AI are so out of line with actual capabilities, but it’s also because the field is grappling with issues like funding and regulation. If the general public becomes unduly worried about AI, could it lead to less meaningful research?
In this light, some researchers say that OpenAI’s strategy for GPT-2 actively contributed to bad narratives. They also blame reporters for failing to put the work in its proper context. “I feel the press was primed with the narrative OpenAI set them, and I don’t think that’s a very objective way to create reporting,” says Rao. He also noted that the embargoed nature of the work (where reporters write their stories in advance and publish them at the same time) contributed to the distortion.
Anandkumar says: “I have deep admiration for the people who work [at OpenAI] and this is interesting work but it doesn’t warrant this type of media attention [...] It’s not healthy for the community and it’s not healthy for the public.”
OpenAI says it did its best to preemptively combat this hype, stressing the limitations of the system to journalists and hoping they would find faults themselves when experimenting with the program. “We know the model sometimes breaks, and we told journalists this, and we hoped their own experience with it would lead to them noting the places where it breaks,” said Brundage. “This did happen, but perhaps not to the same extent we imagined.”
Other AI labs have grappled with similar problems
Although OpenAI’s decision to restrict the release of GPT-2 was unconventional, some labs have gone even further. The Machine Intelligence Research Institute (MIRI), for example, which is focused on mitigating threats from AI systems, became “nondisclosed-by-default” as of last November, and it won’t publish research unless there’s an “explicit decision” to do so.
The lab laid out a number of reasons for this in a lengthy blog post, but it said it wanted to focus on “deconfusion” — that is, making the terms of the debate over AI risk clear before it engaged more widely on the subject. It approvingly quoted a board member who described MIRI as “sitting reclusively off by itself, while mostly leaving questions of politics, outreach, and how much influence the AI safety community has, to others.”
This is a very different approach from OpenAI’s: even while limiting the release of the model, the lab has certainly done its best to engage in wider questions.
Brundage says that, despite the criticism, OpenAI thinks it “broadly” made the right decision, and there would likely be similar cases in the future where “concerns around safety or security limit our publication of code/models/data.” He notes that, ultimately, the lab thinks it’s better to have the discussion before the threats emerge than after, even if critics disagree with its methods of doing so.
He adds: “There are so many moving parts to this decision that we mostly view it as: did we do something that helps OpenAI deal better with this class of problems in the future? The answer to that is yes. As models get increasingly powerful, more and more organizations will need to think through these issues.”