FTC should stop OpenAI from launching new GPT models, says AI policy group

The Center for AI and Digital Policy filed a complaint arguing that GPT-4 violates the FTC’s rules against unfair and deceptive practices.

Illustration: The Verge

An artificial intelligence-focused tech ethics group has asked the Federal Trade Commission to investigate OpenAI for violating consumer protection rules, arguing that the organization’s rollout of AI text generation tools has been “biased, deceptive, and a risk to public safety.”

The Center for AI and Digital Policy (CAIDP) filed its complaint today, following the publication of a high-profile open letter calling for a pause on large generative AI experiments. CAIDP president Marc Rotenberg was one of the letter’s signatories, alongside a number of AI researchers and OpenAI co-founder Elon Musk. Like that letter, the complaint calls for slowing the development of generative AI models and implementing stricter government oversight.

The CAIDP complaint points out potential threats from OpenAI’s GPT-4 generative text model, which was announced in mid-March. They include ways that GPT-4 could produce malicious code and highly tailored propaganda, as well as ways that biased training data could result in baked-in stereotypes or unfair race and gender preferences in things like hiring. It also points out significant privacy failures with OpenAI’s product interface — like a recent bug that exposed users’ ChatGPT conversation histories and possibly payment details to other users.

“OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks.”

OpenAI has openly noted potential threats from AI text generation, but CAIDP argues that GPT-4 crosses a line of consumer harm that should draw regulatory action. It seeks to hold OpenAI liable for violating Section 5 of the FTC Act, which prohibits unfair and deceptive trade practices. “OpenAI released GPT-4 to the public for commercial use with full knowledge of these risks,” including potential bias and harmful behavior, the complaint claims. It also defines AI hallucinations, or the phenomenon of generative models confidently making up nonexistent facts, as a form of deception. “ChatGPT will promote deceptive commercial statements and advertising,” it warns — potentially bringing it under the FTC’s purview.

In the complaint, CAIDP asks the FTC to halt any further commercial deployment of GPT models and require independent assessments of the models before any future rollouts. It also asks for a publicly accessible reporting tool similar to the one that allows consumers to file fraud complaints. And it seeks formal FTC rulemaking for generative AI systems, building on the agency’s ongoing but still relatively informal research and evaluation of AI tools.

As CAIDP notes, the FTC has expressed interest in regulating AI tools. It has warned in recent years that biased AI systems could draw enforcement action, and at a joint event with the Department of Justice this week, FTC Chair Lina Khan said the agency would be looking for signs of large incumbent tech companies trying to lock out competition. But an investigation of OpenAI — one of the major players in the generative AI arms race — would mark a major escalation in the agency’s efforts.