OpenAI has been hit with what appears to be the first defamation lawsuit responding to false information generated by ChatGPT.
A radio host in Georgia, Mark Walters, is suing the company after ChatGPT stated that Walters had been accused of defrauding and embezzling funds from a non-profit organization. The system generated the information in response to a request from a third party, a journalist named Fred Riehl. Walters’ case was filed June 5th in Georgia’s Superior Court of Gwinnett County and he is seeking unspecified monetary damages from OpenAI.
The case is notable given widespread complaints about false information generated by ChatGPT and other chatbots. These systems have no reliable way to distinguish fact from fiction, and when asked for information — particularly if asked to confirm something the questioner suggests is true — they frequently invent dates, facts, and figures.
Usually, these fabrications do nothing more than mislead users or waste their time. But cases are beginning to emerge of such errors causing harm. These include a professor threatening to flunk his class after ChatGPT claimed his students used AI to write their essays, and a lawyer facing possible sanctions after using ChatGPT to research fake legal cases. The lawyer in question recently told a judge: “I heard about this new site, which I falsely assumed was, like, a super search engine.”
OpenAI includes a small disclaimer on ChatGPT’s homepage warning that the system “may occasionally generate incorrect information,” but the company also presents ChatGPT as a source of reliable data, describing the system in ad copy as a way to “get answers” and “learn something new.” OpenAI’s own CEO Sam Altman has said on numerous occasions that he prefers learning new information from ChatGPT rather than from books.
It’s not clear, though, whether there is legal precedent for holding a company responsible for false or defamatory information generated by its AI systems, or whether this particular case has substantial merit.
Traditionally in the US, Section 230 shields internet firms from legal liability for information produced by a third party and hosted on their platforms. It’s unknown whether these protections apply to AI systems, which do not simply link to data sources but generate information anew (a process which also leads to their creation of false data).
The defamation lawsuit filed by Walters in Georgia could test this framework. The case states that a journalist, Fred Riehl, asked ChatGPT to summarize a real federal court case by linking to an online PDF. ChatGPT responded by creating a false summary of the case that was detailed and convincing but wrong in several regards. The summary contained some factually correct information alongside false allegations against Walters. It said Walters was believed to have misappropriated funds from a gun rights non-profit called the Second Amendment Foundation “in excess of $5,000,000.” Walters has never been accused of this.
Riehl never published the false information generated by ChatGPT but checked the details with another party. It’s not clear from the case filings how Walters then found out about this misinformation.
Notably, despite appearing to comply with Riehl’s request to summarize a PDF, ChatGPT is not actually able to access such external data without the use of additional plug-ins. The system’s failure to alert Riehl to this limitation is an example of its capacity to mislead users. (Although, when The Verge tested the system today on the same task, it responded clearly and informatively, saying: “I’m sorry, but as an AI text-based model, I don’t have the ability to access or open specific PDF files or other external documents.”)
Eugene Volokh, a law professor who has written on the legal liability of AI systems, noted in a blog post that although he thinks “such libel claims [against AI companies] are in principle legally viable,” this particular lawsuit “should be hard to maintain.” Volokh notes that Walters did not notify OpenAI about these false statements, giving them a chance to remove them, and that there have been no actual damages as a result of ChatGPT’s output. “In any event, though, it will be interesting to see what ultimately happens here,” says Volokh.
We’ve reached out to OpenAI for comment and will update this story if we hear back.