OpenAI Wins Defamation Case Over Inaccurate ChatGPT Output
Muhammed Razik
5 July 2025 10:39 AM IST
The Superior Court of Gwinnett County, Georgia, issued a ruling granting summary judgment in favor of OpenAI, the developer of the AI chatbot ChatGPT. The case involved Mark Walters, a nationally known radio host and Second Amendment advocate, who filed a defamation claim after ChatGPT generated a false statement suggesting he was involved in embezzlement. The court, presided over by Judge Tracie Cason, dismissed the claim, finding in favor of OpenAI and clarifying the legal standards that apply to AI-generated content.
Background
The case originated when Frederick Riehl, editor of AmmoLand.com, a news and advocacy site focused on gun rights in the US, asked ChatGPT to summarize a press release concerning a lawsuit filed by the Second Amendment Foundation (SAF) against the Washington State Attorney General. When Riehl directly copy-pasted the text of the press release, ChatGPT provided an accurate summary. However, when he submitted a URL link to the press release instead, the model returned inaccurate information, falsely stating that Mark Walters, a prominent radio show host and gun rights advocate, had been accused of embezzlement. A different inaccurate response was generated when the same URL was pasted a second time. Walters then filed a defamation lawsuit against OpenAI, alleging that the model had produced false and defamatory statements about him.
Observations by the Court
The court held that, in order to establish defamation, the statement must be understood by a reasonable reader as an actual statement of fact about the plaintiff. Moreover, any accompanying disclaimers and the broader context must be taken into account to properly assess a defamation claim. ChatGPT's output was accompanied by several disclaimers, including Terms of Use noting the model's susceptibility to producing inaccurate information (commonly referred to as "hallucinations") and its knowledge cutoff of September 2021. Riehl, an experienced user whom the record showed to be aware of such hallucinations, had himself verified that ChatGPT's output was inaccurate. Relying on precedent, the court held that a prudent reader would not have interpreted the response as a statement of fact. Riehl's access to the actual complaint and the SAF press release further undermined any defamatory interpretation.
The court reiterated that, in order to succeed in a defamation case, a plaintiff must prove that the defendant was at fault, and the level of proof required depends on the plaintiff's public status: private individuals face a lower threshold, while public figures must meet a higher one. The court determined that Walters, given his prominent media presence through his talk show and his role in gun rights advocacy, was a public figure subject to the higher threshold of proof. Even under the lower negligence standard, however, the plaintiff was unable to establish the standard of care that OpenAI was required to meet. The court rejected the plaintiff's argument that releasing a product with a known propensity for error is itself negligent, observing that such conduct alone does not warrant sanction. Dismissing the suit, the court held that the plaintiff had failed to show that OpenAI knew the information was false or acted with reckless disregard for its truth. The court held that
“Therefore Walters, regardless of whether he qualifies as a public figure or a private figure, cannot obtain either presumed or punitive damages unless he can show ‘actual malice.’”
The court noted that OpenAI had made efforts to limit inaccuracies and concluded that general awareness of AI's limitations is not enough to meet the high legal standard required to prove malice.
Case Title: Walters v. OpenAI