OpenAI Faces Defamation Lawsuit as ChatGPT Generates False Accusations Against Radio Host

K.C. Sabreena Basheer | Last Updated: 12 Jun, 2023

OpenAI, the renowned artificial intelligence company, is now grappling with a defamation lawsuit stemming from false information fabricated by its language model, ChatGPT. Mark Walters, a radio host in Georgia, has filed the suit after ChatGPT falsely accused him of defrauding and embezzling funds from a non-profit organization. The incident raises concerns about the reliability of AI-generated information and the harm it can cause. This groundbreaking lawsuit has attracted significant attention amid growing instances of AI-generated misinformation and the unresolved questions of legal responsibility they raise.

Radio host Mark Walters has filed a defamation lawsuit against OpenAI as its AI chatbot ChatGPT generated false accusations against him.

The Allegations: ChatGPT’s Fabricated Claims Against Mark Walters

In the lawsuit, Mark Walters accuses OpenAI of generating false accusations against him through ChatGPT. According to the complaint, a journalist named Fred Riehl asked ChatGPT to summarize a real federal court case by providing a link to an online PDF. Instead, ChatGPT produced a detailed and convincing but fabricated summary, riddled with inaccuracies, that defamed Walters.

The Growing Concerns of Misinformation Generated by AI

False information generated by AI systems like ChatGPT has become a pressing issue. These systems lack a reliable method to distinguish fact from fiction. They often produce fabricated dates, facts, and figures when asked for information, especially if prompted to confirm something already suggested. While these fabrications mostly mislead or waste users’ time, there are instances where such errors have caused harm.
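
To see how a leading prompt can coax a model into “confirming” a fabricated premise, here is a minimal sketch. It assumes the pre-1.0 `openai` Python client (the interface current in mid-2023) and an API key in the `OPENAI_API_KEY` environment variable; the case name in the prompt is hypothetical.

```python
import os

import openai  # pre-1.0 client interface (e.g., openai==0.27.x)

openai.api_key = os.environ["OPENAI_API_KEY"]

# A leading prompt that presupposes a fabricated premise. "Walters v. Example Corp"
# is a hypothetical case name used purely for illustration.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Confirm the key holdings of Walters v. Example Corp (2021).",
    }],
)

# Without grounding in real documents, the model may invent plausible-sounding
# "holdings" instead of replying that the case does not exist.
print(response["choices"][0]["message"]["content"])
```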

Also Read: EU Calls for Measures to Identify Deepfakes and AI Content

Real-World Consequences: Misinformation Leads to Harm

Cases where AI-generated misinformation causes real harm are raising serious concerns. For instance, a professor threatened to fail his students after ChatGPT falsely claimed they had used AI to write their essays. In another case, a lawyer faced possible sanctions after submitting legal research containing non-existent cases that ChatGPT had invented. These incidents highlight the risks of relying on AI-generated content.

Also Read: Lawyer Fooled by ChatGPT’s Fake Legal Research

OpenAI's ChatGPT creates alternative facts, causing real-life problems.

OpenAI’s Responsibility and Disclaimers

OpenAI includes a small disclaimer on ChatGPT’s homepage, acknowledging that the system “may occasionally generate incorrect information.” However, the company also promotes ChatGPT as a reliable source, encouraging users to “get answers” and “learn something new.” OpenAI’s CEO, Sam Altman, has even said he prefers learning from ChatGPT to reading books. This raises questions about the company’s responsibility for the accuracy of the information its system generates.

Also Read: How Good Are Human-Trained AI Models for Training Humans?

Determining companies’ legal liability for false or defamatory information generated by AI systems presents a challenge. In the US, internet firms are traditionally protected by Section 230 of the Communications Decency Act, which shields them from legal responsibility for third-party content hosted on their platforms. Whether those protections extend to AI systems that generate information themselves, including false information, remains uncertain.

Also Read: China’s Proposed AI Regulations Shake the Industry

Mark Walters’ defamation lawsuit, filed in Georgia, could challenge that existing legal framework. According to the complaint, journalist Fred Riehl asked ChatGPT to summarize a PDF, and ChatGPT responded with a false but convincing summary. Although Riehl did not publish the false information, he checked the details with another party, which is how Walters learned of the fabricated claims. The lawsuit asks whether OpenAI can be held accountable for such incidents.

Concerns are rising about the authenticity of AI-generated content as systems like ChatGPT produce false information.

ChatGPT’s Limitations and User Misdirection

Notably, ChatGPT cannot access external data, such as a linked PDF, without additional plug-ins. Yet it complied with Riehl’s request rather than flagging that limitation, which is precisely how it can mislead users. When tested subsequently, ChatGPT responded differently, clearly stating its inability to access specific PDF files or external documents.
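
As a rough illustration of that limitation, the sketch below (same assumed pre-1.0 `openai` client and API key) passes a PDF link to the bare chat API. Because the model has no browsing or retrieval plug-in here, the URL is just text to it, and any detailed “summary” it returns cannot actually come from the linked document; the URL itself is a placeholder.

```python
import os

import openai  # pre-1.0 client interface (e.g., openai==0.27.x)

openai.api_key = os.environ["OPENAI_API_KEY"]

# Placeholder URL: the bare chat endpoint cannot fetch it. The model only ever
# sees the prompt text, never the contents of the linked PDF.
pdf_url = "https://example.com/court-filing.pdf"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": f"Summarize the federal court case at this link: {pdf_url}",
    }],
)

# An honest response declines; a detailed summary here would be fabricated,
# which is the failure mode at the heart of the Walters lawsuit.
print(response["choices"][0]["message"]["content"])
```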

Also Read: Build a ChatGPT for PDFs with Langchain

Eugene Volokh, a law professor specializing in AI system liability, believes that libel claims against AI companies are legally viable in theory. However, he argues that Walters’ lawsuit may face challenges. Volokh notes that Walters did not notify OpenAI about the false statements, depriving them of an opportunity to rectify the situation. Furthermore, there is no evidence of actual damages resulting from ChatGPT’s output.

Our Say

OpenAI is entangled in a groundbreaking defamation lawsuit after ChatGPT generated false accusations against radio host Mark Walters. The case highlights escalating concerns about AI-generated misinformation and its consequences. As questions of legal precedent and accountability for AI systems come to the fore, the outcome of this lawsuit may shape the future landscape of AI-generated content and the responsibility of companies like OpenAI.

Sabreena Basheer is an architect-turned-writer who's passionate about documenting anything that interests her. She's currently exploring the world of AI and Data Science as a Content Manager at Analytics Vidhya.
