ChatGPT Investigated by the Federal Trade Commission for Potential Harm

K.C. Sabreena Basheer | Last Updated: 17 Jul, 2023 | 4 min read

In a significant development, the Federal Trade Commission (FTC) has opened an investigation into OpenAI, the prominent artificial intelligence (AI) startup behind ChatGPT. The investigation centers on allegations of consumer harm resulting from the company’s data collection practices and the dissemination of false information by the AI-powered chatbot. Amid mounting concerns over AI’s impact on society, the investigation marks a crucial moment in assessing the risks associated with the technology.

Also Read: OpenAI Faces Defamation Lawsuit as ChatGPT Generates False Accusations Against Radio Host

FTC Probes OpenAI’s Data Collection and Security Practices

The FTC recently sent OpenAI a detailed 20-page letter raising concerns about the company’s security practices and its handling of personal data. The agency has requested comprehensive information, including details about how the company trains its AI models and how it collects and treats personal data. The FTC aims to determine whether OpenAI has engaged in unfair or deceptive practices, particularly regarding privacy, data security, and potential consumer harm.

Also Read: OpenAI and Meta Sued for Copyright Infringement

OpenAI Faces Regulatory Threat in the United States

The FTC’s investigation marks the first major regulatory challenge for OpenAI in the United States. As one of the most prominent AI companies, OpenAI now faces the heightened scrutiny spreading across the AI industry. The investigation reflects mounting concerns as AI-powered products become more prevalent, threatening human employment and facilitating the spread of disinformation.

Also Read: U.S. Congress Takes Action: Two New Bills Propose Regulation on Artificial Intelligence

OpenAI Acknowledges the Importance of Safety and Compliance

In response to the investigation, OpenAI’s CEO, Sam Altman, emphasized how critical it is to ensure the safety of the company’s technology. Altman expressed confidence that the firm complies with the law and said it would cooperate with the agency’s investigation. OpenAI recognizes the importance of maintaining transparency and abiding by regulations to address the potential risks associated with AI.

Also Read: “We Will Fix the Hallucination Problem,” Says Sam Altman

OpenAI’s International Regulatory Challenges

OpenAI has already faced regulatory scrutiny beyond the United States. In March, the Italian data protection authority banned ChatGPT, citing the unlawful collection of personal data and the absence of an age-verification system to safeguard minors from inappropriate content. OpenAI restored access to the system after making the requested changes. These international pressures further underscore the need for comprehensive scrutiny and responsible deployment of AI technology.

Also Read: OpenAI and DeepMind Collaborate with UK Government to Advance AI Safety and Research

FTC’s Swift Action Indicates the Urgency for AI Regulation

The FTC’s prompt initiation of an investigation against OpenAI demonstrates a sense of urgency surrounding AI regulation. The agency’s swift response, less than a year after the introduction of ChatGPT, signifies the necessity of evaluating and monitoring AI technology during its nascent stages. FTC Chair Lina Khan has consistently emphasized the need for proactive regulation in the face of evolving AI risks.

Also Read: ChatGPT Makes Laws to Regulate Itself

OpenAI’s Potential Disclosure of Building Methods and Data Sources

As part of the investigation, OpenAI may be required to disclose its methods for developing ChatGPT and the data sources used to train its AI systems. While OpenAI previously shared such information openly, it has recently become more guarded, possibly due to concerns about competitors replicating its work and potential legal implications associated with specific data sets.

Also Read: All Your Online Posts Now Belong to the AI, States Google

Advocacy Group’s Concerns Amplify the Investigation

The Center for AI and Digital Policy, an organization advocating for ethical technology use, filed a complaint with the FTC requesting that OpenAI be prevented from releasing new commercial versions of ChatGPT. The complaint raised concerns about bias, disinformation, and security risks associated with the chatbot. OpenAI has been actively refining ChatGPT to minimize biased or harmful outputs, and ongoing user feedback also plays a crucial role in the system’s improvement.

Also Read: PoisonGPT: Hugging Face LLM Spreads Fake News

Our Say

The FTC’s investigation into OpenAI’s ChatGPT raises important questions about data privacy, consumer protection, and AI’s potential to cause harm. As AI technology rapidly advances, it becomes imperative to establish regulations and guidelines that promote responsible development and deployment. OpenAI’s cooperation with the investigation and the FTC’s thorough evaluation of the company’s practices will shape the future of AI governance and its impact on society.

Sabreena Basheer is an architect-turned-writer who's passionate about documenting anything that interests her. She's currently exploring the world of AI and Data Science as a Content Manager at Analytics Vidhya.
