OpenAI Raises Concerns Over EU’s AI Regulations, Threatens to Cease Operating in Europe

Yana Khare | Last Updated: 26 May, 2023

OpenAI, the renowned artificial intelligence company behind ChatGPT, has expressed its worries about the European Union’s proposed AI Act. During a recent visit to London, OpenAI’s CEO, Sam Altman, voiced his concerns and threatened to cease operating in Europe, warning that if compliance with the regulations becomes untenable, the company may be forced to withdraw its services from the EU. Given the potential implications for OpenAI’s operations in Europe, this development has sparked significant debate and attention. Let’s delve into the details of the EU’s AI Act and explore the potential consequences of OpenAI’s stance.

Also Read: OpenAI CEO Urges Lawmakers to Regulate AI Considering AI Risks

The EU’s Proposed AI Act and its Objectives


The European Union has embarked on a groundbreaking initiative with its proposed AI Act, widely hailed as the first major piece of AI regulation worldwide. The act primarily focuses on regulating artificial intelligence systems and safeguarding European citizens from potential AI-related risks. The European Parliament has already signalled overwhelming support, with a vote to adopt the AI Act tentatively scheduled for June 14.

Also Read: White House Calls Tech Tycoons Meet to Address the AI Threat

The Three Risk Categories Defined by the AI Act

To address varying degrees of risk associated with AI systems, the AI Act proposes classifying them into three distinct categories:


A. Highest Risk Category:

The AI Act explicitly prohibits using AI systems that pose an unacceptable risk, such as those resembling the government-run social scoring systems observed in China. Such prohibitions aim to preserve individual privacy, prevent potential discrimination, and protect against harmful social consequences.

B. High-Risk Category:

The second category covers AI systems subject to specific legal requirements. An example cited in the act is the use of AI systems to scan resumes and rank job applicants. By imposing these legal obligations, the EU intends to ensure fairness and transparency in AI-driven employment practices.

C. Largely Unregulated Category:

AI systems that are neither explicitly banned nor listed as high-risk fall into this category and would remain largely unregulated. This approach preserves flexibility and leaves room for innovation and development in AI technologies.

Also Read: Europe Considers AI Chatbot Bans Following Italy’s Block of ChatGPT

Tech Companies’ Pleas for Caution and Balance


Numerous US tech companies, including OpenAI and Google, have appealed to Brussels for a more balanced approach to AI regulation. They argue that Europe should allocate sufficient time to study and understand the technology’s intricacies so that it can effectively weigh the opportunities against the risks. Sundar Pichai, Google’s CEO, recently met with key EU officials to discuss AI policy, emphasizing the importance of regulations that encourage innovation rather than stifle progress.

During his visit to London, Sam Altman expressed serious concerns about OpenAI’s ability to comply with the AI Act’s provisions. While Altman acknowledged the company’s intention to comply, he stated that if compliance became impossible, OpenAI would have no choice but to cease operations in Europe. Interestingly, Altman has also advocated for establishing a government agency to oversee large-scale AI projects. He believes such an agency should grant licenses to AI companies and have the authority to revoke them if safety rules are breached.

Also Read: OpenAI Leaders Write About The Risk Of AI, Suggest Ways To Govern

Transparency and Safety Considerations

AI Act | EU | Sam Altman | OpenAI Threatens to Cease Operating in Europe

OpenAI has faced criticism for its lack of transparency regarding its AI models. The recent release of GPT-4 generated disappointment within the AI community because of the absence of information about the model’s training data, cost, and creation process. Ilya Sutskever, OpenAI’s co-founder and chief scientist, defended this position by citing competition and safety concerns, emphasizing the collaborative effort and time required to develop such models. He added that safety considerations would become even more crucial going forward.

Our Say

As the debate surrounding AI regulation intensifies, OpenAI’s threat to cease operating in Europe over the EU’s AI Act has generated significant attention. The proposed legislation aims to strike a balance between regulation and innovation, ensuring AI systems do not pose undue risks while leaving room for progress. OpenAI’s concerns highlight the challenges companies face when navigating complex regulatory frameworks. As the adoption of the AI Act approaches, discussions on the future of AI regulation will undoubtedly continue to captivate the tech community and policymakers alike.

