OpenAI threatens to pull ChatGPT out of the European Union
At a conference in London, the CEO of OpenAI made it clear that if ChatGPT cannot comply with the new European law, the company will stop operating in the region.

Sam Altman, the CEO of OpenAI, does not want to operate in the European Union if its laws are too authoritarian. At least that is what he confirmed at a conference at University College London, where he discussed the new regulation looming over AI systems like ChatGPT in the EU.

The CEO of what is currently the best-known AI platform warned that OpenAI intends to “stop operating” in the European Union if it cannot comply with the provisions of the new artificial intelligence legislation now under discussion within the Union. That said, OpenAI’s stated intention is to “try to comply.”

Altman said he had met with European Union regulators to discuss details of the AI law during his recent tour of Europe, which also took him to Spain. He also warned that the company has “many” criticisms of the law’s current wording.

One of the company’s major concerns about the current European proposal is its designation of high-risk systems: as currently worded, it would classify AI models such as ChatGPT and OpenAI’s GPT-4 as “high-risk.”
Europe considers systems like ChatGPT to be “high-risk.”
One of the key points of the upcoming European regulation concerns what the draft calls generative foundation models. The current draft requires these models, which include systems like ChatGPT and are likewise treated as high-risk, to meet additional transparency requirements: disclosing that content was generated by AI, designing the model to prevent the generation of illegal content, and publishing summaries of the copyrighted data used for training. These requirements are much stricter than those imposed in other regions such as the US.

While this is an important step for user privacy and security, it would significantly affect how these AI systems operate. OpenAI has argued that its general-purpose systems are not inherently high-risk, and that if it cannot comply in Europe, it will stop operating there:
“If we can comply, we will, and if we can’t, we will stop operating… We will try. But there are technical limits to what is possible.”
According to Altman, the law itself is not inherently flawed, but he believes that “subtle details really matter.” For him, the solution is to find a middle ground between “the traditional European approach and the traditional American approach.”