OpenAI CEO Sam Altman warned that the company may pull its operations out of the European Union if the bloc follows through with its planned regulations on artificial intelligence (AI), after calling on Congress to put forward regulations on the use of AI in the U.S.
The EU is planning to require companies with generative AI products like OpenAI’s ChatGPT to disclose the use of copyrighted material in training their AI platforms to generate images and text in response to users’ prompts. The proposal would also require generative AI systems to inform users that content was generated by AI rather than by humans.
“The current draft of the EU AI Act would be over-regulating, but we have heard it’s going to get pulled back,” Altman told Reuters at an event in London. “They are still talking about it.
“There’s so much they could do, like changing the definition of general purpose AI systems. There’s a lot of things that could be done.”
The EU Parliament is considering revisions to the law, and the next step in the process is a Parliament vote on a negotiating draft between June 12 and 15. The EU Parliament can then negotiate the final version of the law with the Council of the European Union.
Under the current proposal, the EU would designate certain generative AI platforms as “high risk” if they are intended for use in biometric identification of humans, employment recruitment and evaluation, educational and vocational training, management of critical infrastructure, law enforcement, immigration and more.
Altman told Time that it would be impossible for OpenAI to comply with all the requirements of the current version of the EU AI Act and that the eventual law’s definition of “high risk” could prove problematic for the company’s ongoing operations in the EU.
“If we can comply, we will, and if we can’t, we’ll cease operating. … We will try. But there are technical limits to what’s possible,” Altman said. The OpenAI CEO also indicated he would prefer to see regulations take shape that represent “something between the traditional European approach and the traditional U.S. approach.”
In mid-May, Altman testified before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law and invited government regulation “to mitigate” the risks of AI.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too. But we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind. And this means that U.S. leadership is critical,” Altman told the senators.
“We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.”
OpenAI did not respond to a request for comment from FOX Business on the distinction between the optimal level of regulation in the U.S. versus what’s being proposed in the EU.
FOX Business’ Emma Colton and Reuters contributed to this report.