OpenAI’s founders said an international authority is needed to regulate artificial intelligence, the company’s latest call for a rulebook to govern the rapidly growing technology.
CEO Sam Altman has been outspoken about the need to regulate AI, urging lawmakers to do so at a Senate hearing just last week. In a blog post published Monday, co-authored with OpenAI’s president and co-founder Greg Brockman and chief scientist Ilya Sutskever, Altman wrote that AI could become smarter than human experts within the next 10 years.
World leaders agree: At the Group of Seven summit in Hiroshima this past weekend, G-7 leaders said that AI requires regulation in order to be human-centric and trustworthy, Bloomberg reported, and agreed to hold cabinet-level discussions on the issue.
Here’s a look at what Altman and other tech industry insiders are proposing.
The View From OpenAI
In their blog post, Altman, Brockman, and Sutskever argue that AI is on track to become the most powerful technology humans have ever created. The OpenAI founders are calling for cooperation among the developers of artificial intelligence, writing that “individual companies should be held to an extremely high standard of acting responsibly.”
They propose an oversight body similar to the International Atomic Energy Agency, which would oversee the technology and have the authority to spot-check or audit companies developing AI.
“It would be important that such an agency focus on reducing existential risk and not issues that should be left to individual countries, such as defining what an AI should be allowed to say,” they wrote.
OpenAI also said that people from around the world must democratically decide on how AI should operate: “We don’t yet know how to design such a mechanism, but we plan to experiment with its development.”
The View From Google
Google CEO Sundar Pichai wrote in the Financial Times Tuesday that more important than the race to develop AI technology is “the race to build AI responsibly.”
Pichai said that with AI “at an inflection point,” it’s important to not only regulate the industry, but ensure it’s regulated properly.
To do that, he argued, governments should develop policy frameworks that both anticipate possible harms from the technology and take a forward-thinking approach to its benefits.
He said that continued AI research would be necessary as the technology evolves, calling for a partnership between governments in the U.S. and Europe.
The View From Eric Schmidt
Former Google CEO Eric Schmidt has taken a different tone from Pichai and Altman.
Speaking to NBC’s Meet the Press program earlier this month, Schmidt advocated for the industry to self-regulate, arguing that no one from outside the sector was well-versed enough in how the technology operates to manage it.
“My concern with any kind of premature regulation, especially from the government, is it’s always written in a restrictive way,” he said. Rather, Schmidt wants to see an “agreement among the key players that we will not have a race to the bottom.”
The issue, he said, is getting the industry to agree on which guardrails to install in order to avoid the “worst behaviors” of AI — and global agreement on what those behaviors are.