OpenAI leaders propose an international AI regulatory body

AI is advancing quickly enough, and the dangers it may pose are clear enough, that OpenAI’s leadership believes the world needs an international regulatory body akin to the one governing nuclear power – and fast. But not too fast.

In a post on the company’s blog, OpenAI founder Sam Altman, president Greg Brockman and chief scientist Ilya Sutskever explain that the pace of innovation in artificial intelligence is so rapid that we cannot expect existing authorities to adequately rein in the technology.

While there’s a degree of self-congratulation in that claim, it’s clear to any impartial observer that the technology, most visible in OpenAI’s explosively popular ChatGPT conversational agent, poses both a unique threat and an invaluable asset.

The post, typically light on details and commitments, nevertheless admits that AI won’t govern itself:

We need some degree of coordination among leading development efforts to ensure that the development of superintelligence occurs in a manner that allows us both to maintain safety and to help the smooth integration of these systems into society.

We’ll probably eventually need something like an [International Atomic Energy Agency] for superintelligence efforts; any effort above a certain capability threshold (or resources, such as computational power) would need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, and so on.

The IAEA is the UN’s official body for international cooperation on nuclear power, though of course, like other such organizations, it can lack teeth. An AI governing body built on this model may not be able to step in and flip the switch on a bad actor, but it can establish and track international standards and agreements, which is at least a starting point.

OpenAI’s post notes that tracking the computing power and energy consumption devoted to AI research is one of the relatively few objective measures that can and probably should be reported and tracked. While it may be difficult to say whether or not AI should be used for this or that purpose, it can be useful to say that the resources dedicated to it should, like those of other industries, be monitored and audited.

Leading AI researcher and critic Timnit Gebru said something similar in an interview with the Guardian just today: “Unless there is external pressure to do otherwise, companies are not going to self-regulate. We need regulation and we need something better than just profit.”

OpenAI has visibly embraced the latter, to the consternation of many who hoped it would live up to its name, but at least, as an industry leader, it is also calling for real action on the governance side, beyond hearings where senators line up to deliver re-election speeches that end in question marks.

While the proposal boils down to “maybe we’d like to do something,” it is at least a conversation starter in the industry and indicates that the world’s largest AI brand and provider is lending its support to doing that something. Public oversight is badly needed, but, as the post admits, “we don’t yet know how to design such a mechanism.”

And while the company’s leaders say they support tapping the brakes, there are no plans to do so yet, both because they don’t want to let go of the enormous potential to “improve our societies” (not to mention the company’s bottom line) and because there is a risk that bad actors will keep their feet firmly on the gas.