Google’s Sundar Pichai has agreed to work with lawmakers in Europe on what’s been dubbed an “AI pact” — seemingly a stop-gap package of voluntary rules or standards to apply while formal regulations for AI are still being worked out.
Pichai met with Thierry Breton, the European Union’s internal market commissioner, who issued a statement after today’s confab saying: “There is no time to lose in the AI race to build a safe online environment.”
A briefing published by his office after the meeting also said the EU wants to be “proactive” and work on an AI pact ahead of the incoming EU legislation that will apply to AI.
The memo added that the bloc wants to launch an AI pact “involving all major European and non-European AI actors on a voluntary basis” and ahead of the legal deadline of the aforementioned pan-EU AI law.
But – at the moment – the only tech giant publicly linked to the initiative is Google.
We have contacted Google and the European Commission with questions about the initiative.
In further public remarks, Breton said:
We expect technology in Europe to respect all of our rules on data protection, online safety and artificial intelligence. In Europe, it’s not pick and choose.
I am glad that Sundar Pichai recognizes this and that he is committed to complying with all EU rules.
The GDPR [General Data Protection Regulation] is in place. The DSA [Digital Services Act] and DMA [Digital Markets Act] are being implemented. Negotiations on the AI Act are nearing their final stage, and I call on the European Parliament and the Council to adopt the framework before the end of the year.
Sundar and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work with all AI developers to put an AI pact in place on a voluntary basis ahead of the legal deadline.
I also welcome Sundar’s commitment to step up the fight against disinformation ahead of elections in Europe.
While there are no details on what might be in the “AI Pact”, as with any self-regulatory scheme, it would have no legal bite, so there would be no way to force developers to sign up – nor any consequences for failure to comply with (voluntary) obligations.
Still, it may be a step toward the kind of international regulatory cooperation that a number of technologists have been calling for in recent weeks and months.
The EU has precedent for getting tech giants to put their names to a bit of self-regulation: over several years it has drafted a number of voluntary agreements (aka Codes) that various tech giants (including Google) have signed, committing them to improve their responses to reports of online hate speech and the spread of harmful disinformation. And while those two Codes haven’t solved the still-thorny problems of online content moderation, they have given the EU a yardstick for measuring whether platforms are living up to their own commitments – and, on occasion, a means of administering a light public beating when they are not.
More broadly, the EU remains ahead of the global pack in digital regulation and has already set regulations for artificial intelligence — proposing a risk-based framework for AI apps two years ago. But even the bloc’s best efforts are still lagging behind developments in the field, which felt particularly sizzling this year after OpenAI’s generative AI chatbot, ChatGPT, was made widely available to web users and gained viral attention.
Currently, the draft EU AI law, proposed in April 2021, remains a live piece of legislation moving between the European Parliament and the Council – with the former recently agreeing on a series of amendments it intends to put forward, including several aimed at generative AI.
A compromise on a final text will need to be reached between the EU’s co-legislators, so it remains to be seen what the final form of the bloc’s AI rulebook will look like.
And even if the law passes before the end of the year, which is the most optimistic timeline, there will certainly be an implementation period – most likely at least a year before it applies to AI developers. Hence EU commissioners’ eagerness to push for stop-gap measures in the meantime.
Earlier this week, EVP Margrethe Vestager, who heads the bloc’s digital strategy, suggested that the EU and US work together to set minimum standards before the legislation goes into effect (via Reuters).
In further comments today, after meeting with Google, she tweeted: “We need the AI law ASAP, but AI technology is evolving at extreme speed. So we now need voluntary agreement on universal rules for AI.”
A spokesperson for the Commission expanded on Vestager’s comment, saying: “During the G7 digital ministerial meeting in Takasaki, Japan on April 29-30, EVP Vestager proposed internationally agreed AI guardrails that companies can voluntarily comply with until the AI law comes into effect in the EU. This proposal was picked up by G7 leaders, who agreed in their communiqué last Saturday to launch the ‘Hiroshima AI Process’, with the aim of designing such guardrails, particularly for generative AI.”
Despite these sudden expressions of urgency, it’s worth noting that the EU’s existing data protection rulebook, the GDPR, can apply – and indeed has already been applied – to certain AI apps, including ChatGPT, Replika and Clearview AI, to name three. In the case of ChatGPT, a regulatory intervention in Italy led to the service being suspended in late March, after which OpenAI added new disclosures and controls for users in an effort to comply with privacy rules.
Add to that, as Breton points out, the incoming DSA and DMA could create yet more hard requirements that makers of AI apps will have to abide by in the months and years to come, depending on the nature of their service, since those rules apply to digital services and platforms and, in the case of the DMA, to the most market-defining tech giants.
Nevertheless, the EU remains convinced of the need for dedicated, risk-based rules for AI. And, it seems, it is keen to double down on the hoped-for ‘Brussels effect’ its digital legislation can generate by announcing a stop-gap AI pact now.
In recent weeks and months, US lawmakers have also been turning their attention to the fraught question of how best to regulate AI.
Google may be hoping to get ahead of the game by rushing to work with the EU on voluntary standards. Let the AI arms race begin!
This report has been updated with additional comments from Vestager.