

OpenAI, Google, Microsoft and Meta aren't the only ones racing to play leading roles in artificial intelligence (AI). Political leaders are also vying with each other to create frameworks for – and to promote – software capable of performing human tasks, such as the text generator ChatGPT and the image generator Midjourney. On Tuesday, October 24, the European Union hopes to reach a political agreement on the Artificial Intelligence Act, or at least a compromise on the main points of this draft regulation.
Then, on November 1 and 2, British Prime Minister Rishi Sunak will welcome representatives of foreign governments and tech giants to London for an "international AI safety summit." Later in November, the G7 countries will gather for a meeting on the "Hiroshima Process," an AI discussion that began in Japan in May.
This convergence reflects a sense of political urgency to get a grip on a technology seen as both highly promising and worrying. But within this frenzy – the US, the OECD and China are also active – strategies differ.
In 2021, Brussels launched the world's first major legislative project on AI. The AI Act prohibits certain uses ("social scoring" systems, "subliminal techniques" of manipulation) and, for uses deemed "high-risk" (autonomous driving, screening résumés, granting bank loans), imposes obligations such as minimizing error rates and discriminatory bias and checking the quality of training data.
On the contentious point of "general-purpose" AI models (such as those generating text or images), the compromise taking shape between the European Parliament and the member states consists of imposing obligations only on the largest models (those above a threshold of computing power used for training, or a certain number of users or corporate clients in the EU, according to a document cited by Contexte). Manufacturers of these models would also have to show that they have taken steps to respect the copyright of content used for training.
As for London, which wants to become an AI capital, it has chosen to focus on the risks deemed most existential. According to a draft joint declaration from the summit, reviewed by Euractiv, these are linked to "intentional misuse" – to generate computer attacks or biological weapons – or to "issues of control" over an AI that could escape humans. The statement is reminiscent of alarmist open letters calling for a "pause" in AI development, or deeming the technology as dangerous as "pandemics or nuclear war."
The British approach also matches the rhetoric of the industry giants: The summit focuses on "frontier AI models," the term used by OpenAI, Google, Microsoft and Anthropic when they created an industry group of the most powerful software manufacturers in June. London also aims to create a kind of "IPCC of AI," a panel of experts modeled on the one charged with informing governments about climate change – an idea also championed, in an op-ed in the Financial Times, by several executives working in the field of AI.