The Economist
10 Dec 2023


Europe | AI in the sky

Europe, a laggard in AI, seizes the lead in its regulation

The world’s first AI regulation is a bit of a mixed bag

BERLIN

TWO THINGS European lawmakers should get credit for are stamina and an extraordinary tolerance for bad food. Representatives of the EU Parliament, member governments and the European Commission, the bloc’s executive body, spent nearly 40 hours in a dark meeting room in Brussels until the wee hours of December 9th hashing out a deal on the AI Act, Europe’s ground-breaking law on regulating artificial intelligence. Observers shared pictures online of half-eaten sandwiches and other fast food piling up in the venue’s rubbish bins to gauge the progress of the talks.


This ultramarathon of negotiation was the endpoint of one of the most diligent lawmaking processes ever. It started in early 2018 with lengthy public consultations and a weighty 52-person “High-Level Expert Group”, which in 2020 led to a white paper on which all could comment online (1,250 groups and individuals did so). The legislation has yet to be released because kinks still need to be worked out, but the draft version was a document of nearly 100 pages and almost as many articles.

Was it all worth it? The thorough process certainly has led to a logically coherent legal approach, not unlike that of much product-safety legislation. In order to give the technology space to evolve, the AI Act’s first draft, which the commission presented in April 2021, mainly tried to regulate various applications of AI tools, rather than how they were built. The riskier the purpose of an AI tool, the stricter the rules with which it needed to comply. An AI-powered writing assistant needs no regulation, for instance, whereas a service that helps radiologists does. Facial recognition in public spaces might need to be banned outright.

But the idea of focusing on how AI tools are applied was predicated on the assumption that algorithms are mostly trained for specific purposes. Then along came the “large language models” that power such AI services as ChatGPT and can be used for any number of purposes, from analysing text to writing code.

Since these LLMs can be a source of harm themselves, for example by spreading bias and disinformation, the European Parliament wanted to regulate them as well, for instance by forcing their makers to reveal what data they were trained on and how they assessed the model’s risks. By contrast, some governments, including those of France and Germany, worried that such requirements would make it hard for small European model-makers to compete with big American ones. The result, after the all-nighter, is a messy compromise that limits stricter rules to the most powerful LLMs, while creating regulatory exceptions for smaller models (nicknamed “sandboxes”) and exempting the open-source kind, which allow users to adapt them to their needs.

A second big sticking point was to what extent law-enforcement agencies should be allowed to use facial recognition, which at its core is an AI-powered service. The European Parliament pushed for an outright ban, in order to protect privacy rights. Governments, meanwhile, insisted that they needed the technology for public security, notably to protect big events such as the Olympic Games next year in France. Again, the compromise is a series of exceptions. Police will need a court warrant to use facial recognition, and the “real-time” sort is limited to predefined places, times and crimes (such as kidnapping and sexual exploitation).

“The EU becomes the very first continent to set clear rules for the use of AI,” tweeted Thierry Breton, the EU’s commissioner for the internal market. Mr Breton is never far from the social-media limelight: during the negotiation marathon, he repeatedly posted shots of himself in the middle of a huddle. Yet whether the AI Act will be as successful as the General Data Protection Regulation (GDPR), the EU’s landmark privacy law, is another question. Important details still need to be worked out. And the European Parliament still needs to approve the final version.

Most important, it is not clear how well the AI Act will be enforced—an ongoing problem with recent digital laws passed by the EU, given that it is a club of independent countries. In the case of the GDPR, national data-protection agencies are mainly in charge, which has led to differing interpretations of the rules and less than optimal enforcement. In the case of the Digital Services Act and the Digital Markets Act, two recent laws to regulate online platforms, enforcement is concentrated in Brussels at the commission. The AI Act is more of a mix, but experts worry that some national bodies will lack the expertise to prosecute violations, which can lead to fines of up to €35m ($38m) or 7% of a company’s global revenue.

The GDPR triggered what is known as the “Brussels Effect”: big tech firms around the globe complied, and many non-European governments borrowed from it for their own legislation. The AI Act may not do the same. Complex compromises and haphazard enforcement are not the only reasons. For one thing, the incentives in AI are different: AI platforms may find it easier to simply use a different algorithm inside the EU to comply with its regulations. (By contrast, global social-media networks find it difficult to maintain different privacy policies in different countries.) And by the time the AI Act is fully in force and has shown its worth, many other countries, including Brazil and Canada, will have written their own AI acts.

The protracted discussions over the AI Act have certainly helped people, both in Europe and elsewhere, to understand better the risks of the technology and what to do about them. But instead of trying to be first, Europe might have done better trying to be best—and come up with an AI Act that has more rigour and less piling of exceptions on top of exceptions.

