THE AMERICA ONE NEWS
Sep 17, 2025
Lizzie Dearden


Has Britain Gone Too Far With Its Digital Controls?

As a couple with a stroller walked by a police van adorned with cameras on one of London’s busiest shopping streets this month, officers stopped the man for questioning. After several minutes, they put him in handcuffs and took him away.

Such scenes have become increasingly common as British authorities have ramped up the use of live facial recognition. Since January 2024, more than 1,000 people have been charged or cited in London with the help of the technology, which scans people’s faces and compares their image in real time to a database of about 16,000 wanted individuals, according to the police.

British authorities have also recently expanded oversight of online speech, tried to weaken encryption and experimented with artificial intelligence to review asylum claims. The actions, which have accelerated under Prime Minister Keir Starmer with the goal of addressing societal problems, add up to one of the most sweeping embraces of digital surveillance and internet regulation by a Western democracy.

That has put Britain at the forefront of a debate over the choices that democracies will have to make about security, privacy, civil liberties and governing in the digital age. Critics contend that the nation has gone too far, intruding on the daily lives of citizens with technology and regulation. But others argue the measures are a pragmatic adaptation to technological change to strengthen safety and national security.

“There’s a big philosophical debate going on here,” said Ryan Wain, the executive director of the Tony Blair Institute for Global Change, a London group started by the former prime minister that supports the government’s policies. “There’s a big question about what is freedom and what is safety.”

In a statement, Britain’s Department for Science, Innovation and Technology, which oversees digital policy, said the public expected the government to use modern technology.

“We make no apologies for using the latest tools to help tackle crime, protect children online and secure our borders while safeguarding freedoms and ensuring the internet is safe for everyone,” a spokesman said. “Our focus is on safety and national security, not unnecessary intrusion.”

For years, the British government has sacrificed some privacy and civil liberties for security and public safety. After terrorist attacks and other crimes, London installed more CCTV security cameras than almost any other comparable city. A 2016 law called the Investigatory Powers Act, also known as the “Snoopers’ Charter,” gave intelligence agencies and the police vast powers to intercept communications and review online activity.

[Image: A security camera on a street pole. Credit: Charlotte Hadden for The New York Times]

The latest policies build on that tradition.

This year, the government expanded internet regulation with a new law aimed at preventing children from accessing online pornography and content that might encourage self-harm, suicide or eating disorders. The law, called the Online Safety Act and passed under the previous Conservative government, introduced age verification checks in July for Reddit, Instagram and other services. Civil liberties campaigners have said it weakens privacy, while child safety groups have said the requirements can be easily evaded or ignored.

In July, Nigel Farage, whose populist Reform U.K. party leads in national polls, called for the repeal of the law, arguing it was a form of censorship and “borderline dystopian.” He has also criticized recent arrests, made under older laws against hate speech and incitement, of people over their social media posts.

Melanie Dawes, the chief executive of Ofcom, the agency implementing the new online safety law, said the new policies were essential for protecting children and did not infringe on speech.

“There’s no silver bullets here,” she said in an interview. “But our job is to drive change and we’re beginning to do that.”

[Image: Melanie Dawes, the chief executive of Ofcom. Credit: Charlotte Hadden for The New York Times]

As President Trump visits Britain this week, the tech debate has also taken on trans-Atlantic significance. The Trump administration and Republican lawmakers recently criticized Britain’s online safety law as an attack on both free speech and U.S. tech companies. This month, Mr. Farage testified at a congressional hearing in Washington about perceived threats to free speech in Britain.

The Trump administration also intervened in February after Britain ordered Apple to create an easy way for intelligence agencies and law enforcement officials to retrieve encrypted user data stored on the company’s servers. Last month, Tulsi Gabbard, the U.S. national intelligence director, said Britain had dropped the demand after American officials stepped in. British authorities have declined to comment.

[Image: The Trump administration and Republican lawmakers recently criticized Britain’s online safety law as an attack on both free speech and U.S. tech companies. Credit: Kenny Holston/The New York Times]

Over the past year, Britain has also expanded the use of artificial intelligence and algorithmic tools to handle immigration, including using the technologies to screen asylum applications, as well as exploring the introduction of digital IDs.

A spokesman for the Home Office, which manages immigration, said the moves had helped process a backlog of asylum claims and had enabled “human caseworkers, who will always be in charge of taking decisions, to reduce the time they have to spend on lengthy administrative tasks.”

But the technologies have raised concerns among some government workers, who question how effectively caseworkers can oversee A.I. and lament the lack of laws regulating its usage. If asylum decisions involving A.I. are legally challenged, one official said, Britain’s specialist immigration courts could get clogged with appeals and slow the system down.

Facial recognition has been perhaps the most visible sign of Britain’s expanding tech policies. Jake Hurfurt, the head of research and investigations at the privacy group Big Brother Watch, said the country had deployed the tools far more than other democracies.

“There has to be limits,” he said, noting that the European Union recently adopted a law to limit the use of facial recognition.

Gavin Stephens, the chairman of the National Police Chiefs’ Council, said the faces of innocent people were not stored by authorities. Last month at the Notting Hill Carnival, an annual street festival celebrating Caribbean culture, the police made 61 arrests of individuals identified by live facial recognition, including some wanted for violent offenses and crimes against women.

[Image: A police van using facial-recognition cameras at the Notting Hill Carnival in West London in August. Credit: Carlos Jasso/Agence France-Presse — Getty Images]

“Why wouldn’t you use this sort of technology if there were people who were wanted for serious offenses and were a risk to public safety?” Mr. Stephens said in an interview. “It’s definitely an important thing for the future.”

Mark Rowley, the head of the Metropolitan Police in London, wants to go further. At a conference in Westminster this month, he said facial recognition would be integrated into officers’ phones so they could switch it on to “confirm suspects’ identities on the street more efficiently.” Authorities are also testing affixing permanent facial recognition cameras in certain areas of London.

A spokesman for the Metropolitan Police said the technology was accurate, with only one misidentification in 2024 out of more than 33,000 cases.

Prison authorities are also expanding their use of A.I. In July, the Ministry of Justice, which oversees the prison system, introduced an “A.I. Action Plan” that includes algorithmic tools for predicting things like the risk a prisoner poses to the public if released from jail. The agency is also requiring people on parole to undergo “remote check-in surveillance” on their mobile devices under a new pilot aimed at “preventing crimes before they happen.”

When facial recognition cameras were set up in London’s shopping area along Oxford Street this month, the police said they arrested seven people, including those wanted for robbery and assault. They would not specify why the man with the stroller was detained.

Sindy Coles, who was shopping with a friend and walked by as the man was being questioned, said the facial recognition cameras were “too much.”

“It’s for your safety,” her friend said.

“It’s an invasion of privacy,” Ms. Coles replied.

“There’s no privacy now,” the friend said.