Fox Business
30 May 2023


Tech industry leaders, scientists and professors issued a new warning regarding the risks associated with artificial intelligence.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement shared by the Center for AI Safety read. 

The statement said that while experts in the field, policymakers, journalists and the public are increasingly discussing risks from the technology, it "can be difficult to voice concerns about some of advanced AI’s most severe risks."

"The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously," the webpage said. 

Tesla CEO Elon Musk leaves the Phillip Burton Federal Building on Jan. 24, 2023, in San Francisco, California. (Justin Sullivan/Getty Images)

Among the signatories are Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, the so-called "godfather of AI" Geoffrey Hinton and environmentalist Bill McKibben. 

Others among the hundreds who signed the statement are professors, researchers and people in positions of leadership, as well as the singer Grimes.

More than 1,000 researchers and tech leaders, including billionaire Elon Musk, signed a letter earlier this year that called for a six-month-long pause on the training of systems more advanced than OpenAI's GPT-4, saying it poses "profound risks to society and humanity."

Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, D.C., on May 16, 2023. (Eric Lee/Bloomberg via Getty Images)

Notably, Altman has said previously that the letter calling for the moratorium wasn't the "optimal way to address the issue."

In a blog post last week, he and other company leaders said that AI needs an international watchdog to regulate future superintelligence.

The three executives believe it is conceivable that AI systems will exceed expert skill levels in most domains within the next decade.

Geoffrey Hinton, chief scientific adviser at the Vector Institute, speaks during The International Economic Forum of the Americas (IEFA) Toronto Global Forum in Toronto, Ontario, Canada, on Sept. 5, 2019. (Cole Burston/Bloomberg via Getty Images)

Hinton has said artificial general intelligence may be just a few years away, telling NPR that it's not feasible to stop the research.

"The research will happen in China if it doesn't happen here," he said.

The Associated Press contributed to this report.