The American Majority

Fox Business
1 Apr 2023

Artificial intelligence experts who were cited in an open letter calling for a pause on AI research have distanced themselves from the letter and slammed it for "fearmongering."

"While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as ‘Stochastic Parrots’), such as ‘provenance and watermarking systems to help distinguish real from synthetic’ media, these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined ‘powerful digital minds’ with ‘human-competitive intelligence,’" Timnit Gebru, Emily M. Bender, Angelina McMillan-Major and Margaret Mitchell wrote in a statement on Friday. 

The four tech experts were included in a citation in a letter published earlier this week calling for a minimum six-month pause on training powerful AI systems. The letter has racked up more than 2,000 signatures as of Saturday, including from Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak. 

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," the letter begins. The open letter was published by the Future of Life Institute, a nonprofit that "works on reducing extreme risks from transformative technologies," according to its website. 

AI expert Timnit Gebru

Google AI Research Scientist Timnit Gebru speaks onstage during Day 3 of TechCrunch Disrupt SF 2018 at Moscone Center on September 7, 2018, in San Francisco, California. (Photo by Kimberly White/Getty Images for TechCrunch) (Kimberly White/Getty Images for TechCrunch / Getty Images)

Gebru, Bender, McMillan-Major and Mitchell’s peer-reviewed research paper, "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" is cited in the first footnote on the letter’s opening line, but the researchers say the letter is spreading "AI hype."

"It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a ‘flourishing’ or ‘potentially catastrophic’ future," the four wrote. "Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media."

Mitchell previously oversaw ethical AI research at Google and currently works as the chief ethics scientist at AI lab Hugging Face. She told Reuters that while the letter calls for a pause specifically on AI tech "more powerful than GPT-4," it is unclear which AI systems would even fall under that threshold. 

ChatGPT homescreen

The "Welcome to ChatGPT" lettering of the U.S. company OpenAI seen on a computer screen. (Photo by Silas Stein/picture alliance via Getty Images / Getty Images)

"By treating a lot of questionable ideas as a given, the letter asserts a set of priorities and a narrative on AI that benefits the supporters of [Future of Life Institute]," she said. "Ignoring active harms right now is a privilege that some of us don’t have."

Another expert cited in the letter, Shiri Dori-Hacohen, a professor at the University of Connecticut, told Reuters that while she agrees with some of the points made in the letter, she disagrees with how her research was used. 

Dori-Hacohen co-authored a research paper last year, titled "Current and Near-Term AI as a Potential Existential Risk Factor," which argued that widespread use of AI already poses risks and could influence decisions on issues such as climate change and nuclear war, according to Reuters. 

"AI does not need to reach human-level intelligence to exacerbate those risks," she said. 

"There are non-existential risks that are really, really important, but don’t receive the same kind of Hollywood-level attention."

OpenAI CEO Sam Altman on stage

Sam Altman, president of Y Combinator, speaks during the New Work Summit in Half Moon Bay, California, U.S., on Monday, Feb. 25, 2019. The event gathers powerful leaders to assess the opportunities and risks that are now emerging as artificial intelligence takes hold. (David Paul Morris/Bloomberg via Getty Images / Getty Images)

The letter argues that AI leaders should "develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."

"In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems," the letter adds. 

Gebru, Bender, McMillan-Major and Mitchell argued that "it is indeed time to act" but that "the focus of our concern should not be imaginary ‘powerful digital minds.’ Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities."

Future of Life Institute president Max Tegmark told Reuters that "if we cite someone, it just means we claim they’re endorsing that sentence."

"It doesn’t mean they’re endorsing the letter, or we endorse everything they think," he said. 

He also pushed back on criticism that Musk, who donated $10 million to the Future of Life Institute in 2015 and serves as an external adviser, is trying to use the letter to slow down his competitors.

Elon Musk

SpaceX owner and Tesla CEO Elon Musk smiles at the E3 gaming convention in Los Angeles, California, U.S., June 13, 2019. (REUTERS/Mike Blake/File Photo / Reuters Photos)

"It’s quite hilarious. I’ve seen people say, ‘Elon Musk is trying to slow down the competition,’" he said. "This is not about one company."

Tegmark said that Musk had no role in drafting the letter. 

Another expert cited in the Future of Life Institute’s letter, Dan Hendrycks of the California-based Center for AI Safety, said he agrees with the contents of the letter, according to Reuters. He argued that it is practical to account for "black swan events" — events that appear unlikely to happen but would have dire consequences if they were to unfold — according to the outlet.
