


Growing fears about the dangers of artificial intelligence have sparked a leadership shakeup at one of America’s leading AI companies.
OpenAI co-founder Ilya Sutskever recently exited the company and is now working on a new firm, “Safe Superintelligence Inc.” The new company’s aim is to safely develop superintelligence, or AI systems thought to be smarter than human beings.
While at OpenAI, known for its ChatGPT technology, Mr. Sutskever warned that AI could potentially go rogue and ultimately spark the end of humanity. Mr. Sutskever also helped drive the public ouster of CEO Sam Altman last year.
Mr. Altman, however, returned to OpenAI’s top spot within weeks and then announced Mr. Sutskever’s removal from the company’s board. Now, Mr. Sutskever has left OpenAI to pursue a business model that Safe Superintelligence says insulates safety and security work from the growing commercial pressures of the lucrative AI sector.
Mr. Sutskever announced his new AI project this week with partners Daniel Gross, formerly of Apple, and Daniel Levy, who also left OpenAI.
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs,” the Safe Superintelligence founders said. “We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace.”
Safe Superintelligence is styling AI safety as a top priority. Its leaders pledged not to be distracted by “management overhead or product cycles.”
The company plans to be based in Palo Alto and Tel Aviv. Rumors circulating in Russian media that Mr. Sutskever was negotiating with a Russian bank to fund his work are not true, according to Lulu Cheng Meservey, a spokeswoman for Safe Superintelligence. She told The Washington Times that Mr. Sutskever has “had zero conversations with any Russian banks.”
Mr. Altman’s solicitation of foreign funding has drawn criticism. He reportedly sought investment from the United Arab Emirates for his grander AI ambitions, which carry a reported price tag of up to $7 trillion.
The seesawing drama surrounding Mr. Altman’s ouster and return last November, along with complaints from former employees about OpenAI’s culture, has also contributed to outsiders viewing the maker of the popular ChatGPT with suspicion.
With Mr. Sutskever’s departure, retired Army Gen. Paul Nakasone joined OpenAI’s board. He is the former director of the National Security Agency and former head of U.S. Cyber Command.
He previously helped establish an AI Security Center at the NSA, which he said would help private industry understand threats to its intellectual property and would work closely with businesses, national labs, academia, other agencies and select foreign partners.
“OpenAI’s dedication to its mission aligns closely with my own values and experience in public service,” Gen. Nakasone said in a statement earlier this month. “I look forward to contributing to OpenAI’s efforts to ensure artificial general intelligence is safe and beneficial to people around the world.”
But Gen. Nakasone’s arrival also attracted criticism.
Former NSA contractor Edward Snowden, who fled the U.S. in 2013 after leaking a trove of confidential government documents about domestic and foreign surveillance, chastised OpenAI for betraying people.
“They’ve gone full mask-off: do not ever trust @OpenAI or its products (ChatGPT etc),” Mr. Snowden said on X. “There is only one reason for appointing an @NSAGov Director to your board. This is a willful, calculated betrayal of the rights of every person on Earth. You have been warned.”
Mr. Snowden, who was granted Russian citizenship in 2022, said the intersection of AI and large quantities of surveillance data will concentrate power in the hands of an unaccountable few.
Nations around the world appear eager to see inside leading AI labs. U.S. government-backed researchers investigating safety at AI labs told The Times earlier this month that they found leading labs vulnerable to foreign espionage, including technology theft by China.
Working in collaboration with the State Department, the researchers examined labs including OpenAI, Google DeepMind and Anthropic and found minimal security and negligent attitudes toward safety.
Google DeepMind previously told The Times it takes security seriously.
OpenAI did not respond to requests for comment.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.