A congressionally chartered panel is urging the U.S. government to establish a modern-day “Manhattan Project” to develop advanced artificial intelligence capabilities that surpass anything China has created.
The U.S.-China Economic and Security Review Commission’s annual report to Congress listed pursuing artificial general intelligence as its top recommendation, citing the secret World War II-era crash program to develop the world’s first nuclear bomb as the model.
“The Commission recommends [that] Congress establish and fund a Manhattan Project-like program dedicated to racing to and acquiring an Artificial General Intelligence (AGI) capability,” the report said. “AGI is generally defined as systems that are as good as or better than human capabilities across all cognitive domains and would usurp the sharpest human minds at every task.”
The report, published this month, comes as the U.S. government and private sector strain to develop and perfect new AI applications while China’s Communist regime races to do the same. The panel recommended that Congress grant “broad multiyear contracting authority to the executive branch” so that AI, cloud computing and data center companies can compete for AGI dominance, which the secretary of defense would be directed to make a national priority.
The congressional panel is not alone in its thinking. Oak Ridge National Laboratory grew from the Manhattan Project’s effort to make the first atomic bombs. The cutting-edge lab is now prioritizing research into artificial intelligence.
The government lab in east Tennessee created a new AI security research center last year to study the technology. Edmon Begoli, the center’s founding director, told The Washington Times earlier this year he worries about the implications of a ruthlessly efficient AI system.
He is investigating existential risks to humanity from unchecked AI applications. He says he is more worried about an AI system connected to everything than about an AI that consciously seeks to do harm.
“It’s not like some big mind trying to kill humans. It’s just a thing that is so good at doing what it does, it can hurt us because it’s misaligned,” he told The Times in April.
Private efforts
Leading technologists in the private sector are also at work developing advanced AI.
OpenAI co-founder Ilya Sutskever has warned of AI systems going rogue and fueling human extinction, and he used those concerns to help push CEO Sam Altman out last year. Mr. Altman returned to helm the company, and Mr. Sutskever departed to build a new AI company called Safe Superintelligence Inc.
Under Mr. Altman, OpenAI unveiled new models in September that the company contends are capable of “reasoning” at the level of Ph.D. students, particularly on tasks involving biology, chemistry and physics.
Some skeptics dismiss AI makers’ concerns and claims as marketing hype. For example, Hugging Face CEO Clement Delangue said earlier this year that people who give the false impression that AI systems are human are peddling “cheap snake oil.”
Others, however, think fears about AI are not being taken seriously enough.
The late Henry Kissinger, the diplomat and national security official, issued a dire warning about the consequences of AI advances in his final book, written with two top technologists, Eric Schmidt and Craig Mundie.
The book, released this month, advises readers to think about the day “when we will no longer be the only or even the principal actors on our planet.”
The authors of “Genesis” wrote that AI tools have already outperformed humans in some respects and raised the concern that a society one day could choose to create a hereditary genetic line of people who are inherently more capable of working with AI tools. They oppose such a future and fear it could cause the human race to split into different lines, with some having more authority over others.
The authors note that some biological engineering efforts to integrate man and machine are already underway, including research into brain-computer interfaces.
Such brain-computer interfaces, also called brain-machine interfaces, link the brain’s electrical signals to a device that deciphers them to accomplish a task.
Concerns are not limited to civilian applications.
Lt. Cmdr. Mark Wess wrote last year in the U.S. Naval Institute’s Proceedings publication that no emerging technology was “potentially more important to the military.” He wrote about a world where his fellow naval officers could use the brain tech to do things such as control a battleship’s navigation, weapons, and engineering systems simply by thinking.
As nations compete for technology that sounds like science fiction, the U.S.-China Economic and Security Review Commission said the American government cannot afford to wait to act, with so many rivals ready to fill the void.
Commissioner Michael Kuiken, a former Senate aide, said at a hearing on the commission’s report that “being the first mover on artificial general intelligence is critical.”
“If the Chinese government were to get there first, I think the United States will find itself at a perpetual strategic disadvantage and that makes it an imperative to [race] to the front of the line on this one,” he said.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.