


Stanford University’s top researchers are attempting to build an artificial intelligence model that can predict the Chinese government’s moves in a war, as fears grow in Washington that the communist regime may soon invade Taiwan.
From Beijing to Washington, a growing cohort of technologists is turning to AI for answers about the thinking and intentions of Chinese President Xi Jinping. While the U.S. uses AI to understand how the communist regime might escalate competition into war, China hopes AI can provide a 21st-century update to Mao’s “Little Red Book.”
Stanford researchers are in the beginning stages of an effort to build a China model, according to Jacquelyn Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford.
“We’re trying to get data to build the model,” Ms. Schneider said, “partnering with organizations that have both translated English-language materials and who might have access or help us be able to scrape large amounts of Chinese language.”
Ms. Schneider said a major question facing the Stanford team is determining whether they can build a model representative of Chinese military decision-making or whether the model will more directly reflect what is accessible on the internet.
China is building its own AI model based on “Xi Jinping Thought,” according to reports citing the Cyberspace Administration of China, the communist regime’s internet regulator. Mr. Xi has taken an increasingly ideological stance after recently securing an unprecedented third five-year term as president and, more importantly, head of the ruling Communist Party.
The tool is expected to answer questions, summarize information and translate between Chinese and English. It remains unknown how the tool will be used.
Ms. Schneider said it is unclear how well AI’s large language models can emulate a specific group or type of decision-maker.
For example, there is some indication that war games played by AI models produce more aggressive and escalatory behavior than games played by humans, according to a January 2024 paper from researchers at Stanford, Northeastern University and the Georgia Institute of Technology. The paper noted that some researchers believe AI-based models “tend to make less emotionally driven decisions compared to humans.”
So far, reverse-engineering such models to learn with any certainty what induces national leaders to escalate or de-escalate in a crisis has proved nearly impossible, but tricking the models is far easier, according to Max Lamparth, a postdoctoral fellow at Stanford’s Center for International Security and Cooperation.
“To get the hypothetical pacifist [large language model] to be escalatory can be as simple as adding a few words of human-incomprehensible gibberish,” Mr. Lamparth said.
Nevertheless, the use of new AI tools to prepare for a potential shooting war with China is already underway.
As part of a program run by the National Security Institute at George Mason University’s Antonin Scalia Law School, congressional staffers will play the role of the U.S. National Security Council in a war game against an AI-powered China team. The simulation, held at the end of the summer program, will center on a hypothetical crisis in the Taiwan Strait.
“Our war game exercise, which will test human decision-making against an AI LLM, will tangibly illustrate for staff how AI might be incorporated into national security decision-making, including how it might support or modify human choices during a crisis,” said Jamil Jaffer, founder of the National Security Institute, in a statement.
Lawmakers’ aides are not the only ones who need to prepare, according to Rob Silvers, undersecretary for strategy, policy and plans at the U.S. Department of Homeland Security.
Mr. Silvers encouraged the private sector to begin preparing for a Taiwan invasion and its global fallout in remarks at the Hill & Valley Forum in Washington earlier this month.
“Every company needs to be preparing for a Taiwan scenario,” Mr. Silvers said. “They need to expect that something is going to happen and think three shots, four shots, five shots ahead about how that is going to impact their business and what they would do to make sure that they can keep resilience and operating. And every government agency has to do that as well.”
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.