Artificial intelligence-generated deepfake news anchors are being used by Chinese state-aligned actors to promote pro-China propaganda videos on social media, according to a new report published on Feb. 7.
The detailed report (pdf) by U.S.-based research firm Graphika marks the first time the firm has observed “state-aligned influence operation actors using video footage of AI-generated fictitious people in their operations.”
Graphika found that the fake news anchors were created for a likely fictitious news outlet called “Wolf News,” which it says used technology provided by Synthesia, a London-based AI video company.
According to Graphika, the videos were discovered while the company was tracking pro-China disinformation operations known as “Spamouflage.”
“This set of two unique videos shared many of the same characteristics as traditional Spamouflage content: they ranged between one-and-a-half and three minutes in length, used a compilation of stock images and news footage from online sources, and were accompanied by robotic English-language voiceovers promoting the interests of the Chinese Communist Party,” Graphika said.
One such video accused the U.S. government of attempting to tackle gun violence through “hypocritical repetition of empty rhetoric.”
The other stressed the importance of cooperation between the United States and China for the recovery of the global economy.
Graphika said it identified Spamouflage promoting the deepfakes on platforms including Twitter, Facebook, and YouTube, but noted that the videos were low quality and “spammy,” and that none of them had received more than 300 views.
China has not commented on the report.
The website of Synthesia states that it is an “AI video creation platform” used by thousands of companies to “create videos in 120 languages.” The company offers users more than 100 different “AI avatars,” including two named “Anna” and “Jason.”
It also states that “as a company pioneering this new kind of media,” it is aware of the responsibility it has and that AI and similarly powerful technologies “cannot be built with ethics as an afterthought.”
For this reason, the company says it will “not offer our software for public use” and that “all content will go through an explicit internal screening process before being released to our trusted clients.”
It also states that “political, sexual, personal, criminal and discriminatory content is not tolerated or approved.”
Victor Riparbelli, Synthesia’s co-founder and chief executive, told The Japan Times that the customers who used the company’s technology to create the avatars highlighted in the Graphika report had violated its terms of service.
Riparbelli said the accounts of those responsible have since been suspended and that he takes “full responsibility for anything that happens on our platform,” but he declined to provide further details regarding the individual or individuals behind the Wolf News videos.
Riparbelli added that the company has a four-person team dedicated to preventing its deepfake technology from being used to create illicit content but noted that certain materials containing misinformation are hard to detect if they do not include things such as outright hate speech or explicit words and imagery.
“It’s very difficult to ascertain that this is misinformation,” Riparbelli said after being shown one of the Wolf News videos, according to the publication. The CEO also urged policymakers to set clearer rules about how the AI tools could be used.
The Epoch Times has contacted Synthesia for comment.
Graphika’s report comes shortly after Beijing adopted an expansive new law regulating deepfakes, the “Provisions on the Administration of Deep Synthesis of Internet Information Services,” which went into effect in January.
Under the regulations, deep synthesis providers must, among other things, establish and maintain systems for user registration and identity verification, and conduct reviews and ethical evaluations of their deep synthesis services and the algorithms they use.
The law also requires deep synthesis providers to implement procedures to report and take down the publication of “false, illegal or harmful information” by deep synthesis users.
Despite the new law, Washington has repeatedly raised concerns over China’s advancements in AI, fearing they could give the Chinese regime a stronger competitive advantage over the United States, particularly by helping its military become one of the most capable in the world.