


The Associated Press recently commented on how President Donald Trump, when confronted about a video, brushed it off as “probably AI.” The footage allegedly showed someone tossing something out of an upper-story window at the White House, and the president seemed simply to assume it was fake. He might not be wrong, though; it is hard to tell these days. Whether the depicted events happened or not, however, blaming artificial intelligence does seem to be a new play in the political playbook. Welcome to the rise of the deepfake defense: When digital evidence is inconvenient, call it a fake and move on.
An August Pew Research Center poll found that about half of US adults say AI in their daily lives makes them “more concerned than excited.” Toby Walsh, chief scientist and professor of AI at the University of New South Wales in Sydney, told the AP in an email that blaming AI creates problems not just in the digital world but in the real world as well: “It leads to a dark future where we no longer hold politicians (or anyone else) accountable. It used to be that if you were caught on tape saying something, you had to own it. This is no longer the case.”
Digital forensics expert Hany Farid, who has warned about AI-generated fakes for years, puts the risk plainly: when anything can be fake, nothing has to be real.
The politics of blaming AI won’t end soon. In the past year alone, AI has been used to mimic a president’s voice in robocalls, smear rivals days before voting, impersonate foreign leaders in volatile contexts, and mass-produce localized campaign messages. Let’s look at some of these examples a little more closely.
In January 2024, thousands of New Hampshire voters received robocalls featuring an AI-generated voice that sounded like President Joe Biden, telling Democrats to skip the primary and “save” their vote for November. A political consultant, Steve Kramer, later acknowledged commissioning the call, saying it was intended as a “wake-up call” about AI’s risks. Whatever the motive, the effect was dangerous: A synthetic voice of a national figure pushed false voting guidance to real voters’ phones on the eve of an election.
Just days before the Labour Party’s 2023 conference in the UK, an AI-generated audio smear of party leader Keir Starmer circulated on social media. Detection firms assessed it as likely manipulated, and senior officials urged people not to spread it. Still, it was shared widely, showing how quickly a convincing fake can hijack a political news cycle, as The Record reported.
In Slovakia’s 2023 parliamentary vote, an audio recording surfaced hours before a legally mandated pre-election quiet period, purporting to catch opposition leader Michal Šimečka discussing vote-rigging with a journalist. Analysts quickly flagged the clip as AI-generated, but the timing and the format mattered: Because it was audio, it wasn’t covered by Meta’s manipulated media policy at the time, which focused on videos. The episode became an early case study in how deepfake releases can exploit procedural gaps to seed doubt at decisive moments, Wired explained.
During India’s 2024 general election, deepfake videos of Bollywood stars Aamir Khan and Ranveer Singh criticizing the prime minister spread widely. Singh posted on X, “Beware of deepfakes, friends.” Police opened an investigation, and platforms took some clips down – but others lingered, underscoring how quickly synthetic endorsements can reach millions.
During South Africa’s 2024 election, AI-generated messages included a deepfake of Joe Biden threatening sanctions if the ruling ANC won and a deepfake of Donald Trump endorsing a new party. Fact-checkers debunked the clips, but researchers noted how easily such content reappeared on different platforms, sometimes without manipulation labels.
In 2023, Turkish presidential candidate Muharrem İnce quit the race after an explicit video circulated. He said it was a deepfake and called the episode “slander,” illustrating how AI has become the default explanation even when forensic analysis is still catching up. Whether the clip was real doesn’t matter; invoking AI allows a politician to recast events and dodge accountability.
Closer to home, Venezuelan Communications Minister Freddy Ñáñez questioned a video showing an American strike on a vessel in the Caribbean that reportedly killed 11 Tren de Aragua gang members heading for the US. “Based on the video provided, it is very likely that it was created using Artificial Intelligence,” Ñáñez said on his Telegram account, describing its “almost cartoonish animation.”
In 2024, Arizona Senate candidate Kari Lake’s campaign sent a cease-and-desist letter after a local outlet produced a clearly labeled “deepfake” PSA using her likeness to warn voters about election disinformation.
Just days before his party chair election, Oklahoma Rep. John Waldron was targeted with reportedly fake audio of him making racist remarks. “Those weren’t my words,” he said, adding that detection software rated the clip “95 to 99 percent synthetic.”
In 2023, an AI image of an explosion near the Pentagon briefly rattled markets before officials labeled it fake.
The “Liar’s Dividend” is a term coined by researchers in 2019. “If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent – empowering authorities along the way,” according to the California Law Review. “A skeptical public will be primed to doubt the authenticity of real audio and video evidence.”
The Brennan Center explains that artificial intelligence “can be used… to falsely claim that legitimate audio content or video footage is artificially generated.”
“I’ve always contended that the larger issue is that when you enter this world where anything can be fake, then nothing has to be real,” said Farid. “You get to deny any reality because all you have to say is, ‘It’s a deepfake.’”