


If fake images on social media are such a threat, why are they spotted so easily?
We’re going to hear a lot of ominous warnings about the use of artificial-intelligence-generated images in political campaigns and propaganda. The Associated Press recently warned, “the implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.”
Three weeks ago, the Republican National Committee unveiled an ad that used imagery generated by artificial intelligence to present the RNC’s vision of what the world may look like if Biden wins again in 2024 — a Chinese invasion of Taiwan, regional banks shutting their doors, an overrun border, and troops attempting to restore order to a San Francisco overwhelmed by crime and fentanyl. The words “built entirely with AI imagery” appear in tiny print in the upper-left-hand corner, and the ad likely received more media coverage for its use of AI imagery than for its message.
But three other recent examples suggest that there are enough sharp eyes out there to recognize when an image is fake, raising questions about whether AI imagery will really be all that effective at fooling anyone other than those who want to be fooled, or whether its use is more likely to backfire on those who deploy it.
At the beginning of May, Brigitte Gabriel, the kind of Trump supporter so brimming with wide-eyed enthusiasm that even Branch Davidians would urge her to pause and take a deep breath, offered a tweet declaring “President Trump is defending America in ways Biden will never comprehend,” accompanied by an AI-generated illustration of a modern-day Trump in a military uniform in a jungle.
Perhaps it is worth reminding Americans that Trump never served in the military, avoided service in Vietnam by obtaining four student deferments and one medical disqualification — a bone spur in one or both of his heels — and in a 1997 interview claimed he was a “brave soldier” for avoiding sexually transmitted diseases during his single years, declaring, “It’s amazing, I can’t even believe it. I’ve been so lucky in terms of that whole world, it is a dangerous world out there. It’s like Vietnam, sort of. It is my personal Vietnam. I feel like a great and very brave soldier.”
Trump wasn’t a war hero, but his most ardent fans wish he were, so they imagine an alternate world where he was, and still is, Rambo. The kinds of people who choose to believe that 76-year-old Trump is still personally leading covert operations in hostile territory — and pausing to take a picture in the middle of it — are the kinds of people who respond to those emails from African princes asking for bank account numbers, who believe Elvis is still alive, and who believe Milli Vanilli sang their own songs. Images like this are less an attempt to fool the general public than a near-religious expression of faith in Trump’s ability to be all the things his supporters want him to be.
Right around the same time, Amnesty International chose to use an AI image generator to depict protests and police brutality in Colombia. The organization told Gizmodo that it used the AI generator to depict human rights abuses so as to preserve the anonymity of vulnerable protesters.
The same day, “Chris O.,” a self-described “independent military historian and researcher” offered an account on Twitter of “a Russian from western Siberia who volunteered to fight in Bakhmut and elsewhere before being badly wounded says his unit suffered huge casualties, his documents were lost, his salary was stolen and he was dismissed from the army without explanation while in hospital.”
The tweet now has a note warning, “AI scanning tools detected there is a 91-99 percent chance that this image was created by Midjourney AI.”
One of the ironies is that in the latter two examples, the choice to use an AI-generated image was an attempt to use a fake picture to call attention to a true, or at least plausible, story. (Trump fans would likely argue that through tougher border security and immigration enforcement policies, the airstrike that killed Iran’s Qasem Soleimani, and support for police forces, their adored former president did indeed defend the country in ways that Biden cannot comprehend, and that the fake picture is intended as a metaphor.)
In the case of Amnesty International, numerous other local and international media outlets covered conflicts between protesters and police that turned violent, and used real photos of the protesters. The imperative to “preserve the anonymity of vulnerable protesters” seems moot if the protesters are A) already suffering from police violence and B) seen in photos distributed around the world by news agencies. If Amnesty International really wanted to dispel the denials of the Colombian police, it set its cause back by using AI images.
Are there Russian soldiers who are disillusioned by their experiences fighting the war in Ukraine? Certainly. Have Russians suffered huge casualties? By just about every measure, absolutely. Are Russian soldiers treated badly by their government? Indisputably. That despondent Russian from western Siberia may or may not exist, but it’s revealing that Chris O. didn’t think people would believe or care about the account without an image of a wounded, despairing Russian soldier. Of course, by using an AI image, the account is much easier to dismiss as unreliable or propaganda.
You can’t spread the truth by lying.
AI will be a tool of the lazy, most useful at shoring up support among those who already agree. Brigitte Gabriel’s fantasy of Trump the jungle warrior is not all that different from the NFTs of himself that the former president sold late last year — Trump as a sports star, Trump as a superhero, Trump as a rock star playing a guitar. He isn’t any of these things, but some of his fans are happier living in an imaginary world where he is.
As for Chris O. and the imaginary Russian soldier, people’s attitudes about Russia and Ukraine are pretty much set. One made-up or exaggerated account of an injured Russian soldier being treated like dirt by his government isn’t likely to sway the public’s views of the conflict. And Amnesty International likely harmed its own cause by using AI images, although it’s probably worth noting that its use of AI images of the Colombian protesters received more news coverage than the Colombian protesters themselves.
We live in an era in which we are inundated with images, mostly through the Internet, all day long. It is exceptionally rare for any image to break through and become memorable or opinion-shaping the way “Tank Man” did in 1989. It’s exceptionally unlikely that any one image will shape the 2024 elections, and even less likely that an AI-generated image will do it.