Don’t Trust Everything You See on X

Aubrey Gulick

It’s a horrifying photo — the kind that slams your gut every time you look at it.

It started circulating after Israel and Hamas began exchanging strikes following Hamas’ brutal Oct. 7 terrorist attack. The photo seems to depict an infant, partially buried in rubble in Gaza, screaming in pain. (READ MORE: Obama Joins the Chorus of Voices Calling for Israeli Restraint)

Except that the infant doesn’t exist and has never existed.

The picture is a creation of artificial intelligence (a community note has since been attached to the image), a fact so blatant that even a cursory glance reveals obvious problems. The child’s left hand, for instance, has a curious number and arrangement of fingers (AI image generators still struggle to render hands with just five digits of varying lengths). The facial features also look abnormal on closer inspection; the creases on the forehead and chin are too deep to be real.

But people scrolling the internet over a lunch break rarely give the photos on their social media feeds more than a cursory glance. The image inevitably became part of the flood of propaganda surrounding the war between Israel and Hamas. It has appeared across the world: reposted on X; featured on protest posters in Cairo, Egypt; and printed on the front page of progressive newspapers. (The French publication Libération came under fire after it published the photo on its front page.) As it turns out, not only is the photo a fake, but it isn’t even new. It appeared on X in February, when the same AI baby was presented as a victim of violence in Syria.

Whether in the war between Israel and Hamas or the one between Russia and Ukraine, AI and social media have revolutionized the propaganda war. Aspiring propagandists no longer need millions of dollars for flyers and radio ads; as long as they have even a modest social media following, an internet connection, and a rudimentary understanding of artificial intelligence (or a vast collection of war photos from past conflicts), they’re all set.

Context Matters

Photos don’t need to be created by AI to be misleading. Plenty of older photos are circulating on social media with captions tying them to the current conflict in Israel, even though they long predate it.

Take, for instance, a photo that depicts children in body bags somewhere in the Middle East. The graphic image went somewhat viral after Rep. Ilhan Omar (D-Minn.) reposted it on X; another user had added a caption that read, “CHILD GENOCIDE IN PALESTINE.” (READ MORE: The Gaza Ground War: What to Expect)

But the photo wasn’t from the current Israel-Hamas conflict; it’s at least 10 years old and actually depicts victims of a 2013 Syrian chemical weapons attack, according to the New York Post. While that attack was tragic, the photo is unrelated to the current Israeli response to Hamas’ terrorism. Unfortunately, the vast majority of internet scrollers who may have seen Omar’s repost are unlikely to check back to see whether the image was even relevant. The damage has already been done.

Omar’s repost and the many like it are real-time examples of “misinformation,” a term that, although useful when appropriately employed, tends to be overused and leveled as an accusation against the Right far too often. But deep fakes, artificial photos, and old images aren’t the only kinds of misinformation out there.

When Fact-Checking Fails

The Times of Israel reported that the news staff at CBS spent hours combing through 1,000 videos submitted by sources claiming to be “on the ground in Israel or Gaza” only to discover that some 900 of them were “deepfakes.” That means just 10 percent of the videos were usable.

The commonly advocated fix for this kind of “misinformation” is fact-checking — and news organizations have been doing plenty of that (although how successful they have been is debatable). The problem is that fact-checking is only as good as the fact-checker. (RELATED: The New York Times’ Reputation Was Never Earned)

On Oct. 12, Israeli Prime Minister Benjamin Netanyahu showed U.S. Secretary of State Antony Blinken photos of “a baby, an infant, riddled with bullets, soldiers beheaded, young people burnt alive in their cars” while the two were speaking to reporters. The Israeli government then released some of the photos to the public on X, one of which was reposted by Daily Wire co-founder Ben Shapiro.

Shapiro was promptly “fact-checked” by an X user using an AI program called AI or Not, which purports to detect whether a photo is AI-generated. The tool is far from reliable and, likely because a name tag in the photo had been blurred out, it concluded that the photo was AI-generated. But when experts, including Dr. Hany Farid of the University of California, Berkeley, inspected the photo, they concluded it was real.

These stories are cautionary tales, and likely just the tip of the iceberg when it comes to the photographic misinformation floating around social media. It’s cliché to note that a picture is worth a thousand words, but it’s worth adding that the picture in question doesn’t have to be real, or accurately represented, to spread a message that is far harder to debunk than it was to plant. In the modern era of social media and AI, such messages make the “fog of war” we all live in that much denser while shaping public opinion and international response. It would serve us well to take viral photos, and the claims made about them, with a hefty grain of salt.