



In the wake of a devastating attack by the Palestinian terrorist group Hamas in southern Israel that left approximately 1,400 Israelis dead, experts warn of a burgeoning new battleground in the Israeli-Palestinian conflict: the digital realm.
The rapid advancement of easy-to-use generative artificial intelligence (AI) tools threatens to usher in an era of deepfake visuals, adding another layer of complexity to a conflict already riddled with misinformation and manipulation.
“AI-generated images will complicate an Israeli-Palestinian conflict already rife with disinformation,” David May, a research manager at the Foundation for Defense of Democracies, told Fox News Digital.
May added that “Hamas controls the narrative in the Gaza Strip,” noting the group’s history of intimidating journalists and manipulating images to serve its propaganda needs.
The use of AI to generate false images is seen as a substantial evolution in the field of disinformation. “I call it upgraded fake news,” said Dr. Tal Pavel, founder and director of CyBureau, an Israel-based institute for the study of cyber policy.
According to Pavel, deepfake technology “is when we take those images and bring them to life in video clips.” He called the technology “one of the biggest threats to democracy.”
Pavel cautioned that deepfake technology could be used beyond wartime, as its increasing sophistication makes it “harder and harder to prove what is real or not.”
He pointed to its existing usage in criminal activities, such as fraud, and its potential to manipulate electoral politics.
The application of deepfake technology isn’t limited to the Israeli-Palestinian theater. Ivana Stradner, a research fellow at the Foundation for Defense of Democracies, indicated that similar methods have been used in Russia’s ongoing war in Ukraine.
Stradner cited a fabricated video of Ukrainian President Volodymyr Zelenskyy that appeared to show him urging Ukrainian soldiers to surrender. Once discovered, the fake was swiftly taken down.
The introduction of AI-generated deepfakes in the Gaza Strip poses particular challenges, as there are currently few credible, independent journalists operating in the area.
Hamas blocked the main pedestrian crossing into the Palestinian enclave during its October 7 attack, preventing foreign press from entering and making it increasingly difficult to distinguish fact from fiction.
However, Dr. Yedid Hoshen, a researcher at the Hebrew University of Jerusalem, noted that the technology is not yet foolproof. “Creating images in itself is not hard, but when we talk about deepfakes, we are talking about talking faces or face swapping,” he said.
According to Hoshen, while many techniques are available for generating images or video, producing deepfake footage with convincingly synced audio still poses challenges. He pointed out that small details such as hands, fingers, or hair often remain unrealistic, offering subtle but crucial hints that an image may be fake.
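Researchers and fact-checkers increasingly pair that kind of visual inspection with automated screening. As a rough illustration only, and not a tool cited by Hoshen, the short Python sketch below applies error level analysis, a longstanding forensic heuristic that re-saves a JPEG and highlights regions whose compression behavior differs from the rest of the frame; the file names are placeholders, and the method flags conventional edits more readily than fully AI-generated images.

    # Illustrative sketch: error level analysis with the Pillow library.
    # Bright regions in the output recompress differently from their
    # surroundings, which can flag edited areas for closer inspection.
    from PIL import Image, ImageChops

    def error_level_analysis(path, quality=90):
        original = Image.open(path).convert("RGB")
        original.save("resaved.jpg", "JPEG", quality=quality)  # re-save at a known quality
        resaved = Image.open("resaved.jpg")
        diff = ImageChops.difference(original, resaved)        # per-pixel difference
        max_channel = max(high for _, high in diff.getextrema())
        scale = 255.0 / max(max_channel, 1)
        return diff.point(lambda value: int(value * scale))    # amplify subtle discrepancies

    # "suspect_photo.jpg" is a placeholder file name.
    error_level_analysis("suspect_photo.jpg").save("suspect_photo_ela.png")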
Hoshen also noted the technology’s current language limitations: for a conflict like the Israeli-Palestinian one, deepfake videos would have to be created in Hebrew or Arabic, while most of the available tools still work only in English.
As the lines between reality and falsehood blur, the emerging field of AI-generated visuals becomes yet another terrain that both warring sides may seek to exploit.
While the technology for detecting deepfakes is evolving, the complexities that synthetic media introduces into war cannot be overstated. The same holds beyond the battlefield, in the criminal realm of fraud, blackmail, child pornography and other dark web activity.




