Drones, Deepfakes, and Debaters: How AI is Changing War

Brian J. Dellinger

In June 2025, Russian military airfields were suddenly beset by swarms of explosive Ukrainian drones. The robots had been hidden inside cargo trucks as part of a years-long plan called “Operation Spiderweb,” and were delivered to the scene by unwitting Russian drivers. Once unleashed, they identified Russian bombers and detonated, damaging or destroying dozens of long-range attack aircraft — one of the greatest Ukrainian victories of the war.

Spiderweb’s success relied on a mixture of factors, including cutting-edge tactics and a very human willingness to take a paycheck without asking questions. The victory also underscores the growing importance of artificial intelligence on the battlefield. The drones flew to their targets using off-the-shelf navigation software and may have used machine learning to home in on specific vulnerabilities in the bombers. Indeed, AI-assisted drone warfare has proven key to Ukraine’s defense throughout the war, allowing the embattled nation to counter far more expensive weaponry. As one manufacturer quipped, “We didn’t know the Terminator was Ukrainian.”

Who did? After years of debates around the use of autonomous weapon systems, the Russian invasion seems to have revealed the inevitable answer: in a battle for survival, you use whatever works.

That thought is not without its concerns. Runaway AI, equipped with military weapons and released in a moment of national stress, is a staple of speculative fiction — and not to belabor the obvious, but once the Terminator is loose, it doesn’t much matter who built it.

For now — boasts aside — very few weapons in the war are truly autonomous. Drones are effective tools precisely because of their low cost; to equip one with an onboard AI would mean strapping expensive computers to a flying bomb, largely defeating the point. Instead, most combat drones still have a human in the loop somewhere. According to Ukrainian sources, experiments in fully AI-driven drones have so far been failures: too expensive, too easy to jam, too prone to an ill-timed “hallucination” sending them at the wrong target.

It’s ironic that these advances depend relatively little on “new-style” AI — the generative AI (genAI) systems behind programs like ChatGPT or Claude. Ukraine’s bomber-identifying software and Russia’s efforts to build cooperative drone “swarms” rely on basically familiar tools. (The image-recognition software is similar to what an iPhone uses to recognize faces.)

Even so, genAI has a growing role in war: as a propaganda tool par excellence. Synthetic videos have been used by both sides, including a deepfake of President Zelensky urging his soldiers to surrender. Such efforts will unquestionably spread beyond active warzones. Russia has previously attempted to sway U.S. elections, though the results have sometimes been more farcical than persuasive — including an image of Satan arm-wrestling Christ, captioned “IF I WIN CLINTON WINS.” With genAI, they have a far more reliable tool.

Indeed, one recent study tested whether ChatGPT could persuade people of a position more effectively than a human debater could. Given even minimal information about its target — say, what could be pulled from a social media page — the AI outperformed the human almost two-to-one.

Or consider Anthropic’s recent tests on their AI system, Claude. Researchers provided Claude with faked employee e-mails showing evidence of an affair; they then told it that the philandering employee was planning to delete the AI. When prompted, Claude attempted to blackmail the employee into backing down. It’s easy to imagine a more deliberate application: picture an AI that searches the public e-mails of government agents or congressmen, looking for a weak point it can exploit.

To be sure, propaganda and blackmail — and, for that matter, delivering explosives — don’t require a computer’s touch. But the gifts and curses of AI are the same: the ability to operate at a scale and speed previously impossible. Destroying a distant target once required a military invasion, and later a million-dollar cruise missile; now it needs only a thousand-dollar drone. The danger is not that propaganda and extortion campaigns were impossible before, but that they may become trivial.

Perhaps the greater risk isn’t rogue AI, but a computer that does exactly what it’s told. The Russians have also deployed explosive drones in their invasion of Ukraine. But many of these drones carry shrapnel bombs — less effective against a hardened target, but hideously effective against civilians. Every generative AI holds a mirror up to its makers: it does the things it’s been taught to do. If we build the Terminator, it’s because we wanted one.

And we may have cause to regret that choice. Operation Spiderweb should be celebrated, but it highlights a grim truth: what worked against Russian bombers also works on civilian airliners. The technology behind drone attacks, or genAI propaganda, works equally well regardless of the guilt of its targets. Sooner or later, these technologies will be deployed against American civilians. And when those attacks come, they won’t require a runaway superintelligence — just plain old human cruelty.

READ MORE from Brian J. Dellinger:

The Thinking Machines That Weren’t

The Promise and Peril of DeepSeek