Saving Ourselves From AI Deepfakes

Aubrey Gulick

Nine fingers? Weird shadows? Too many teeth? Artificial intelligence still struggles with some fairly mundane details when generating images, but there’s no doubt it’s rapidly improving. Like it or not, AI will eventually discover that humans have just five digits on each hand, and when it does, telling real photos and deepfakes apart will become crucial.

Google has come up with a solution. On Tuesday, the company’s AI branch, DeepMind, announced that it is testing SynthID, a permanent, invisible, digital “watermark.” SynthID works by embedding a mark in the pixels of an AI-generated image that, while indiscernible to the human eye, is readable by a computer and largely resistant to tampering. (READ MORE: The Astounding Era: The Men Who Made Science Fiction)
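
How might such a watermark work? Google hasn’t published SynthID’s internals (DeepMind says it is built on a pair of deep learning models), but the general idea of an invisible, machine-readable mark hidden in pixels can be sketched with a classic “spread-spectrum” toy in Python. Everything below, from the function names to the key and strength parameters, is invented for illustration and is emphatically not SynthID’s actual method:

    import numpy as np

    def embed_watermark(image, key, strength=2.0):
        # Add a key-seeded pseudo-random +/-1 pattern to the pixels.
        # The perturbation is too faint for the eye to notice but is
        # statistically recoverable by anyone holding the key.
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=image.shape)
        return np.clip(image + strength * pattern, 0, 255).astype(np.uint8)

    def detect_watermark(image, key, threshold=0.5):
        # Correlate the image against the key's pattern; a score well
        # above zero means the watermark is present.
        rng = np.random.default_rng(key)
        pattern = rng.choice([-1.0, 1.0], size=image.shape)
        residual = image.astype(np.float64) - image.mean()
        return (residual * pattern).mean() > threshold

    img = np.full((256, 256), 128, dtype=np.uint8)  # toy flat-gray "image"
    marked = embed_watermark(img, key=42)
    print(detect_watermark(marked, key=42))  # True
    print(detect_watermark(img, key=42))     # False

A production scheme is far more sophisticated, but the shape is the same: the mark lives in imperceptible pixel-level perturbations, and “reading” it is a statistical test rather than anything a human viewer could spot.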

SynthID marks an important step in the right direction. Deepfakes pose a real risk to individuals and to society at large, and watermarks are one way we can try to mitigate those risks.

Deepfakes Are Becoming Increasingly Problematic

AI image generation has exploded in popularity over the last few years. Midjourney, for instance, a popular image-generation tool, already has more than 14.5 million users despite still being in beta. Google has also developed its own image generator, Imagen, as has Meta. Users simply type a prompt describing the image they want, and the program delivers it.

Since launching their programs, Google and Microsoft have both been accused of “supercharging” deepfake porn.

Independent analyst Genevieve Oh reported that not only has “the number of [nonconsensual pornographic deepfake] videos increas[ed] ninefold since 2019,” but “nearly 150,000 videos, which have received 3.8 billion views in total, appeared across 30 sites in May 2023.”

For people already in the business of selling porn, AI deepfakes render the product essentially worthless. It would be great if that eventually shuttered the industry, but that hope is unrealistic. Deepfake porn is a much bigger problem for women who have never created porn: Bloomberg reports that “some of the sites offer libraries of deepfake programming, featuring the faces of celebrities like Emma Watson or Taylor Swift.”

While watermarking likely won’t curb the tide of deepfake porn, it will at least make it identifiable, which is the first step in fixing the problem. On the other hand, watermarking could be incredibly effective in mitigating the influence that deepfake photographs and videos have on elections.

Home Security Heroes, an online security platform, recently conducted a survey that found that “AI deepfake videos swayed 81.5% of voters [in the 2024 election], with 36% entirely changing their votes.” Not only are candidates intentionally using AI (former President Donald Trump and Florida Gov. Ron DeSantis have both used it to create “high-profile videos”), but David Klepper and Ali Swenson, both reporters with the Associated Press, note that generative AI could “rapidly produce targeted campaign emails, texts or videos,” and “it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.” (READ MORE from Aubrey Gulick: Too Little Too Late?: Biden Administration Bans Investment in Chinese Quantum Computing)

At the very least, a digital watermark could make it possible for people on the internet to detect when an image or a video is created by AI. The only problem is that it may not be available fast enough.

The First Viable Watermarking Option

Although it’s still in beta and likely won’t be widely available for some time, SynthID is the first truly viable option for a digital watermark. To be effective, something like SynthID will need to be invisible to the human eye while remaining easily detectable by machines; it will have to be a permanent mark on the image; and it will have to be universal, a quality shared by all AI-generated images.

SynthID, at least, meets most of those requirements. Axios reports that it “is designed to remain detectable even after modifications such as adding filters, changing colors or adjusting brightness. And unlike visible watermarks, SynthID can’t be removed just by cropping.” Its major limitation, however, lies in its inability to be universally applied. (READ MORE: Conversing With Chatbots)
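
That robustness can be glimpsed even in the toy sketch above: because detection correlates against a mean-subtracted residual, a uniform brightness shift leaves the hidden pattern intact. (Cropping, which Axios says SynthID survives, would defeat this naive version; withstanding such edits is precisely what DeepMind’s learned approach is trained for.)

    # Continuing the toy sketch: brighten every pixel by 20 levels.
    brighter = np.clip(marked.astype(np.int16) + 20, 0, 255).astype(np.uint8)
    print(detect_watermark(brighter, key=42))  # still True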

SynthID is currently a feature only of Google’s Imagen, and even if every major tech company were required to adopt something similar, that wouldn’t necessarily prevent “bad actors” from turning to software that doesn’t “document or label images as AI-generated,” as Axios pointed out. In an ideal world, “if enough of the big players do clearly label such content, any images that aren’t authenticated will be treated with more suspicion.”

AI is in the process of creating an entirely new digital world in which myth and reality are difficult, if not impossible, to tell apart. The question is: Will we be able to develop the technology needed to protect ourselves fast enough?