May 31, 2025
Daniel Greenfield


House Dems Demand Censorship of AI Images of Pols

The leftist solution to speech they don’t like is always the same: censorship, censorship, and more censorship. Whether it’s social media or AI, censorship is the answer.

A group of Democratic lawmakers is pushing the Federal Election Commission (FEC) to increase regulation of artificial intelligence (AI) deepfakes following the release of the social platform X’s chatbot Grok.

In a Monday letter to the FEC, Rep. Shontel Brown (D-Ohio) and a handful of other House members asked the regulatory agency to clarify whether AI-generated deepfakes of election candidates are classified as “fraudulent misrepresentation.”

The letter focuses on Grok, even though uncensored AI image generation tools capable of depicting politicians have already been widely available.

(Full disclosure: I’ve made use of them. Here’s an example.)

Anyone who’s seen some of the Trump art has already seen them in action.

Back in 2019, before this was on anyone’s radar, I wrote that it was not fundamentally different from Photoshop.

Rep. Adam Schiff claimed that deepfakes “enable malicious actors to foment chaos, division or crisis — and they have the capacity to disrupt entire campaigns including that for the presidency.”

But do they really? Why hasn’t it happened yet?

For the same reason that a Photoshopped picture of a political candidate with a supermodel has yet to stop an election. Faked pictures can be detected. So can deepfakes. Tampering leaves telltale traces.

And the proliferation of Photoshopping has devalued the significance of the damning photo. The proliferation of deepfakes will, in the same way, prevent people from taking videos too seriously.

And that is exactly what’s going on with the proliferation of AI image generation. People are taking photos and videos (and audio) of political candidates supposedly doing controversial things less seriously.

AI-generated videos and photos are a form of speech, even when they involve political campaigns. They can no more be banned than political cartoons could be (and some 19th-century politicians certainly tried; just ask Thomas Nast). Some people out there might be fooled by a fake endorsement, but I don’t believe it’ll swing anyone’s vote.

For that matter, the Kamala campaign just got done bragging about how it fooled everyone into believing Beyonce would show up at the DNC. Is that kind of deception also a crime?

Photoshops of political candidates were free speech. So is AI-generated imagery. Using terms like “deepfakes” or “misinformation” pathologizes certain kinds of political speech.

And the discourse lags behind the technology. AI image editing tools built into flagship phones like the Pixel 9 are rendering the very concept of the image meaningless. Beyond the features already available on many phones that can remove people from photos, these tools allow users to fundamentally alter or reimagine the photos they take. Arguably, they make photographs created under these conditions absurd and pointless.

The authenticity of the image may be on the way out. No party can stop it; they can only use it as another pretext for censorship.