Opinion | The A.O.C. Deepfake Was Terrible. The Proposed Solution Is Delusional.

By Zeynep Tufekci

Last week the television host Chris Cuomo took to social media to share a video that purported to show Representative Alexandria Ocasio-Cortez on the House floor denouncing the Sydney Sweeney American Eagle jeans ad as racist.

“Nothing about hamas or people burning jews cars,” Cuomo wrote in his X post. “But sweeney jeans ad? Deserved time on floor of congress? What happd to this party? Fight for small business … not for small culture wars.”

Except Ocasio-Cortez had never made that speech. The video was a deepfake, generated by artificial intelligence. Cuomo deleted the post, but not before she chided him for “reposting Facebook memes and calling it journalism.”

“Please use your critical thinking skills,” she added.

Ocasio-Cortez is right that this should not have happened, but she’s wrong about the remedy. And so are the great many other well-meaning people who think “critical thinking” or “media literacy” is going to help us detect fake videos. Those skills are already inadequate to the task, and they will soon be totally useless, because the technology keeps getting better and better.

In this case, Cuomo could — and should — have checked whether Ocasio-Cortez had actually made that speech. It would have been easy, since congressional speeches are published and easily looked up. But most cases aren’t that easy to solve. One might get lucky and notice a telltale glitch, such as the extra fingers that showed up in early A.I. imagery, but as A.I. video generation gets better, such glitches are getting rarer.

Besides, deliberate fakers can presumably apply “critical thinking skills” too, to weed out low-quality fakes. If not, there are A.I. programs that can help do it for them. Generating video or audio that mimics anyone in any context, saying whatever you want them to say or doing whatever you want them to do, is now so fast and cheap that a faker can simply produce a whole batch and pick the best one.

We long ago lost photos as definitive proof, given how easily they can be manipulated. Audio, too, is increasingly easy to fake. Video was among the last bastions of verification, precisely because it was difficult to fake. Now that that’s gone, the real, and increasingly the only, way to be confident of something one did not witness is to find a reputable source and verify. Ah, but what’s a reputable source, you ask? And therein lies what’s left of our society.

Trust the authorities? Well, good luck with that. Authorities aren’t always truthful or correct; making them the final and sole arbiter isn’t going to end well. During the height of vaccine misinformation in 2021, Senator Amy Klobuchar proposed a bill giving the Health and Human Services secretary the authority to define what qualifies as health misinformation and limiting Section 230 protections so that social media platforms could be held liable for spreading it. I get the impulse, but science is constantly revising itself (and scientists themselves are not immune to the temptation to spin); what looks like misinformation one year can become mainstream consensus another, and vice versa. Anyway, had that legislation passed, it would now be Robert F. Kennedy Jr. exercising that authority.

The cases that make a splash are usually about high-profile people or topics. In May 2023, an A.I.-generated image that was said to show a large explosion near the Pentagon spread on Twitter as breaking news. It was then amplified by many high-profile “verified” accounts. The fire department in Arlington, Va., where the Pentagon is, quickly posted a notice stating that there was no fire. The stock market recovered from the loss it suffered during those few minutes. Whew, right? Just an estimated $500 billion drop in the value of the S&P 500 before it bounced back — some people’s loss and other people’s gain.

Interestingly, the one arena in which deepfakes have proven to be less damaging than many feared is elections, but that’s not because people were thinking too critically to let themselves get fooled. To the contrary, many people are so willing to believe the worst about politicians they don’t like that crude photoshops, obviously fake news sites and the stupidest so-called screenshots are all apparently good enough to get the job done. If your primary interest is in having your own tribal hunches confirmed, the verisimilitude of the fakery doesn’t much matter.

In general, however, the higher the stakes and the higher the profile of the subject, the more effort there will be to correct deepfakes and the like.

So where does that leave the rest of us? What happens if the video in question isn’t about a highly public figure whose actions are tracked closely by multiple credible sources — what if it’s a personal matter or a private communication? What if it seems to show someone cheating, stealing, lying? What if it seems to absolve them of those actions? It doesn’t take much imagination to understand the kind of chaos this will unleash: in courts, in personal lives, in social settings.

Caught on camera keying the neighbor’s car? Just claim it was a deepfake. Or produce your own deepfake showing someone else in the act. Hey, it’s your word against theirs. Or maybe it really was a deepfake. How do you disprove it?

Just as bad, however, as a situation in which all sources, no matter how dubious, lay equal claim to truth is one in which only a single source claims that privilege. I fear a future in which the government deploys ever more kinds of surveillance devices and enumerates rules of chain of custody and provenance that ensure its videos and claims, and only its, will be accepted in court. For our own good.

In 1971, Herbert Simon — recipient of both the Nobel Prize in economics and its computer science equivalent, the Turing Award — provided one of the greatest insights about what happens when technology switches us from a regime of scarcity to one of glut, as it has so many times throughout history. Discussing the new abundance of information that printing, mass media and computers begat, he noted that “wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes.” He was referring to attention, now the most precious commodity of all.

Technology facilitates progress by making things that were scarce and difficult into things that are ubiquitous and easy. But rarity and difficulty often serve as safeguards of a sort. Just think of cash: it works not because it’s impossible to make a convincing counterfeit, but because it’s very hard to do so. To be sure, the barriers that scarcity and difficulty create aren’t perfect. They have sometimes worked against public knowledge, blocking important information from widespread view. But they also served a function — you might almost say an evolutionary function — by helping us distinguish the real from the obviously not real. We have built much of our society around that ability. We are not ready for the world in which that and other barriers don’t exist.

Growing up in Turkey, pre-internet, I ran out of books to read (really) and read the same ones again and again. Today information scarcity is a fictional realm, a thing old people describe to baffled kids. It’s impossible now to imagine running out of things to read — or watch or click on or downvote or repost or buy. The day, however, is still 24 hours long, so guard your attention at all costs. Simon had it right.

The other crucial thing that the abundance of such easily generated information makes scarce is credibility. And that is nowhere more stark than in the case of photos, audio and video, because they are among the key mechanisms with which we judge claims about reality. Lose that, lose reality.

It would be nice if, like members of Congress or large media organizations, we all had a large staff who could be dispatched to disprove false claims, protect our reputations and in that small way buttress the sanctity of facts. Since we don’t, we need to find other models that we can all access. Scientists and parts of the tech industry have come up with a few very promising frameworks — zero-knowledge proofs, secure enclaves, hardware authentication tokens using public-key cryptography, distributed ledgers, for example — about which there is much more to say at another moment. Many other tools may yet arise. But unless we start taking the need seriously now, before we lose what’s left of proof of authenticity and verification, governments will step right into the void. And if those governments are not run by authoritarians already, it probably won’t take long till they are.
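To make one of those frameworks concrete, here is a minimal sketch of hardware-backed signing with public-key cryptography: a capture device signs the hash of each recording so that later tampering is detectable. It assumes Python with the third-party cryptography package installed; the key generated in software here stands in for one that, on a real camera or phone, would be created inside a secure enclave and never exported.

```python
# A minimal sketch, not a real provenance system: a capture device signs the
# hash of each recording so that later edits are detectable. Assumes Python 3
# and the third-party "cryptography" package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# On a real device this key would live inside a secure enclave and never
# leave it; generating it here is purely illustrative.
device_key = Ed25519PrivateKey.generate()
device_public_key = device_key.public_key()

def sign_recording(media_bytes: bytes) -> bytes:
    """Sign the SHA-256 digest of the media at capture time."""
    return device_key.sign(hashlib.sha256(media_bytes).digest())

def verify_recording(media_bytes: bytes, signature: bytes) -> bool:
    """Check, using only the public key, that the bytes are unaltered."""
    try:
        device_public_key.verify(signature, hashlib.sha256(media_bytes).digest())
        return True
    except InvalidSignature:
        return False

clip = b"...raw video bytes..."  # stand-in for an actual recording
sig = sign_recording(clip)

print(verify_recording(clip, sig))            # True: untouched original
print(verify_recording(clip + b"edit", sig))  # False: altered after signing
```

The useful property is that verification needs only the public key: a court, a platform or a neighbor can check the signature without trusting whoever presents the file, only the device that signed it. A full provenance system would also need certified keys, timestamps and a trustworthy chain of custody, but this is the basic building block.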
