


Pornography generated by artificial intelligence is a moral horror, and there is no clear solution in sight.
There are more delicate ways of broaching that subject, but the conclusion seems unarguable. In mid-August, a theater employee in Florida was arrested for allegedly using generative AI (GenAI) to produce sexually explicit images of children. His arrest follows others in Wisconsin and North Carolina; in Spain, 15 teenage boys were convicted of generating pornographic images of their classmates. (READ MORE: Google’s Report on AI Abuse Isn’t Comforting)
The technology behind the arrests is another evolution of so-called “deepfakes”: AI-generated images of specific actual individuals, intended to be indistinguishable from the real thing. The technology has uses that are legitimate, or at least benign; the 2016 film Rogue One used it to create a digital duplicate of deceased actor Peter Cushing. That duplication, however, required a plaster cast of Cushing’s face and the resources of a digital effects studio — and the effect was still more uncanny than convincing. Six years later, a YouTube artist with neither of those resources could outperform the original.
Deepfake Pornography Threatens Women, Children, and Families
Today, GenAI deepfakes pose a broader challenge. In Ukraine, false images of President Zelensky were used to urge the country’s military to surrender, while scammers have used the technology to pass identity checks. At issue in many of the arrests above, however, is the ability of GenAI image generators to produce “nudified” images. Such systems take an ordinary photo of a fully clothed person and generate a naked or otherwise sexualized body, matched to the original subject’s face and features.
The hazards here are multiple. The corrosive effect of pornography on the user is old news; studies correlate its use with unhappy marriages and extramarital affairs, with one widely-cited survey suggesting porn usage is a factor in 56 percent of divorces. (WATCH: The Weekend Spectator Ep. 2: AI Is Progressing Faster Than You Think)
The particular horror of GenAI pornography, though, is for its targets. Where “revenge porn” cases once involved the distribution of intimate photos shared in private, deepfaked pornography requires neither the victim’s participation nor even their knowledge. Indeed, it’s difficult to see how it can be guarded against; who, in 2024, can erase all of their photos from the web?
In the most wicked cases — as in several of those above — the victims are children. The girls targeted in Spain were reportedly “completely terrified and had tremendous anxiety attacks… [they] were afraid to tell and be blamed for it.” One adult victim said she felt she “was probably better off dead because it was just absolutely, horrendous.”
Potential Legal Solutions Appear Both Insufficient and Far Off
Legal solutions remain unclear. Traditional approaches to obscene material focus on limiting access to providers, prosecuting creators, requiring user identification, or blocking financial transactions. Unfortunately, many sources of AI porn are overseas; other generative AIs can run entirely on a local PC, meaning there is no “provider” to block. Further, since image generation requires virtually no human labor, its marginal cost is almost zero. Like the digital piracy of the aughts, the resulting crimes are decentralized and largely immune to financial obstacles.
Individuals might be prosecuted for the possession, creation, or distribution of images, but here too the matter is uncomfortably murky. A previous federal child pornography law — the Child Pornography Prevention Act (CPPA) — was struck down by the Supreme Court in 2002. The law prohibited “any visual depiction… [that] is, or appears to be, of a minor engaging in sexually explicit conduct;” the Court objected that “the CPPA prohibits speech that records no crime and creates no victims by its production… [in this case], there is no underlying crime at all.”
In other words, manipulated images that merely appeared to be child pornography were not criminal, because no actual children were involved. While Congress rapidly passed a replacement for the CPPA, depictions of a digital simulacrum might face the same problem as before: there is no actual child. Deepfakes of adults, meanwhile, might be pursued only under harassment or defamation charges. (RELATED: The Dark Side of AI: Generating Child Porn)
New legislation attempts to address these concerns. The DEFIANCE Act, which cleared the Senate in July, would allow the targets of deepfakes to sue the creators of those images, while multiple states have passed related legislation. For now, though, legal protections remain a patchwork, and a thin one; a comprehensive response to digital challenges is still hypothetical.
In the meantime, individual organizations might pursue solutions closer to the ground. As evidence mounts that unfiltered web access is a mental health risk, schools have begun banning smartphones; parents, meanwhile, might refuse to purchase them. Private colleges and workplaces can update their codes of conduct to impose clear penalties for the use of such tools, particularly against classmates or colleagues.
Yet such solutions seem badly outgunned. In 2023, deepfake pornographic videos were viewed more than three hundred million times.
The internet of the early 2000s enabled learning and communication in ways previously unimagined, but it also made pornographic material more accessible and ubiquitous than ever. Whatever its eventual virtues, generative AI seems poised to provide another leap downward. So far, no clear fix is in view.