THE AMERICA ONE NEWS
Jun 4, 2025 | Remer, MN
Aubrey Gulick


The Dark Side of AI: Generating Child Porn

Artificial intelligence may be used for good — like automating monotonous tasks, protecting your home, or even providing harmless entertainment — but it can also generate massive amounts of child pornography in a matter of minutes.

As generative AI has continued to improve drastically over the last year, some pedophiles have begun to work around company safety features to create child pornography using the new technology. They feed the software images that it then turns into brand-new content to be used for their own, and others’, perverted purposes. AI-generated child porn has become such a rampant problem in the last year that 54 attorneys general from 50 states and four U.S. territories sent a letter to Congress earlier this week to request that legislators do something about it. (READ MORE: Conversing With Chatbots)

Specifically, the attorneys general want Congress to “establish an expert commission” to study exactly how child sexual abuse material (CSAM) is being used to produce AI-generated child porn and then to “propose solutions” to deal with the pedophiles behind it.

The Victims of AI-Generated Child Porn

As the technology continues to develop, pedophiles are increasingly able to use it to victimize children. Not only can AI create sexualized images of children who do not exist, but it also creates new images using preexisting sexualized images of children or overlays images of children who’ve never been victimized on those preexisting photos.

“One day in the near future, a child molester will be able to use AI to generate a deepfake video of the child down the street performing a sex act of their choosing,” Ohio Attorney General Dave Yost warned in a news release. “The time to prevent this is now, before it happens.”

Perpetrators aren’t simply viewing content in a dark room on their own — consuming pedophilic material encourages pedophilic behavior. Psychiatrist Norman Doidge has noted that viewing pornographic material fundamentally changes the functions of the perpetrator’s brain. Scientists have found that 50–60 percent of consumers of child porn admit to abusing children.

AI is especially dangerous because not only can it generate images, but it can also track victims down. “As a matter of physical safety, using AI tools, images of anyone, including children, can be scoured and tracked across the internet and used to approximate or even anticipate a victim’s location,” the attorneys general warned in their letter.

The problem is that, currently, very little is being done on a legal front to prevent exactly that from occurring.

The Kansas City Star reports that “[i]n June, the Federal Bureau of Investigation issued a public warning” informing Americans that “predators have been ‘creating synthetic content (commonly referred to as ‘deepfakes’) by manipulating benign photographs or videos to target victims.’” The agency cautioned individuals to be careful about what they choose to put online — especially when it comes to minors.

Even though AI-generated porn increasingly poses a threat to both adults and children, Congress’ debate on AI regulation has been focused on “misinformation,” “election interference,” and deepfake scams. Even if it did want to regulate CSAM, the legislative body could be hamstrung by a 2002 Supreme Court decision. (RELATED from Aubrey Gulick: Will There Be a New Government Agency for AI?)

According to the Heritage Foundation, the Supreme Court “struck down part of a congressional ban on ‘virtual child pornography.'” That ban, the 1996 Child Pornography Prevention Act, had, PBS reports, prevented pedophiles from using “advanced computer imaging technology” to take children and manipulate them into “images of them engaging in sex acts.”

In the 2002 case, the Supreme Court ruled that by banning “virtual pornography,” the law violated free-speech rights. The process used to create virtual pornography — think Photoshop — took hours in 2002; today, it can be done in seconds. If the ban were still on the books, any pedophile who broke it today would face 15 years in prison.

Inadequate Safety Features Currently in Place

That’s not to say that producing child porn is totally unrestricted. In the absence of specific government regulation, private companies are policing the technology themselves. OpenAI told The American Spectator that it carefully excludes hateful and adult content when training its image generator, DALL-E. OpenAI also uses Safer, a program produced by the security company Thorn, to detect any CSAM users might upload.

Its competitor, Stability AI, includes somewhat similar safeguards in its image-generation program. However, it is allegedly possible for users to download the program locally and, with a few lines of code, strip out those safety features. Stability AI has not yet responded to The American Spectator’s request for comment. (READ MORE from Aubrey Gulick: Saving Ourselves From AI Deepfakes)

Those safety features have only been partially successful. Avi Jager, the head of child safety and human exploitation at ActiveFence, informed the Washington Post in June that “[o]n one forum with 3,000 members, roughly 80 percent of respondents to a recent internal poll said they had used or intended to use AI tools to create child sexual abuse images.”

Even if major companies were capable of preventing their own programs from being used in the child pornography industry, nothing says that others can’t design generative AI programs for that express purpose.

“We need to make sure children aren’t harmed as this technology becomes more widespread, and when Congress comes back from recess, we want this request to be one of the first things they see on their desks,” said South Carolina Attorney General Alan Wilson, who spearheaded the attorneys general letter.

In so many ways, AI has turned out to be a Pandora’s box that we’re unlikely to be able to coax shut again. If society is going to coexist with it — and, at this point, there really isn’t a choice — we need to find a way to prevent demented individuals from using AI to harm the most innocent among us: our children.