THE AMERICA ONE NEWS
Jan 16, 2025 | Remer, MN
Sponsor: QWIKET.COM Sports News Monitor and AI Chat.
Elizabeth Allen


Criminals Exploit AI to Create Child Pornography, Blackmail Teens - Report

The rise of artificial intelligence (AI) programs capable of producing highly realistic images has fueled a disturbing surge in child pornography and blackmail attempts on the dark web.

Criminals are leveraging these evolving tools to exploit children and teenagers, using deepfakes as a means of generating and distributing explicit images across the internet.

Yaron Litwin, the Chief Marketing Officer and Digital Safety Expert for Canopy, a leading AI solution aimed at combating harmful digital content, shed light on the techniques employed by pedophiles.

One such technique involves taking a genuine photograph of a fully dressed teenager and transforming it into a nude image. Litwin shared a real-life example of a 15-year-old boy who innocently shared a picture of his bare chest following a workout with an online network of gym enthusiasts. Unbeknownst to him, his image was edited into a nude photo and later used to blackmail him.

RELATED: Major Fears Over Unchecked AI Have EU Taking Giant Leap Toward Strict Regulations

In 2022, major social media platforms reported a distressing 9% increase in suspected child sexual abuse materials (CSAM) on their sites. Notably, 85% of these reports came from Meta's platforms: Facebook, Instagram, and WhatsApp.

According to Antigone Davis, head of safety at Meta, 98% of dangerous content is removed before it is reported, and the company reports more instances of CSAM than any other service. However, the ease and speed with which AI can edit existing images have resulted in devastating experiences for families.

Additionally, AI-generated images of child sexual exploitation, which do not rely on authentic photos, are becoming increasingly prevalent.

Litwin expressed concerns that AI-generated images of children engaged in sexual acts could potentially evade the central tracking system designed to block CSAM from the web.


Law enforcement agencies now face the challenge of distinguishing between real and AI-generated images, which can delay investigations.

These AI-generated images also raise complex questions regarding the violation of state and federal child protection and pornography laws.

While it is generally acknowledged that such materials are illegal even if the depicted child is AI-generated, no court cases have yet addressed the issue. Past legal arguments have highlighted the grey area surrounding virtual child pornography in U.S. law, as the Supreme Court struck down provisions banning such material in 2002.

The concerns surrounding online child sexual exploitation have been exacerbated by various factors. While generative AI tools have found positive applications in various creative fields, pedophiles are now utilizing specialized browsers to access forums and share guides on creating illicit content. These images are then used to deceive children through fake online personas and gain their trust.

Although many AI programs have restrictions on the prompts they respond to, criminals are increasingly exploiting open-source algorithms available on the dark web.

Furthermore, easily accessible AI programs can be manipulated using specific wording and associations to bypass established safeguards and respond to potentially malicious requests.

Litwin acknowledged that AI has the potential to harm children, but he emphasized that Canopy, developed over 14 years of AI algorithm training, serves as an example of “AI for good.”

Canopy is a digital parenting app designed to detect and block inappropriate content in milliseconds, safeguarding children from exposure.

Using advanced computing technology, including AI and machine learning, Canopy identifies and filters out inappropriate content on the web and popular social media apps.

The Smart Filter feature detects and blocks explicit images and videos, while sexting alerts help identify and prevent the sharing of inappropriate photos.

In addition, Canopy’s Removal Prevention ensures that the app cannot be deleted or disabled without parental permission, providing an extra layer of protection for children.

In the face of escalating concerns regarding AI-generated deepfakes of children, solutions like Canopy offer hope for parents and guardians in their mission to keep their children safe online.

RELATED: AI Is a Threat to Christianity: It Can Create a ‘New Bible’, Influential Author Declares ‘We Will Be Beyond The God of The Bible’