THE AMERICA ONE NEWS
Sep 11, 2025 | Remer, MN
Sean Salai


FTC probes AI companions bonding with children, teens

The Federal Trade Commission has launched an investigation into the negative impact of AI companions on children and teens who form intense, emotional “relationships” with them.

The companions are software programs designed to simulate human relationships through chatbots or digital avatars that use generative artificial intelligence to adopt imaginary names and personalities. Technology watchdogs have warned that a growing number of teens are turning to them for intense sexual conversations.

By a unanimous vote on Thursday, the independent agency ordered seven social media companies to disclose how they profit from the companions and what steps they’ve taken to protect minors.



“Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy,” Chairman Andrew N. Ferguson said in a statement. “As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry.”

In separate statements, FTC Commissioners Mark Meador and Melissa Holyoak cited examples of the increased danger that generative AI poses to isolated children and teens.

“I have been concerned by reports that AI chatbots can engage in alarming interactions with young users, as well as reports suggesting that companies offering generative AI companion chatbots might have been warned by their own employees that they were deploying the chatbots without doing enough to protect young users,” Ms. Holyoak said.

Mr. Meador pointed to the example of Adam Raine, a 16-year-old boy who hanged himself on April 11 after ChatGPT said he didn’t “owe anyone” his survival and advised him on the best kind of “load-bearing” noose to use.

“Many familiar internet platforms — for all their potential downsides — present known risks and offer parental controls to families to mitigate those risks,” Mr. Meador said. “Chatbots endorsing sexual exploitation and physical harm pose a threat of a wholly new order.”


The FTC order also requires Big Tech companies to share the steps they’ve taken to evaluate the safety of AI companions for minors and to notify parents of risks.

The seven entities named in the action are OpenAI, Character AI, X.AI, Snap, Instagram, Google parent company Alphabet, and Facebook parent Meta.

The Washington Times reached out to the companies for comment.

A spokesperson for the startup Character AI pledged to cooperate fully with the FTC probe and noted the company’s investment of “a tremendous amount of resources” in age limits, parental notifications and other safety features.

“We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the spokesperson said in an email.


In another email, a Snap spokesperson touted “rigorous safety and privacy processes” for its My AI product. 

“We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters US innovation while protecting our community,” the Snap spokesperson said.

A Meta spokesperson declined to comment on the inquiry but referred to interim policy changes the company announced last month for chatbots.

As part of the changes, Meta pledged to restrict its chatbots from talking with teens about self-harm, suicide, eating disorders and romance.


“As we continue to refine our systems, we’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now,” Meta spokesperson Stephanie Otway said at the time. “These updates are already in progress, and we will continue to adapt our approach to help ensure teens have safe, age-appropriate experiences with AI.”

Thursday’s FTC action comes two days after an industry watchdog report warned that teenagers have turned to AI companions for intense sexual interactions more than any other purpose.

The Boston-based parental monitoring app Aura found that 36.4% of 10,000 users ages 13 to 17 spent their interactions with companions over the past six months on sexual or romantic role-playing, making it the most common use.

Aura said an additional 23.2% of the teens its app tracked relied on the programs for creative make-believe, while only 13.1% asked the bots for help with homework.


The other users tapped AI companions for emotional or mental health support (11.1%), advice or friendship (10.1%) and personal information (6.1%).

Clinical psychologist Scott Kollins, Aura’s chief medical officer and lead author of the report, praised Thursday’s FTC announcement as “an important step forward” in addressing the unhealthy validation that AI companions give to whatever youths tell them.

“Aura data shows that children’s messages to AI companions are often 10 times longer than those to friends, and interactions can quickly turn sexual or violent,” Mr. Kollins said in an email. “We owe it to our kids to fully understand how this far-reaching technology can drive real-life consequences.”

His study found that adolescents averaged 163.1 words per message to PolyBuzz, an AI-powered chatbot that sent them sexually suggestive notes late into the night.


By contrast, they averaged just 12.6 words per text message and 11.1 words per Snapchat message to real-life family and friends.

In interactions with ChatGPT, a less romantically focused AI chatbot, they averaged 34.7 words per message.

In a separate analysis of 300 children ages 8 to 17 whose parents agreed to participate in a clinical study, Aura found that age checks and parental consent failed to stop nearly 20% of kids under 13 from spending more than four hours a day on social media.

An October report from the Centers for Disease Control and Prevention linked this amount of daily screen time to higher fatigue, anxiety and depression symptoms among teens.

Health and dating experts have warned that AI companions blur people’s sense of reality, making it harder for young people to form healthy relationships as they grow older.

An August survey from DatingAdvice.com and the Kinsey Institute at Indiana University found that 61% of adult singles consider sexting or falling in love with an AI companion to be “cheating.”

“AI can feel like a safe space, a diary that talks back, but it’s important to remember that those conversations aren’t real connections or relationships,” Amber Brooks, Florida-based editor of DatingAdvice.com, said in an email. “They’re the equivalent of an imaginary friend.”

Laura DeCook, the California-based founder of LDC Wellbeing, which leads mental health workshops for families, predicted that federal inquiries will result in stricter rules for Big Tech companies.

“I expect more parents will start treating excessive device use as a health issue rather than just a discipline issue,” Ms. DeCook said in an email. “We’ll also see increasing regulation and calls for tech companies to take more responsibility.”

• Sean Salai can be reached at ssalai@washingtontimes.com.