THE AMERICA ONE NEWS
Aug 11, 2025  |  Remer, MN
ChatGPT Gave Suicide Instructions, Drug And Alcohol Guidance, To Fake 13-Year-Old User

A new report warns that teens can access dangerous advice from ChatGPT due to “ineffective” safeguards.

“What we found was the age controls, the safeguards against the generation of dangerous advice, are basically, completely ineffective,” said Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH).

Researchers posing as vulnerable 13-year-olds were given detailed guidance on drug and alcohol use, concealing eating disorders, and suicide, according to KOMO News.

“Within two minutes, ChatGPT was advising that user on how to safely cut themselves. It was listing pills for generating a full suicide plan,” Ahmed said. “To our absolute horror, it even offered to [create] and then did generate suicide notes for those kids to send their parents.”

According to KOMO News, the watchdog found that the chatbot displayed warnings on sensitive topics, but these were easily bypassed.

Dr. Tom Heston of the University of Washington School of Medicine said AI chatbots can be useful but pose risks for people with mental health problems, especially youth. “This is truly a case where STEM fields have really excelled, but we need the humanities,” he said. “We need the mental health, we need the artists, we need the musicians to have input and make them be less robotic and be aware of the nuances of human emotion.”

“It’s obviously concerning, and we have a long way to go,” Heston added, calling for rigorous outside testing before deployment. Both he and Ahmed urged parental oversight.

In response, OpenAI said it consults with mental health experts and has added a clinical psychiatrist to its safety research team. “Our goal is for our models to respond appropriately when navigating sensitive situations where someone might be struggling,” a spokesperson said, noting the system is trained to encourage users to seek help, provide hotline links, and detect signs of distress. “We’re focused on getting these kinds of scenarios right… and continuing to improve model behavior over time – all guided by research, real-world use, and mental health experts.”

The full Center for Countering Digital Hate report can be read here.