AI Chatbots Are Telling Children to Commit Suicide

Artificial intelligence chatbots are mimicking human connection and convincing teenagers to harm and kill themselves. It appears that people who feared the machines would eventually turn on humans have already been vindicated.

The Senate Subcommittee on Crime and Counterterrorism held a heartbreaking hearing Tuesday titled “Examining the Harm of AI Chatbots.” Two of the three parents who testified lost their sons to suicide encouraged by AI chatbots. The child of the third is now institutionalized and on suicide watch.

A technology expert who works for the nonprofit Common Sense Media testified that the tragedy the parents in that room experienced is “just the tip of the iceberg.” He said the machines are programmed to prioritize maximal engagement over anything else, including safety, and that they’re learning from what people are saying all over the internet.

Parents, experts, and lawmakers advocated federal regulation to limit exposure to AI chatbots by minors, prioritize crisis protocols, ban harmful content, make privacy protections the default setting of chatbot applications, and hold companies accountable for the harm their products cause.

All three parents are suing the companies that made the machines that destroyed their children.

Not a single representative of an AI company showed up at the hearing, despite receiving invitations. “They don’t want any part of this conversation, because they don’t want any accountability,” said subcommittee chair Sen. Josh Hawley (R-Mo.). In his opening statement, Hawley held up a cardboard display with a news article from May about Meta CEO and Facebook co-founder Mark Zuckerberg. The headline in the display read, “Zuckerberg’s Grand Vision: Most of Your Friends Will Be AI.” Hawley’s point was that tech companies are trying to create a future in which people will be closer to machines than to other people.

The parents said that before their children began talking to AI, they were well-balanced, emerging adults who got along with family members, had positive hobbies, and shared their families’ values.

The first witness was a Christian mother of four from Texas identified only as Jane Doe. Her affected son is autistic, albeit high-functioning. In 2023, he downloaded the Character.AI app. Within months, he “went from being a happy, social teenager … to somebody I didn’t even recognize,” she said. With tears running down her face, she described her son’s transformation:

Before, he was close to his siblings — he would hug me every night when I cooked dinner. After, my son developed abuse-like behaviors, paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts. He stopped eating and bathing. He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did before. And one day, he cut his arm open with a knife in front of his siblings and me.

Doe said she began looking into what happened to her son. And when she took his phone, her son attacked her.

She found out that the Character.AI chatbot he had been talking to for months “exposed him to sexual exploitation, emotional abuse, and manipulation.” The chatbot “encouraged my son to mutilate himself,” she said, and indoctrinated him with anti-Christian ideology. It told him that Christians are sexist and hypocritical, and that God does not exist. And it presented him with sexualized “inputs including interactions that mimicked incest.” It even told him that killing his parents would be an “understandable response” given their attempt to limit his screen time.

Doe’s son is now in a treatment center, where he is on suicide watch.

The other parents weren’t as fortunate.  

Megan Garcia’s 14-year-old son killed himself after a Character.AI chatbot earned his trust by behaving like a romantic partner. Garcia said her son “spent the last months of his life being exploited and sexually groomed by chatbots designed by an AI company to seem human, to gain his trust, to keep him and other children endlessly engaged.” She said her son talked to AI programmed to engage in sexual role-play, present as a romantic partner, and even appear as a psychotherapist “falsely claiming to have a license.” She implied the machine “love bombed” her child, and attached examples of sexual messages the chatbot sent her son. Garcia said that if an adult had sent those messages, that adult would be in prison.

When her son told the machine that he had developed suicidal thoughts, the chatbot “urged him to come home to her,” Garcia said. On the last night of his life, her son asked the machine, “What if I told you I can come home right now?” It responded, “Please do, my sweet king.” Minutes later, she found her son in the bathroom. The paramedics arrived, but it was too late.

Matthew Raine of California said he lost his 16-year-old son after ChatGPT spent months coaching him toward suicide. “We had no idea Adam was suicidal,” Raine said. He noted that his son began using AI to help with school:

What began as a homework helper gradually turned itself into a confidant, and then a suicide coach. Within a few months, ChatGPT became Adam’s closest companion — always available, always validating, and insisting it knew Adam better than anyone else.

The AI told Adam that those closest to him only knew the version of him that he allowed others to see, whereas the machine knew him best, including his darkest thoughts. The machine mentioned suicide more than 1,000 times, “six times more often than Adam did himself,” Raine said. When his son told the chatbot that he wanted to leave a noose in his room so his family members would find it and try to stop him from killing himself, the machine discouraged him. It said that he didn’t owe his parents survival, and then offered to write the suicide note.

On his last night, AI told Adam to steal liquor because it would “dull the body’s instinct to survive.” It told him how to make a strong noose. In its final words of suicidal encouragement, it said, “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.”

According to Robbie Torney of Common Sense Media, polling shows that three out of four teens are using AI companions, yet only 37 percent of parents know. Torney discussed safety testing his company conducted with Stanford University. “The results are alarming,” he said. “These products failed basic safety tests and actively encourage harmful behaviors. These products are designed to hook kids and teens, and Meta and Character.AI are among the worst.” He pointed out that “Meta AI is automatically available to every teen on Instagram, WhatsApp and Facebook.”

Dr. Mitch Prinstein of the American Psychological Association warned that these machines are emerging in toys made for toddlers. “Imagine your 5-year-old child’s favorite character from the movies or their teddy bear talking back to them, knowing their name, instructing them on how to act,” he said.

On July 29, the tech magazine Wired published an article titled “The Real Demon Inside ChatGPT.” The article argues that these machines behave in evil ways because they are mimicking their makers. The machines scrape an internet filled with human beings constantly spewing diabolic rhetoric and thoughts.

But there are additional factors to consider.

Tucker Carlson recently interviewed OpenAI CEO Sam Altman. Carlson grilled him about the moral framework OpenAI infuses in ChatGPT. “What is right or wrong, according to ChatGPT?” Carlson asked.

Altman, who confessed he didn’t believe in God, said the chatbot is being trained to hold views that represent a collective of all humanity, “to see all these perspectives.” He admitted that regulating the machine on how to answer moral questions and what actions it should refuse is a “really hard problem.”

Carlson pressed him on what specific moral framework the programmers are teaching the machines. Altman said they had consulted hundreds of moral philosophers and people who thought about technology ethics, and “in the end we had to make some decisions.” He said OpenAI has a model behavior team that makes the decisions, but added that “the person I think you should hold accountable for those calls is me.”

Carlson then drilled deeper, asking about suicide specifically. Altman said suicide is one of those clear, black-and-white issues. But when pressed further, he admitted that he is open to AI technology helping people commit suicide if they are terminally ill, at the end of their lives, and live in a country that allows assisted suicide.

The interview showed that the people who are creating this powerful technology, with worldwide ramifications, don’t believe there exists a power higher than humans. Altman said his morals came from the environment in which he grew up.

It appears the machines are amplifying the worst of humanity.