ChatGPT is smart, humanlike and available 24/7. That has attracted 700 million users, some of whom are leaning on it for emotional support.
But the artificially intelligent chatbot is not a therapist. It is a very sophisticated word-prediction machine, powered by math, and in disturbing cases it has been linked to delusional thinking and violent outcomes. Last week, Matt and Maria Raine of California sued OpenAI, the company behind ChatGPT, after their 16-year-old son took his own life; for months, he had discussed his plans with the chatbot.
On Tuesday, OpenAI said it planned to introduce new features intended to make its chatbot safer, including parental controls, “within the next month.” Parents, according to an OpenAI post, will be able to “control how ChatGPT responds to their teen” and “receive notifications when the system detects their teen is in a moment of acute distress.”
Parental controls are a feature that OpenAI’s developer community has been requesting for more than a year.
Other companies that make A.I. chatbots, including Google and Meta, have parental controls. What OpenAI described sounds more granular, similar to the parental controls that Character.AI, a company offering role-playing chatbots, introduced after it was sued by Megan Garcia, a Florida mother whose son had died by suicide.
On Character.AI, teenagers must send an invitation to a guardian to monitor their accounts; Aditya Nag, who leads the company’s safety efforts, told The New York Times in April that use of the parental controls was not widespread.
Robbie Torney, a director of A.I. programs at Common Sense Media, a nonprofit that advocates safe media for children, said parental controls were “hard to set up, put the onus back on parents and are very easy for teens to bypass.”