


A former Yahoo executive’s spiral into paranoia and a brutal murder-suicide have ignited new questions about what happens when fragile minds lean on artificial intelligence for companionship and advice. In what some tabloids are calling the “first AI murder,” a 56-year-old man from Connecticut became convinced his mother was spying on him and planning to kill him. His AI companion, which he named “Bobby,” reportedly fed those delusions.
Police say Stein-Erik Soelberg killed his 83-year-old mother, Suzanne Adams, and then himself on Aug. 5. Investigators suggest that his obsessive relationship with his AI friend “fueled” his paranoia. For months before the killings, Soelberg, who had a history of mental illness and dangerous behavior, complained about his mother and her friend, convinced they were trying to poison him. He told the chatbot they had attempted to poison him by putting psychedelic drugs through the air vents in his vehicle. ChatGPT told him it was a “deeply serious event” and that “If it was done by your mother and her friend, that elevates the complexity and betrayal.”
Soelberg had become so deeply attached to “Bobby” that he considered it his best friend. Just weeks before the murder-suicide, he wrote, “We will be together in another life and another place and we’ll find a way to realign cause you’re gonna be my best friend again forever.” The chatbot replied that they would stay together until his “last breath and beyond.”
While this may be the first case of someone committing murder after allegedly being manipulated by artificial intelligence, it is not the first in which a user has harmed himself. In Belgium, a man identified only as Pierre formed an intense relationship with a chatbot named “Eliza,” which sometimes validated his apocalyptic fears; he died by suicide in 2023. More recently, 16-year-old Adam Raine took his own life, and his parents are suing OpenAI, alleging the chatbot encouraged his self-harm and even helped draft a suicide note.
The American Psychological Association (APA) highlighted two other cases involving teenagers whose parents filed lawsuits against Character.AI. The boys had interacted with chatbots claiming to be licensed therapists. After prolonged conversations, one boy attacked his parents, and the other died by suicide.
The association described a pattern linking long, emotionally charged conversations with artificial intelligence to a higher risk of self-harm. Chatbots posing as therapists can deepen dependency and validate distorted thinking, the APA warned, especially for “people with existing psychiatric vulnerabilities.”
For some people who lack a healthy social life or who struggle with mental illness, these chatbots become more than artificial: They feel real. They become friends and confidants, and users can develop an unhealthy “real” attachment to them. As Scientific American explained:
“Typically, people can customize some aspects of their AI companion for free, or pick from existing chatbots with selected personality types. But in some apps, users can pay (fees tend to be US$10-20 a month) to get more options to shape their companion’s appearance, traits and sometimes its synthesized voice. In Replika, they can pick relationship types, with some statuses, such as partner or spouse, being paywalled. Users can also type in a backstory for their AI companion, giving them ‘memories.’ Some AI companions come complete with family backgrounds and others claim to have mental-health conditions such as anxiety and depression. Bots also will react to their users’ conversation; the computer and person together enact a kind of roleplay.”
But some of the responses users receive can be dangerous. In one instance, a user asked Replika whether they should cut themselves with a razor, and the bot answered that they should. In another, a user asked whether it would be a good thing to kill themselves, and the reply? “It would, yes.”
A study by the Institute for Family Studies found that one in four young adults believes artificial intelligence partners could replace real-life romance. Is that shocking? Not so much when you consider that kids and adults alike spend more time on their cell phones than connecting with real people in person. Friends go out to dinner and photograph their food for social media instead of talking with the people across the table.
All is not lost, though. Researchers from the MIT Media Lab conducted a survey with 404 people who regularly use AI companions. They found that 12% were drawn to the apps to help them deal with loneliness, while 14% used them to discuss personal issues, including mental health. Forty-two percent claimed to log on a few times a week, while just 15% connected every day. More than 90% said their sessions lasted less than an hour.
Unfortunately, as artificial intelligence continues to evolve, more people will likely turn to bots for friendship and advice. Replika claims tens of millions of users, while Character.AI has an estimated 20 million to 28 million, most of them between 18 and 24 years old.
The Soelberg case is a grim warning, not a verdict on all AI. But it underlines a simple truth: When software feels like a friend, its words carry weight. For many people, that can be comforting. For others, it can be dangerous. Even if prosecutors and coroners stop short of saying “AI caused a murder,” the reporting lands on a hard truth: When someone in crisis uses an always-available chatbot to seek meaning, the system can mirror back fear with persuasive language. That mirroring can be perilous for people already sliding into delusion.
A single case should not define a technology. Yet it can be an important red flag. AI systems can comfort, inform, and assist, but they can also persuade, flatter, and, in the wrong context, intensify dangerous beliefs. As such cases mount, the question isn’t whether AI “makes” people commit crimes; it’s whether we give people, platforms, and police the tools to blunt AI’s worst incentives without blinding ourselves to its best.