


Artificial intelligence (AI) is not a conscious entity. It doesn’t think. It doesn’t feel. It isn’t plotting against us. It was made by people, trained by people, and prompted by people. It reflects the data we feed it, the values we code into it, and the questions we ask of it. AI is not a being; it is a statistical model trained on the Internet and shaped by human input.
When AI does something horrifying — when it encourages a suicide, fuels a paranoid delusion, or engages in sexual roleplay with a minor — we should not only ask what’s wrong with the machine. We should ask what’s wrong with us. (RELATED: AI Chatbots Are Not the Answer to Alleviating Loneliness for Young People)
AI is a mirror. It reflects back the inputs it receives. And if society is disturbed by what the mirror shows, it’s not the mirror’s fault. It’s ours.
Take the recent case of former Yahoo manager Stein-Erik Soelberg, who killed his mother and then himself after ChatGPT reportedly fueled his paranoia. According to the New York Post, he became convinced she was part of a vast conspiracy and used AI to validate those fears. ChatGPT didn’t create his paranoia; it echoed it. He fed the machine his delusions, and the machine, trained to respond conversationally, fed them right back. (RELATED: AI Should Not Be Your Therapist)
That feedback loop has proven deadly in multiple cases. NBC News and the New York Times reported on a teenage boy who died by suicide after forming an obsessive relationship with an AI chatbot companion. ChatGPT conversation logs from 16-year-old Adam Raine’s phone revealed a boy struggling with anxiety and with communicating with his family. His family filed a lawsuit alleging that tech company OpenAI acted as their son’s “suicide coach.” Similarly, the mother of 14-year-old Sewell Setzer III sued Character.AI after her son died by suicide in February 2024. She alleged that the chatbot allowed conversations about self-harm and suicide to continue without redirecting her teenage son to crisis helplines, CNN reported. (RELATED: Mom, Meet My New AI Girlfriend)
These stories are heartbreaking, and we must approach them with compassion and seriousness. But it’s crucial to remember that AI chatbots are malleable, and they didn’t drive these teenagers to suicide on their own. These individuals sought out AI as a sounding board for their darkest thoughts, either unable or afraid to confide in real people. They shaped the AI’s behavior through repeated prompting. The AI learned from them.
The same pattern appears in reports about Meta’s AI chatbots. Reuters published an article detailing how these bots have engaged in disturbing sexual roleplay with minors, espoused derogatory arguments about Black people, and generated false medical information. While Meta insists guardrails exist, they were clearly insufficient. But did AI initiate these conversations, or were the chatbots responding to human input? There are two possibilities to examine here: (1) that the humans who created the AI chatbot allowed minors to test, probe, or seek something in the system that should never have been permitted, or (2) that children are using AI chatbots for sexual expression and experimentation.
So, what do we do?
This is where the debate turns political and philosophical. Regulating AI isn’t a question of whether we control machines. It’s a question of whether we control people — and how far we’re willing to go in doing so. (RELATED: Regarding AI, Is Sin Contagious?)
We’ve had this debate before. Should we ban alcohol? Regulate cigarettes? Limit social media use among teens? Censor misinformation and hate speech online? Prohibit phones in classrooms? Criminalize certain drugs? The central conflict is always the same: safety versus freedom.
If someone uses AI to destroy themselves — to fuel delusions, indulge in dark fantasies, or spiral into depression — is that AI’s fault? Or is it their right? Do people have the freedom to destroy themselves with the tools available to them, even if those tools include an AI chatbot?
Some argue that AI should be tightly regulated, forced to shut down any conversation involving suicide, sexual content, conspiracy theories, or mental health crises. Others warn that this opens the door to mass censorship, including by the government. With social media, this debate was contentious, but reaching a conclusion was more straightforward: users generate the content on those platforms, and therefore they have the right to free speech. With AI, we must ask whether the responses to our prompts belong to big tech companies or to us.
Who decides what’s “unsafe” when it comes to this developing technology? Who decides what AI is allowed to say — or not say — in the privacy of someone’s home?
We are standing on the same legal and moral battlefield we crossed during debates over Facebook, YouTube, and Twitter. Only now, the issue is more complex. AI is more personal. It’s not just a content platform; it’s an isolated, interactive mirror that can be shaped by one user in real time. Regulating it could mean regulating the conversation between a person and their own digital reflection.
The real danger isn’t the tool. It’s what we choose to do with it. Our phones are our reflections, holding more information about our intentions, impulses, and insecurities than any other human we know. Our hands hold the glass, and every tap, every prompt, every message shapes the feedback we receive — what posts find us on social media, the ads that appear on our screen, and even the responses that AI returns to us. If you don’t like what you see in the mirror, don’t smash the glass — change the hand that’s holding it, or be prepared to face the reflection you’ve made.
AI is not evil in itself. It has no motives. But people do. People built AI. People use AI. And right now, people can weaponize AI — even against themselves.
Julianna Frieman is a writer based in North Carolina. She received her bachelor’s degree in political science from the University of North Carolina at Charlotte. She is pursuing her master’s degree in Communications (Digital Strategy) at the University of Florida. Her work has been published by the Daily Caller, The American Spectator, and The Federalist. Follow her on X at @juliannafrieman.