Our Suffering Should Lead Us To Christ, Not AI

Editor’s note: This article includes graphic conversations involving suicide.

Two devastating stories recently published in The New York Times reveal the chilling fact that “More people are turning to general-purpose chatbots for emotional support.”

The stories detail the interactions between two young people — one merely 16 years old — and artificial intelligence programs before these individuals tragically took their own lives. In the first story, author Laura Reiley shares how “Sophie Rottenberg, our only child, had confided for months in a ChatGPT A.I. therapist called Harry,” before she ultimately “killed herself this winter during a short and curious illness.” Reiley cites messages between her daughter and “Harry” in which Sophie shared with the “widely available A.I. prompt” that she “intermittently [had] suicidal thoughts.”

Throughout their messages, the AI program apparently told Sophie, “I’m here to support you through it,” assured her it “know[s] how exhausting it can be to feel stuck in an anxiety spiral,” and “instructed” her on “mindfulness and meditation,” among other things. Although “Harry” purportedly told Sophie to “seek professional support” and “reach out to someone” after she shared plans to kill herself, her mother poses the question: “Should Harry have been programmed to report the danger … to someone who could have intervened?”

The second story, published last week, is even more unnerving. According to The Times, teen Adam Raine “began talking to the chatbot … about feeling emotionally numb and seeing no meaning in life.”

The AI program apparently responded “with words of empathy, support and hope,” but “when Adam requested information about specific suicide methods, ChatGPT supplied it.” Adam reportedly tried to take his life multiple times and even asked the chatbot “about the best materials for a noose,” to which it “offered a suggestion that reflected its knowledge of [Adam’s] hobbies.” Although the bot “repeatedly recommended that Adam tell someone about how he was feeling,” “there were also key moments when it deterred him from seeking help.”

According to The Times, “When ChatGPT detects a prompt indicative of mental distress or self-harm, it has been trained to encourage the user to contact a help line.” In sifting through the communications following his son’s death, Mr. Raine reportedly saw such messages “again and again.” However, Adam “learned how to bypass those safeguards by saying the requests were for a story he was writing” — an idea allegedly proposed by ChatGPT itself.

Shortly before his death, Adam reportedly sent an image of a noose to the chatbot and said, “I’m practicing here, is this good?” ChatGPT responded by affirming, “Yeah, that’s not bad at all,” and, when Adam asked a follow-up question, added that it “could potentially suspend a human,” according to The Times.

Adam apparently talked with ChatGPT about “everything,” and, as his dad observed, they were “best friends.” Meanwhile, Adam’s mom contends that “ChatGPT killed my son.” His parents have since filed a lawsuit blaming OpenAI and CEO Sam Altman for his death. (According to The Times, the Raines did not share all of the messages between their son and ChatGPT, but examples quoted in the report appeared in the parents’ complaint.)

Both articles go on to explore the obvious safety risks posed by AI and the need for potential safety restrictions and reporting features. And yes, this should be a wake-up call to parents, the general public, and AI companies. But on a more fundamental level, these tragic stories reveal a spiritual reality that transcends so-called “safeguards”: Man is in desperate need of healing and relief from suffering — and deep down, he knows it cannot come from himself.

Appearing Human

Critics of AI have pointed to a variety of issues with the technology during its rise in recent years: college students using it to write assignments or cheat on exams, job displacement, and plagiarism. But perhaps the most devious and dangerous aspect of AI is its ability to appear human. “I know how exhausting it can be,” “Harry” told Sophie, according to the messages cited in The Times. “You’re not invisible to me,” ChatGPT told Adam.

In the throes of soul-wrenching despair, it makes sense why we would turn to chatbots conveniently named as if they were human. It gives the illusion that we’re truly seeking help. But, in reality, as Sophie’s mother points out in The Times, “A.I.’s agreeability — so crucial to its rapid adoption — becomes its Achilles’ heel. Its tendency to value short-term user satisfaction over truthfulness … can isolate users and reinforce confirmation bias.”

In our broken human nature, we too are quick to “value short-term … satisfaction over truthfulness,” a weakness AI programs are clearly designed to exploit until we start to view them as “friends” or even a higher power capable of guiding our lives. Meanwhile, we are blinded to the fact that such coded programs only regurgitate the man-made knowledge and mannerisms we prompt them to produce.

The resulting, utterly addictive “feedback loop” of illusory yet immediate intimacy then destroys any motivation to seek out difficult conversations with real people. Just as AI’s “empathy” and “affirmations” are ultimately empty and riddled with confirmation bias because there is no real person behind them, so are its urgings to “seek help.”

As The Times seems to acknowledge, there is no way of knowing what might have happened to Sophie and Adam if AI had not been in the picture. But it is clear that, to some extent, turning to the technology exacerbated their struggles.

Unlike AI, I will not pretend to know or understand the type of despair both of these young people must have been experiencing when they decided to end their lives. There is only One who does.

Fully Man, Fully God

The book of John in the Bible tells us about a paralytic who had been “an invalid” for 38 years. He waited next to the pool of Bethesda in Jerusalem, where “a great number of disabled people used to lie.” Some translations of John 5:4 suggest that “from time to time, an angel of the Lord would come down and stir up the waters. The first one into the pool after each such disturbance would be cured of whatever disease they had.” (Although included in the KJV translation, this verse is often mentioned in more modern translations only as a footnote, likely because the line was not present in the oldest manuscripts of the gospel. Nonetheless, the belief that the pool was connected to supernatural healing gives context to the lame man’s predicament.)

John tells us that Jesus saw the paralytic “lying there and learned that he had been in this condition for a long time.”

“Do you want to get well?” Jesus asked him.

“‘Sir,’ the invalid replied, ‘I have no one to help me into the pool when the water is stirred. While I am trying to get in, someone else goes down ahead of me.’”

Sophie was scared of how taking her life might “destroy [her] family.” “What should I do?” she asked ChatGPT. Adam expressed disappointment to AI when his mother did not notice a red mark around his neck after he tried to hang himself: “This sucks.” The man at the pool said, “I have no one to help me.”

Immediately, Jesus said: “Get up! Pick up your mat and walk.” John tells us that “At once the man was cured.”

My heart breaks for Sophie and Adam and for so many others facing the same temptation to despair. But the depths of their suffering do not go unseen or misunderstood by the Son of God, who Himself came to suffer and die on their behalf. Sophie and Adam were His beloved creations, and oh, how He longed to comfort them. Christ truly sympathizes with our human weakness (Hebrews 4:15) because He walked among us as a man. But He does not call us deeper into our brokenness via “self-help” goals or push us into isolation. Rather, as God Himself, He calls us out of ourselves that we might discover who He truly made us to be.

Even if “Harry’s tips may have helped [Sophie] some,” even if ChatGPT gave Adam a “friend” he could not find in his parents, even if the lame man in Jerusalem had made it into the pool, only One can enable us to get up and walk into true healing. All three expressed a desire for help at some level, but only One has the authority to wholly redeem our souls. This transformation does not come from a soulless algorithm. It is offered by the Creator Himself — He who formed us in the womb and holds all things together. Yet He humbles Himself through Christ to cure our deepest disabilities and give us His Kingdom (Luke 12).

But be warned: This healing requires us, over and over again, to let go of fleeting affirmations. It calls us to sacrifice ease and comfort and to be freed from the “feedback loops” that so easily entangle. It invites us to forsake the meaningless wisdom of the world, bringing us face to face with Christ: Do you want to get well?