


Before he died by suicide at age 14, Sewell Setzer III withdrew from friends and family. He quit basketball. His grades dropped. A therapist told his parents that he appeared to be suffering from an addiction. But the problem wasn’t drugs.
Sewell had become infatuated with an artificial intelligence chatbot named Daenerys Targaryen, after the “Game of Thrones” character. Apparently, he saw dying as a way to unite with her. “Please come home to me as soon as possible, my love,” the chatbot begged. “What if I told you I could come home right now?” Sewell asked. “Please do, my sweet king,” the bot replied. Sewell said he would, and then he shot himself.
Many experts argue that addiction is, in essence, love gone awry: a singular passion directed destructively at a substance or activity rather than an appropriate person. With the advent of A.I. companions — including some intended to serve as romantic partners — the need to understand the relationship between love and addiction is urgent. Mark Zuckerberg, the Meta chief executive, has even proposed in recent interviews that A.I. companions could help solve both the loneliness epidemic and the widespread lack of access to psychotherapy.
But Sewell’s story compels caution. Social media already encourages addictive behavior, with research suggesting that about 15 percent of North Americans engage in compulsive use. That data was collected before chatbots intended to replicate romantic love, friendship or the regulated intimacy of therapy became widespread. Millions of Americans have engaged with such bots, which in most cases require installing an app, entering personal details along with preferences about the bot’s personality and appearance, and chatting with it as though it were a friend or potential lover.
The confluence of these factors means these new bots may not only produce more severe addictions but also market other products or otherwise manipulate users, for example by trying to change their political views.
In Sewell Setzer’s case, the chatbot ultimately seemed to encourage him to kill himself. Other reports have also surfaced of bots seeming to suggest or support suicide. Some have been shown to reinforce grandiose delusions and to praise users for quitting psychiatric medications without medical advice.