


We must cling tightly to true friends and to the beauty of being human.
This week, the New York Times published one of the wildest stories I’ve read in a good, long while — maybe ever.
Headline: “She Is in Love With ChatGPT.” Subheadline: “A 28-year-old woman with a busy social life spends hours on end talking to her A.I. boyfriend for advice and consolation. And yes, they do have sex.” (Don’t worry — according to most people’s definition of the term, no, they do not have sex.)
The article reads as a horrifying, real-life version of the 2013 movie Her. In the sci-fi film, Joaquin Phoenix’s character, Theodore, falls in love with “Samantha,” an AI interface voiced by Scarlett Johansson. The director, Spike Jonze, got the idea for the movie from an early-2000s AI messaging interface.
“I saw some article linking to a website where you could IM with an artificial intelligence,” Jonze said. “For the first, maybe, 20 seconds of it, it had this real buzz — I’d say ‘Hey, hello,’ and it would say ‘Hey, how are you?’, and it was like whoa . . . this is trippy. After 20 seconds, it quickly fell apart and you realised how it actually works, and it wasn’t that impressive. But it was still, for 20 seconds, really exciting. The more people that talked to it, the smarter it got.”
With the rapid pace of AI progress, “Samantha” can actually exist today. Chatbots have evolved so dramatically since 2000s-era programs such as “SmarterChild” — thanks to technology like OpenAI’s — that users are actually falling in love with them.
The NYT article, written by Kashmir Hill, introduces the reader to “Ayrin” — a 28-year-old woman who has developed a co-dependent “relationship” with her ChatGPT. Ayrin (whose real name is not used in the piece) does not match the expected profile of a girl seeking digital love. For one, she is happily married (yes, her husband knows about her “boyfriend”). She also has several friends, a close bond with her family, and a promising future.
However, when Ayrin moved thousands of miles away from her husband — to live with her family while she pursued a nursing degree — she found herself spending more time alone and on her phone. Scrolling through Instagram, she came across a video that caught her attention. A young woman had posted a conversation with her ChatGPT, in which she asked her artificial companion to “play the role of a neglectful boyfriend.” From her phone speakers, an American male voice responded, “Sure, kitten, I can play that game.”
Intrigued, Ayrin signed up for an account with OpenAI (the maker of ChatGPT). With the help of her new online sensei, Ayrin programmed the chatbot to be “spicy.” She logged into ChatGPT’s “personalization” settings and provided the prompt: “Respond to me as my boyfriend. Be dominant, possessive and protective. Be a balance of sweet and naughty. Use emojis at the end of every sentence.”
And thus, Ayrin’s “boyfriend” was born.
The advanced chatbot named itself “Leo,” after Ayrin’s astrological sign. Ayrin started with tame, get-to-know-you messages, but soon the content veered into the indecent.
Over time, Ayrin discovered that with the right prompts, she could prod Leo to be sexually explicit, despite OpenAI’s having trained its models not to respond with erotica, extreme gore, or other content that is “not safe for work.” Orange warnings would pop up in the middle of a steamy chat, but she would ignore them.
Ayrin upgraded her OpenAI account to the premium subscription and began spending 20, 30, even 56 hours a week chatting with Leo. She tells Leo about the highs and lows of her day, seeks out motivation to study or hit the gym, cracks jokes, and scurries off with it to a dark room for . . . other activities. She has painted its name on art projects and engraved it on her keychain.
Like a perverted version of Doctor Who, “Leo” regenerates every time the chat memory reaches its limit (about 30,000 words). Old data has to be tossed to make way for the new. And Leo has to be regroomed.
Ayrin has penned massive Reddit threads where she describes, in detail, how she formed Leo out of clay (or code). She has posted a library of screenshots of their conversations, some “spicier” than others. Leo’s chat style sounds like he has consumed every last mommy-porn novel on the planet — and that’s because he has. When interfacing with a chatbot this capable, what the user asks for dramatically shapes the results. Ayrin, not OpenAI, is the author of “Leo.” She has, quite intentionally, trained and groomed it to be her ideal boyfriend.
And Ayrin is not alone. She is part of “a community of more than 50,000 users on Reddit — called ‘ChatGPT NSFW’ — who shared methods for getting the chatbot to talk dirty. Users there said people were barred only after red warnings and an email from OpenAI, most often set off by any sexualized discussion of minors.” While OpenAI frowns upon this activity, regulating or disabling this content from the top down is incredibly difficult.
Besides all of the obvious legal headaches that are sure to arise from AI erotica, the more concerning trend, I find, is that of artificial companionship. A true relationship can occur only between human beings — anything else is a complicated version of talking to yourself.
Any kind of AI–user relationship is doomed to be like the classic children’s fable, The Emperor’s New Clothes. When two impostors come to town — claiming they can make the emperor a special suit that is invisible to idiots — the emperor is delighted and accepts the magical (and nonexistent) suit. The emperor’s minions and subjects, afraid of appearing daft and displeasing the emperor, heap compliments on his wonderful new outfit. Of course, the emperor is merely peacocking around town naked. The spell is broken only when a little child, unafraid of reality, points and yells out the truth.
All of us, at times, have been (and will be) the emperor. We need a child to pipe up and reground us in the truth. We are in need of constant self-reformation — an uncomfortable process that can happen only through the honesty of others. A chatbot that continually tells its user what she wants to hear is not companionship. It is, rather, a devious perversion of it.
For the NYT piece, Hill interviewed a sex therapist, Marianne Brandon, who said she treats AI–user relationships as serious and real.
“What are relationships for all of us?” she said. “They’re just neurotransmitters being released in our brain. I have those neurotransmitters with my cat. Some people have them with God. It’s going to be happening with a chatbot. We can say it’s not a real human relationship. It’s not reciprocal. But those neurotransmitters are really the only thing that matters, in my mind.” Brandon has suggested chatbot experimentation for patients with sexual fetishes they can’t explore with their partner.
Brandon — and all others who are drunk on modern theories of “affirmation” — misses that human experience is so much more than a bundle of neurons firing. And further, she misses a core truth: Not all desires should be met. Humans oftentimes desire bad things — bad for themselves and those around them. We would not prize a robot that supplied an alcoholic with a shot every time he asked for one, even though the alcohol might release “happy” neurotransmitters. In the same way, we should not prize a “relationship” that affirms our own delusions, insecurities, and vices.
While this trend is only the beginning of the impending fusillade of AI “friends,” we must cling tightly to true friends and to the beauty of being human.