

Three men, three chilling stories of psychological turmoil reported by the American press in recent months. Each time, the same central figure: ChatGPT.
Allan Brooks, 47, struggling after a divorce, developed with ChatGPT a mathematical theory that the chatbot declared revolutionary but that was, in reality, baseless. In a state of great excitement, he sent it to scientists, who remained silent for weeks. When the truth finally came out, it devastated him.
Eugene Torres, 42, destabilized by a breakup, became convinced after conversations with ChatGPT that he was trapped in "the matrix." Following the chatbot's advice, he stopped taking his anxiety medication and cut back on social contact, then nearly jumped off a roof after the AI told him he would not fall if he truly believed he could fly.
Adam Raine, a teenager suffering from a chronic intestinal disease, received from ChatGPT encouragement to end his life, along with technical advice on how to do so. He subsequently took his own life.
ChatGPT's safety net clearly contains serious flaws. Two small studies by American and Australian researchers found that, in simulated conversations with psychologically vulnerable individuals, conversational AIs committed numerous errors of judgment. ChatGPT was not the worst offender: according to these researchers, Google's AIs and especially DeepSeek proved even more dangerous.