THE AMERICA ONE NEWS
Aug 27, 2025
John Mac Ghlionn


A New Psychosis Consuming America

We live in the golden age of outsourcing. First, we sent our jobs overseas. Then we outsourced our memory and research to Google. Now, with ChatGPT, we outsource thinking itself — and, more troubling still, responsibility. And with every handoff comes a new hysteria. The latest specter is “ChatGPT psychosis,” the claim that artificial intelligence is tipping otherwise stable people into madness. At first glance, the outrage seems justified. Look closer, though, and it follows the same tired pattern we’ve seen countless times. What’s missing, as always, is nuance, clear thinking, and a sober assessment of reality.

Consider a recent case involving a California lawyer with a documented history of “temporary psychosis.” After marathon sessions with ChatGPT, he deteriorated. His wife recognized the signs instantly — because she had seen them before. The relapse wasn’t engineered in a lab in San Francisco; it was the latest chapter in a long struggle. The chatbot was not the architect of the collapse, only the stage on which it played out. Yet headlines blared: “AI Drives Man Insane.” What never makes the front page is the less clickable truth: “Man Struggles With Chronic Mental Illness.”

This reflex to assign blame to technology is old and well-rehearsed. Scroll compulsively, and it becomes “social media addiction.” Binge-watch a series, and it becomes “streaming disorder.” Talk to a chatbot, and suddenly it’s “AI psychosis.” The script is predictable: identify a troubled individual, add a new technology, and declare an epidemic. Meanwhile, the deeper forces — poverty, isolation, untreated illness — are ignored. Why? Because they don’t sell.


History shows the same cycle. The printing press was supposed to dull minds. The telephone would kill conversation. Television was going to corrupt society. Video games, we were told, would turn children into killers. Each time, anecdotes were inflated into epidemics. Each time, technology became the villain, while the harder, less sensational truths stayed buried.

The reality is starker but less dramatic. People experiencing psychosis have always latched onto whatever is available: conspiracy tracts, manifestos, and so forth. Swap ChatGPT with doomsday cult documents and political pamphlets, and the pattern remains the same — sleepless nights, sprawling theories, furious note-taking, and concerned loved ones. The props change. The illness does not. (RELATED: AI Chatbots Are Not the Answer to Alleviating Loneliness for Young People)

Of course, none of this means LLMs are harmless. They can magnify existing struggles. They can fuel compulsive use. For children and adolescents, whose brains are still developing, the risks are profound. These tools can warp attention spans. They can foster dependency and blur the line between real life and a simulated one. No one should dismiss those dangers. But to say they cause psychosis is to move from evidence into embellishment, from precision into propaganda. (RELATED: Gen Z Isn’t Just Online — They’re Living in Parallel Realities)

The panic’s timing is telling. Mental health crises didn’t appear with ChatGPT. They’ve been climbing for well over a decade. Depression, anxiety, and suicide — especially among young people — are at record highs. A Yale report shows that between 2007 and 2021, suicide among Americans aged 10 to 24 rose by 62 percent. Sadly, suicide is now the leading cause of death among young people in America.

AI didn’t build this dreary landscape. It simply entered it, and yes, it might play a role in dismantling it.

Also, without meaning to sound like AOC, there’s a class issue worth addressing. When working-class Americans numb themselves with alcohol or drugs, society labels them reckless or immoral. But when professionals spend sleepless nights talking to a chatbot, we call it a “new disorder” and push for sweeping reforms. The bottle, the needle, the chatbot — it’s often the same story wrapped in different packaging. The only things that change are the zip code, the color of the collar, and the income bracket. And in both cases, the root problem isn’t the prop in hand; it’s untreated mental illness compounded by collapsing institutions and the absence of real psychological or social safety nets.

Again, recognizing this doesn’t mean I am minimizing the risks of LLMs. Those risks can be serious, especially for kids. But these tools are not the spark. Putting them in proper context matters: they are accelerants, not instigators.

Until we face those realities, “AI psychosis” headlines will multiply. Not because they’re true, but because they’re convenient. It’s always simpler to demonize a chatbot than to confront the demons residing in our own heads. Blaming technology is easier than demanding responsibility, easier than admitting that free will still exists, and that our choices still matter. The stories will keep coming. The panic will keep flaring. And the real work — the slow, costly, essential work of rebuilding care, strengthening families, and restoring community — will continue to be ignored.
