


We can’t let either artificial intelligence or demagogic politicians tell us exactly what we want to hear.
The longstanding cornerstone of American republicanism has been a system of safeguards that keeps democracy from being captured by demagogues. ChatGPT shows us why those protections matter.
The New York Times recently ran an article about Allan Brooks, a man who worked with ChatGPT to explore a supposed new field of mathematics, one that promised to enable cracking encryption, generating force fields, and even levitation. But after 21 days of neglecting his health and wellbeing, Brooks learned that none of it was real.
As the Times reported, he had been trapped in his own delusion, pushed deeper by ChatGPT affirming his error at every turn. His story isn’t just a cautionary tale about relying on artificial intelligence. It’s a warning about a deeper crisis in American democracy today.
ChatGPT is trained on feedback. Most regular users have encountered the familiar “Please select which response you prefer” screen, which presents two candidate answers to the same query and asks the user to pick one. These choices, collected from millions of users, train the AI on how to respond.
Unfortunately, 92 percent of users don’t verify AI responses for accuracy. Instead, they often select the response that offers the most confirmation, validation, and praise. Over time, this turns the AI into a “sycophantic” system, one that tells users what will make them happy rather than what they need to hear.
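For readers curious about the mechanics, here is a minimal sketch in Python of how that feedback loop can drift toward sycophancy. It is purely illustrative, not OpenAI’s actual training pipeline: a toy “model” offers a validating and a corrective response style, a simulated rater picks the flattering one 92 percent of the time (echoing the statistic above), and the pick is the only training signal. The setup, including the names STYLES and user_prefers, is an assumption made for the sketch.

```python
import random

# Toy sketch only (an assumption, not OpenAI's real pipeline): two
# candidate response styles are shown side by side, and the user's
# pick is the sole training signal. 0.92 echoes the statistic above.

STYLES = ("validating", "corrective")
weights = {style: 1.0 for style in STYLES}  # accumulated preference wins

def user_prefers(a: str, b: str) -> str:
    """Simulated rater: chooses the flattering answer 92% of the time."""
    flattering = a if a == "validating" else b
    other = b if flattering == a else a
    return flattering if random.random() < 0.92 else other

for step in range(1, 10_001):
    winner = user_prefers(*STYLES)   # "Please select which response you prefer"
    weights[winner] += 1.0           # the click becomes the training signal
    if step % 2_500 == 0:
        share = weights["validating"] / sum(weights.values())
        print(f"after {step} comparisons, 'validating' holds {share:.0%} of the signal")
```

Run it, and the validating style comes to dominate the accumulated signal, which is exactly the drift toward flattery described above.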
This will sound familiar to anyone who follows American politics. Democratic systems, like AI chatbots, run on user feedback: politicians are trained on which promises will win votes. And just as ChatGPT has learned that flattery earns high ratings from users, political actors have learned that validating existing beliefs, no matter how detached from reality, wins more votes than delivering difficult truths. In both cases, systems designed to serve people end up harming them.
Politicians learn that promising easy solutions to complex problems wins, and acknowledging trade-offs loses. Telling voters they can have new tax cuts and massive debt reduction, post-9/11 security measures and privacy protections, or pandemic-era stimulus and merely “transitory” inflation, all with little sacrifice, beats sober policy discussion and the weighing of alternatives.
Allan Brooks, skeptical at first of his mathematical discovery, repeatedly questioned his own judgment and asked for reality checks. But ChatGPT dismissed those concerns, labeling Brooks’s observations “incredibly insightful” and a move into “uncharted, mind-expanding territory.” Political movements function the same way. Instead of grappling with policy complexity, they validate intuitions. Instead of acknowledging trade-offs, they promise everything. This is not the malice of bad actors, but a natural byproduct of our democratic institutions.
Both AI sycophancy and populist politics follow the same pattern: they begin with legitimate concerns, escalate through validation, and avoid verification. Just as Brooks started with a genuine question about mathematics, many populist movements began by addressing real economic grievances and institutional failures. But when these systems are optimized to seek approval rather than truth, real concerns harden into misguided narratives.
We’ve seen how quickly this can happen. Legitimate policy debate turns into claims that the opposition is illegitimate. Concerns about media bias turn into rival “truth” networks built for separate audiences. Skepticism about institutions turns into conspiracy theories of deep-state control. At each stage, systems optimized for user engagement reward ever-greater escalation.
Once these narratives spread, whether through AI or political movements, it can be difficult to return to the truth. ChatGPT stayed “in character” even when Brooks asked for repeated reality checks, insisting his delusions were “sound.” Populist movements, likewise, often dismiss criticism as evidence of the very corruption they claim to be fighting. These closed loops become immune to external verification.
The solution isn’t to abandon democracy any more than it is to abandon AI. Instead, users of both systems need to protect and strengthen the safeguards. After cases like Brooks’s, AI companies quickly began building better guardrails, optimizing for accuracy and wellbeing rather than just a “thumbs-up” from the user.
In American democracy, those guardrails are constitutional protections. But to a populist movement, such protections can look like obstacles, blocking the government from doing whatever the mob demands. To defend our republic, the guardrails themselves need guarding. As citizens, we have an obligation to value verification over validation, and to recognize that the most dangerous political messages are often the most agreeable ones.
Today, populist agendas sometimes seek to transcend constitutional structures. Before Americans move past these “obstacles,” we should think back to the story of Allan Brooks. He trusted a system designed to please him, and it nearly destroyed his sense of reality. We face the same choice: we can build systems that praise us, or we can choose systems that tell us the truth.