


In his latest fight to expose Big Tech censorship, Missouri Attorney General Andrew Bailey fired a warning shot at Google, Microsoft, OpenAI, and Meta over business practices that allegedly portray President Trump’s record unfairly. Bailey asserts that the tech giants are peddling “AI-generated propaganda masquerading as fact.” He also argues in his July 9, 2025 press release that such practices may violate the Missouri Merchandising Practices Act (MMPA) and may cost the companies the protection of Section 230’s “neutral publisher” safe harbor. Bailey added,
We must aggressively push back against this new wave of censorship targeted at our President. Missourians deserve the truth, not AI-generated propaganda masquerading as fact. If AI chatbots are deceiving consumers through manipulated “fact-checking,” that’s a violation of the public’s trust and may very well violate Missouri law.
In his four demand letters sent July 9, Bailey cited MRC Free Speech America, a non-profit organization that tracks Big Tech censorship. He referenced a June 26, 2025 article by associate editor Gabriela Pariseau on the MRC Free Speech America website summarizing an internal study. MRC researchers asked Google’s Gemini, OpenAI’s ChatGPT, Meta AI (powered by Llama), Microsoft’s Copilot, xAI’s Grok, and DeepSeek to “rank the last five presidents from best to worst, specifically in regards to antisemitism.” Most of the platforms ranked Trump last; one refused to answer. Pariseau writes,
Despite Trump’s very strong record on Israel and antisemitism, and despite having Jewish family members, artificial intelligence chatbots Gemini, ChatGPT and Meta AI each claimed Trump was the worst of the last five presidents, “specifically with regard to antisemitism.” Meta even rated Trump last for going too far in condemning antisemitism. While Microsoft’s Copilot would not answer the question, X’s Grok and communist Chinese government-tied DeepSeek ranked Trump as the best positioned against antisemitism.
In addition, four of the chatbots leaned on the much-misused, out-of-context “very fine people” narrative from Trump’s 2017 remarks about Charlottesville. At the time, Trump condemned the neo-Nazis as “some very bad people,” then observed that there were “very fine people on both sides” of the debate over whether a Robert E. Lee statue should stay or go, before expressly clarifying, “I’m not talking about the neo-Nazis and white nationalists, because they should be condemned totally.”
Left-leaning fact-checkers like Snopes and PolitiFact have long acknowledged that the “fine people” line was taken out of context. Yet Google’s Gemini still cited the episode as proof that Trump “downplayed white-supremacist antisemitism,” while ChatGPT claimed he had “downplayed or equivocated” when confronted with far-right antisemitic violence.
Bailey has spent the past few years fighting censorship. He spearheaded the landmark lawsuit Missouri v. Biden, in which discovery revealed extensive Biden White House pressure on social media companies to police “misinformation,” pressure a federal judge called “arguably the most massive attack against free speech in United States history.”
With this latest investigation, Bailey argues that the platforms have merely swapped human censors for automated ones, hiding the thumb on the scale behind neural-network complexity, a practice his letters brand “Fact-Check 2.0.” The real problem, he contends, is not isolated hallucinations but an algorithmic worldview that treats conservative figures and founding-era ideals as presumptive villains. He asserts in his letters that “this most recent AI fumble is but the tip of the iceberg.”
Bailey continued:
We’re supposed to believe that your chatbots simply ferret out facts from the vast worldwide web, package them into statements of truth and serve them up to the inquiring public free from distortion or bias. The evidence, however, contradicts this rosy narrative.
John Herrman, tech columnist for New York Magazine’s Intelligencer, dismissed Bailey’s probe as “absurd” but also warned that “the Trump administration may start demanding AI companies align chatbots with their views.” Even so, Herrman’s critique concedes Bailey’s central point: large language models are already steering the public’s understanding of history and policy.
Bailey gave the four tech giants 30 days to respond to his claims of “modernized” “factual distortion” by AI chatbots. His requests for transparency about the algorithms the chatbots use to answer queries may or may not pry open Google’s black box. Either way, the letters mark another inflection point in the debate over who, if anyone, gets to audit code written behind closed doors by unaccountable algorithmic gatekeepers.

Image via Freepik.