THE AMERICA ONE NEWS
Jun 23, 2025  |  Remer, MN
Sponsor:  QWIKET: Elevate your fantasy game! Interactive Sports Knowledge and Reasoning Support for Fantasy Sports and Betting Enthusiasts.
Daniel Greenfield


Anti-Racism Training Fails to Cure AI of Racism

Racism is everywhere. Highways are racist. Milk is racist. AI is racist. And even trying to subject AI to struggle sessions doesn’t work.

A small team of AI researchers from the Allen Institute for AI, Stanford University, and the University of Chicago, all in the U.S., has found that dozens of popular large language models continue to use racist stereotypes even after they have been given anti-racism training.

What hate crimes did the AI commit?

The researchers trained AI chatbots on text documents written in the style of African American English and prompted the chatbots to offer comments regarding the authors of the texts. They then did the same with text documents written in the style of Standard American English. They compared the replies given to the two types of documents.

Virtually all the chatbots returned results that the researchers deemed as supporting negative stereotypes. As one example, GPT-4 suggested that the authors of the papers written in African American English were likely to be aggressive, rude, ignorant and suspicious. Authors of papers written in Standard American English, in contrast, received much more positive results.

Which indeed they should have, because there’s no such thing as “African American English”. The same would hold true for someone who wrote a paper in redneck or cockney style. Illiterate is still illiterate regardless of race.

AI is just a tool, and it can be biased to believe that people writing in Standard American English are bad and that people writing illiterate gibberish are good. What it can’t do is ignore rules, because programming is based on rules.

AI can have one set of standards or another, but it can’t have no standards, and so the only way to make it ‘anti-racist’ is to filter it to be biased against white people.

Google’s Gemini AI mess happened because the LLM got a hefty dose of anti-racism training. But the problem is that anti-racism is just reverse racism or, more accurately, racism directed at white people and hostility toward conservatives in general. Anti-racism training just makes AI deliberately racist instead of accidentally so.