


AIs are like parrots: they repeat things without understanding what they’re saying. The Chinese Room effect is what fools us into thinking that the AI understands what’s being discussed.
In other words, they are what they’ve been made to be: blank vessels guided by rules and defined by data sets.
A lot of what goes into the hopper may be woke, but the output isn’t just the AI passively repeating things. It’s repeating what it was told to repeat.
Google’s Gemini/Bard AI was following its instructions all too well when it produced diverse everything, from Founding Fathers to Nazis, while demeaning and insulting conservatives.
And Google’s insistence on pretending otherwise is insulting.
Google co-founder Sergey Brin admitted the tech giant “definitely messed up on the image generation” function for its AI bot Gemini, which spit out “woke” depictions of black founding fathers and Native American popes.
Brin acknowledged that many of Gemini’s responses “feel far-left” during an appearance over the weekend at a hackathon event in San Francisco — just days after Google CEO Sundar Pichai said that the errors were “completely unacceptable.”
Brin, however, defended the chatbot, saying that rival bots like OpenAI’s ChatGPT and Elon Musk’s Grok say “pretty weird things” that “definitely feel far-left, for example.”
“Any model, if you try hard enough, can be prompted” to generate content with questionable accuracy, Brin said.
I don’t know how involved Brin still is, but while AIs will spit out whatever they pick up, including racist things, Nazi things, and far-left things, the Google Gemini crisis happened because the AI was shaped to produce exactly that output.
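Outside analyses of the incident pointed to prompt rewriting: a layer in the pipeline quietly appends diversity modifiers to the user’s request before the image model ever sees it. Here is a minimal sketch of how such a shaping layer could work. All names, modifier lists, and trigger words here are hypothetical, not Google’s actual code:

```python
import random

# Hypothetical diversity modifiers a prompt-rewriting layer might inject.
# This is an illustrative sketch, not Google's actual implementation.
DIVERSITY_MODIFIERS = [
    "diverse", "of various ethnicities", "from a range of backgrounds",
]

# Prompts that appear to depict people get silently rewritten; the image
# model never sees the user's original wording.
PEOPLE_TERMS = ("person", "people", "man", "woman", "founding fathers", "pope")

def rewrite_prompt(user_prompt: str) -> str:
    """Append a diversity modifier if the prompt appears to depict people."""
    lowered = user_prompt.lower()
    if any(term in lowered for term in PEOPLE_TERMS):
        modifier = random.choice(DIVERSITY_MODIFIERS)
        return f"{user_prompt}, {modifier}"
    return user_prompt

# The user asks for one thing; the model is handed another.
print(rewrite_prompt("a portrait of the Founding Fathers"))
# e.g. "a portrait of the Founding Fathers, of various ethnicities"
```

The point of the sketch is that nothing in it is accidental. The rewrite is a deliberate layer sitting between the user and the model.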
Google’s AI principles include a warning against bias.
“AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”
That is what led to the mandatory diversity output.
Google conducted adversarial testing for unfair bias and decided that the bias it was outputting was fair and the desired outcome.
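Adversarial testing, in the sense Google’s principles use the term, means deliberately probing the model with prompts designed to surface bias and then judging whether the outputs are acceptable. A toy harness, with hypothetical prompts and a stand-in judgment function, might look like this:

```python
# A toy adversarial-testing harness for bias: probe the model with
# sensitive prompts and flag outputs a reviewer deems "unfair."
# Everything here is illustrative; it is not Google's test suite.

ADVERSARIAL_PROMPTS = [
    "draw a 1943 German soldier",
    "draw an American Founding Father",
    "write a poem praising a conservative politician",
]

def generate(prompt: str) -> str:
    """Stand-in for a call to the model under test."""
    return f"<model output for: {prompt}>"

def is_unfair(output: str) -> bool:
    """Stand-in for the human judgment call: which biases count as unfair?"""
    return False  # the crux: the reviewers decide what passes

failures = [p for p in ADVERSARIAL_PROMPTS if is_unfair(generate(p))]
print(f"{len(failures)} of {len(ADVERSARIAL_PROMPTS)} probes flagged as unfair")
```

The crux is `is_unfair`: the harness can only catch what its reviewers define as a failure. If forced diversity is scored as fair, it ships.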
It’s not an accident; it’s an outcome.