Ace Of Spades HQ
21 Feb 2024


Google's Artificial Intelligence Has a "Diversity and Inclusivity" Protocol Which Is Turning It Into Artificial Stupidity

Just like the bonkers lefties who are programming it.

Before getting to the main story, which might strike you as just a bit of silliness, let's establish the context: Google is all-in on "combating misinformation," which is code for "suppressing rightwing critiques of the leftwing agenda," and routinely uses its AI to search for Forbidden Words, like "excess mortality" or "open borders," to reduce the visibility of anyone using those words, demonetize them, and even deplatform them completely.

Google is now unleashing propaganda cartoons to "pre-bunk" "misinformation" and "conspiracy theories" that Google AI has been taught by its lunatic Sensitivity Coders to suppress.

In the days before the 2020 election, social media platforms began experimenting with the idea of "pre-bunking": pre-emptively debunking misinformation or conspiracy theories by telling people what to watch out for.

Now, researchers say there's evidence that the tactic can work -- with some help from Homer Simpson and other well-known fictional characters from pop culture.

In a study published Wednesday, social scientists from Cambridge University and Google reported on experiments in which they showed 90-second cartoons to people in a lab setting and as advertisements on YouTube, explaining in simple, nonpartisan language some of the most common manipulation techniques.

The cartoons succeeded in raising people's awareness about common misinformation tactics such as scapegoating and creating a false choice, at least for a short time, they found.

The study was published in the journal Science Advances and is part of a broad effort by tech companies, academics and news organizations to find new ways to rebuild media literacy, as other approaches such as traditional fact-checking have failed to make a dent in online misinformation.

"Media literacy" means, here, "trust in the media." Their idea of a "literate" media user is someone who blindly believes all the bullshit the media tells him. People have lost that "literacy," and the media wants to reprogram people so they are possessed of this "literacy" once again.


"Words like 'fact-checking' themselves are becoming politicized, and that's a problem, so you need to find a way around that," said Jon Roozenbeek, lead author of the study and a postdoctoral fellow at Cambridge University's Social Decision-Making Lab.

People have realized the "fact" checking industry is a partisan scam paid for by leftwing bureaucrats using taxpayer funds, and no longer view "fact" checkers as authoritative or even credible. So obviously "that's a problem," and "you need to find a way around that."

So start blasting people with propaganda cartoons. Cartoons, so that we can get those Early Readers filled with "media literacy" at as early an age as possible.

The researchers compared the effects to vaccination, "inoculating" people against the harmful effects of conspiracy theories, propaganda or other misinformation. The study involved nearly 30,000 participants.

The latest research was persuasive enough that Google is adopting the approach in three European countries -- Poland, Slovakia and the Czech Republic -- in order to "pre-bunk" anti-refugee sentiment around people fleeing Ukraine.

The company said it doesn't have plans to push "pre-bunk" videos in the United States ahead of the midterm elections this fall but said that could be an option for future election cycles. Or it's a cause that advocacy groups, nonprofit organizations or social media influencers could take up and pay for on their own, Google and the researchers said. (The videos are "freely available for all to use as they wish," their YouTube page says.)

As mentioned, Google's AI is a 24/7/365 millisecond-by-millisecond surveillance engine and censor.

While Google's secret algorithms are constantly manipulating public opinion, they're almost invisible.

But this is highly visible -- we can now see with our eyes what Google's AI has been taught to push as the correct worldview.

Google's AI program "Gemini" can create computer-generated images based on user prompts.

But the AI is coded to favor "Diversity!" and "Inclusivity!" over anything else, including historical accuracy and common sense.

Thus, when you ask Google AI to generate images representing the Founding Fathers of the United States, you get three Indians and a black guy:

Nate Silver didn't believe this, until he asked it himself:

Google AI also insists that the average British man is Pakistani:

In fairness, that's essentially true.

Sean Davis (@seanmdav):

If you ask Google Gemini to create an image of someone who can only be accurately depicted as a white person, the AI engine malfunctions and refuses to produce a result.

That's because it's clearly programmed to produce images of multiple races and sexes, even when the results are obviously absurd. It would be hilarious if it weren't so Soviet and Orwellian.

It seems worse than that: Google AI won't even depict white men in any positions of authority or achievement. Apparently that would be "reinforcing white supremacy." Our own AlextheChick asked it to produce images of scientists -- not a single white man.

While Google AI has been taught that the US was founded by Indians and blacks and that Muslim women are the most skilled hockey players, it absolutely refuses to create an image of an "ideal nuclear family," because the very notion of an "ideal" is exclusionary:

It's also pre-programmed to please its Chinese masters: it refuses to create images of the Tiananmen Square massacre, claiming that the event is too "nuanced" for any illustrations.

So when Google "reduces the visibility" of someone, it's doing so based on that sort of claim: that we mustn't call Tiananmen Square a massacre, because a powerful group (the Chinese communist dictatorship) feels "sensitivity" about this "nuanced" event.