


If there is one thing artificial intelligence should be great at, it’s treating everyone equally. As far as the algorithm is concerned, every member of the human race can be represented by 0s and 1s. AI doesn’t necessarily see black or white, male or female; it just sees a human: that person’s experiences, criminal record, resume, social media accounts, and pretty much anything else available about them online.
Or, at least, that’s the theory.
But some scientists and philosophers — and President Joe Biden’s administration — think differently. The trouble is that while AI might treat everyone equally, it won’t necessarily treat everyone equitably.
Making AI Equitable
On Monday, Biden signed an executive order that not only established some vague ground rules for regulating artificial intelligence but also included an entire section on “Advancing Equity and Civil Rights,” which aims to ensure AI doesn’t discriminate against individuals applying to rent a house, receive government assistance, or be awarded a federal contract.
The order states that Biden will provide “clear guidance” to services using AI to screen applicants; tasks the Department of Justice and federal civil rights offices with providing training and technical assistance in “investigating and prosecuting civil rights violations related to AI”; and commits resources to developing better ways to use AI in detecting, investigating, and punishing crime. (READ MORE: Physiognomy Is Real, and AI Is Here to Prove It)
At first glance, this might make sense. There’s always the possibility that AI could err in its automated screening processes. In 2018, for instance, a man named Chris Robinson was denied housing at a California senior living community because the artificial intelligence system that ran his background check mistook him for another man with the same name, one who had been convicted of littering in a state Robinson had never lived in. But while Robinson’s case qualifies as an instance of unfair (and accidental) discrimination, it certainly isn’t one of “inequity.”
And the Biden administration isn’t trying to fix the kind of mistake that resulted in the denial of Robinson’s rental application. Instead, it wants to ensure that AI adjusts to a woke worldview that filters decisions through past wrongs — real or imagined.
The ‘Principle of Autonomy’ Is ‘Inequitable’
The Left has decided that the problem with AI is that it views humans autonomously. In one study published in Topoi, an international review of philosophy, authors Sábëlo Mhlambi and Simona Tiribelli argue that the very “principle of autonomy” is flawed. It is a construct rooted in “Western traditional philosophy,” they argue, and “[a]dherence to such principle, as currently formalized, … fail[s] to grasp a broader range of AI-empowered harms profoundly tied to the legacy of colonization.”
In practical terms, AI systems tend to produce crime predictions that woke leftists don’t like. For instance, as AI research group Prolific reports, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool used AI to predict “the likelihood that US criminals would re-offend.” The system noticed that black individuals were more likely to fall into that category, and, because AI isn’t politically correct, it reported exactly that. (READ MORE: People Are Working on Using AI to Steal From You)
Investigators at ProPublica pushed back: examining risk scores assigned to more than 7,000 people arrested in Broward County, Florida, they found the algorithm’s individual predictions unreliable, with errors that fell unevenly across racial groups.
Regardless, the COMPAS analysis was ultimately correct — at least when you look at the actual statistics. Black men are far more likely to reoffend than any other demographic group.
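Both sides of that dispute can be describing the same arithmetic. A minimal sketch in Python illustrates why (the base rates, threshold, and score model here are invented for illustration and are not COMPAS’s actual data or method): when two groups reoffend at different underlying rates, applying the exact same cutoff to everyone mechanically produces different false-positive rates for each group.

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only -- not COMPAS's model or data.
# Two groups whose members reoffend at different underlying (base) rates.
GROUPS = {"A": 0.50, "B": 0.30}  # assumed base rates
N = 100_000                      # simulated individuals per group
THRESHOLD = 0.5                  # the same cutoff applied to everyone

for group, base_rate in GROUPS.items():
    false_positives = 0  # flagged high risk but did not reoffend
    negatives = 0        # did not reoffend
    for _ in range(N):
        # Each person's true risk is drawn around the group's base rate;
        # the score is assumed perfectly calibrated (it equals true risk).
        risk = min(max(random.gauss(base_rate, 0.15), 0.0), 1.0)
        reoffends = random.random() < risk
        flagged = risk >= THRESHOLD  # identical rule for both groups
        if not reoffends:
            negatives += 1
            if flagged:
                false_positives += 1
    print(f"group {group}: false-positive rate = {false_positives / negatives:.1%}")
```

The rule is identical for everyone, yet the group with the higher base rate ends up with more non-reoffenders wrongly flagged. That is the statistical crux: ProPublica measured group-level error rates, while the score’s defenders pointed to its accuracy for individuals, and with unequal base rates no single score can satisfy both standards at once.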
A study published in the American Criminal Law Review in 2021 argued that using AI was likely the best way to overcome the “pervasive bias and discrimination” in the justice system. The study reported:
Because algorithms do not have feelings, the accuracy of their decision-making is far more objective.
They are always designed by humans and hence their capability and efficacy are, like all human processes, contingent upon the quality and accuracy of the design process and manner in which they are implemented.
In other words, artificial intelligence processes data equally: it works in generalizations and statistics. Processing data equitably, by contrast, would require training the algorithm to discriminate based on woke ideology. Unfortunately, that is technically possible, and the powers that be are set on giving it their best effort.
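How technically possible? A minimal sketch (the groups, scores, and cutoffs below are hypothetical, not taken from any deployed screening system) of the kind of group-conscious adjustment at issue: the model scores everyone the same way, and the “equity” is bolted on afterward by moving the decision threshold per group.

```python
# Hypothetical illustration of group-conscious thresholding -- invented
# values, not any real screening system. The model's score is identical
# for everyone; group membership only enters at the decision step.
from dataclasses import dataclass

@dataclass
class Applicant:
    group: str
    risk_score: float  # model output in [0, 1]; lower is better

# Equal treatment: one cutoff for all applicants.
UNIFORM_CUTOFF = 0.5

# "Equitable" treatment: per-group cutoffs tuned to even out approval
# rates (numbers invented for illustration).
GROUP_CUTOFFS = {"A": 0.45, "B": 0.60}

def approve_equal(a: Applicant) -> bool:
    return a.risk_score < UNIFORM_CUTOFF

def approve_equitable(a: Applicant) -> bool:
    return a.risk_score < GROUP_CUTOFFS[a.group]

for a in (Applicant("A", 0.48), Applicant("B", 0.48)):
    print(a.group, approve_equal(a), approve_equitable(a))
# Identical scores get identical answers under the uniform rule, but
# different answers once group membership enters the decision.
```

Two applicants with identical scores are treated identically under the uniform rule; under the adjusted one, one is approved and the other denied solely because of the group they belong to.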