


Despite the deluge of sensationalist headlines, it is unlikely you will die in an AI robot apocalypse. Death by a thousand e-papercuts, though, is not out of the question.
AI is not exactly new. If you define it simply as any heuristic algorithm that does a task normally (or previously) done by humans, AI dates back to at least the 1970s.
Classic examples include mail-sorting algorithms used by the post office. More recent ones with which you have been interacting for more than a decade include Google Maps and Amazon’s recommender system.
With this non-novelty in mind, a company boasting of its new AI-infused whatsit should seem far less impressive, AI itself should seem less scary, and calls to regulate it broadly should seem more clueless.
That said, it is also worth acknowledging there have been considerable advances in AI in the past five to 10 years, especially in generative AI, which has gotten a lot of attention for fairly impressive works of visual art and quick summaries of practically any topic. Neither of these can always be trusted for factual accuracy or political neutrality.
It’s also worth acknowledging there are dangers to AI, although not the ones that come quickest to mind. The real threat is probably not Skynet becoming sentient and deciding the best way to solve humanity’s problems is to eliminate all people.
Instead, it is the faith we place in AI, the often opaque metrics used to sell AI solutions, and a broad willingness to embrace AI to appear fashionable or tech-savvy, even when it comes at costs to privacy, liberty, or convenience.
Those who write AI programs think about whether a program “works” differently than most people outside that world do. A certain amount of error is expected.
Taking the example of a facial recognition program designed to identify criminals, programmers might calculate several metrics that take into account how many times the program correctly flags a target individual, correctly fails to flag a nontarget, and yields either false positives or false negatives.
Similar metrics might be used to assess the performance of programs developed to flag alleged “misinformation” or narrowly defined proxies for someone being distracted in school or while driving.
Yet how accurate is accurate enough to claim a program “works,” and whether “works” means it achieves some larger goal, are decisions made by businesspeople. Whether the consequences of a false positive are excusable is a philosophical question that may be overlooked entirely.
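To make those metrics concrete, here is a minimal sketch in Python using invented counts for a hypothetical screening scenario. The numbers are purely illustrative and not drawn from any real system; the point is only to show how a program can report an impressive overall accuracy while most of the people it actually flags are false positives.

```python
# A minimal, hypothetical sketch of the metrics described above.
# All counts are invented for illustration; they come from no real system.

# Imagine a facial recognition program screens 100,000 people,
# 100 of whom are genuine targets.
true_positives = 90      # targets correctly flagged
false_negatives = 10     # targets missed
false_positives = 999    # innocent people wrongly flagged (about 1% of non-targets)
true_negatives = 98_901  # non-targets correctly ignored

total = true_positives + false_negatives + false_positives + true_negatives

# Overall accuracy looks excellent...
accuracy = (true_positives + true_negatives) / total

# ...but most of the people the program flags are innocent.
precision = true_positives / (true_positives + false_positives)

# And the program still misses some genuine targets.
recall = true_positives / (true_positives + false_negatives)

print(f"Accuracy:  {accuracy:.3f}")   # ~0.990
print(f"Precision: {precision:.3f}")  # ~0.083, roughly 11 innocent people flagged per real target
print(f"Recall:    {recall:.3f}")     # 0.900
```

In this toy scenario, a program that is "nearly 99% accurate" still flags roughly 11 innocent people for every genuine target it catches, which is exactly the kind of gap between a marketing metric and a real-world consequence that gets decided in a conference room.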
Instead of circumspectly thinking through the implications of this, many fetishize anything a marketing team loosely labels as AI and scramble to relinquish their decision-making and agency to beautified versions of fallible algorithms few understand.
Hence, you probably won’t be having shootouts with Terminators anytime soon. But finding yourself pulled over at the side of the road because an accurate enough facial recognition program misidentified you as a criminal, or your own personal HAL deemed you too distracted to drive after your eyes darted away from your windshield too many times, doesn’t seem out of the question.
Daniel Nuccio is a Ph.D. student in biology and a regular contributor to the College Fix and the Brownstone Institute.