


It was said many years ago that the reason they don’t put legs on computers is that they’d walk off a cliff if you told them to. And yet, that wisdom now seems to have been forgotten amid projects like ChatGPT, DeepSeek, and the rest.
I suggest that one possible outcome of AI that isn't being considered is the specter of Landru, a machine that ruled a planet in the Star Trek universe. Landru seems a reasonable model for well-intentioned AI gone wrong. From Memory Alpha:
Circa 3733 BC, war threatened to destroy Beta III and its inhabitants. The leader at that time was a gifted engineer and philosopher, Landru. He believed the way to preserve his people was to take them back to a time of peace and tranquility. He sought to end war, crime, disease — all of the evils that plagued his world, and to produce "the unity of good" — a world without hate, without fear, without conflict. To that end, he built and programmed a sophisticated machine, which took on his identity and enforced a repressive peace on the planet's people, broken only by the annual "Festival", permitting 24 hours of violence and crime. The cybernetic Landru was deactivated by the crew of the USS Enterprise under command of Capt. James T. Kirk in 2267. (TOS: "The Return of the Archons")
The truth is, visions of utopia are invariably linked to repression, however that repression is generated: by humans (in that direction lies Socialism) or, as with Landru, by machines.
I am also reminded of some lines from Donald Fagen’s tune "I.G.Y.," a song that harks back to the 1957-58 International Geophysical Year and the ridiculous predictions that flowed from it. Among them:
A just machine to make big decisions
Programmed by fellows with compassion and vision
We’ll be clean when their work is done
We’ll be eternally free, yes, and eternally young
What could possibly go wrong?
See, here’s the problem: Humans make mistakes, and they all have biases. And who is programming AI? That’s right: fallible humans, with their own mistakes and their own biases. Even the ones in lab coats. The COVID-19 mess suggests that such fallibility is particularly prevalent among the lab coats, whom far too many people regard with a trust approaching what one would hold for a deity.
Even that is not as frightening in itself as the government getting involved. You and I both know the government will get into the act sooner rather than later, and at that point, we'll have a truly apocalyptic scenario on our hands, one that even Asimov couldn't have dreamed up.
So eventually, as AI takes hold, we’ll still be badly informed by those same fallible people, with the same biases and the same mistakes. The only difference is that we’ll be badly informed much faster, and AI's judgments will carry the power and finality of government backing. Ya know, somehow, I don't see that as an overall advantage. (Talk about controlling the narrative!)
Our Scott Pinsker points out that there are currently two front-runners in the field of AI: the United States and China. The risks of dealing with each in this context should be fairly obvious.
As Pinsker suggests, the Americans building and pitching their AI models have lost sight of the marketplace (their offerings are hugely expensive), and as a result, the Chinese are chewing them up. Take, as an example, the Chinese package DeepSeek. That platform collects your IP address, your keystroke patterns, your device info, and so on, and that data gets stored in China, where the Chinese government can easily dredge it up. DeepSeek isn’t unique in that regard, of course. So, all those Chinese AI apps you can download to your smartphone? Yeah, the Chinese government is collecting data on everyone who uses them.
And you thought TikTok was a security threat? It is, of course, along with a few other seemingly innocuous platforms. But AI brings the danger to a whole new level.
Look, don’t mistake me here. I have been something of a technology buff my entire life, and for many years, I made my living in one form of it or another. I worked in computer support for a bank. I’ve been a broadcaster. I’m a ham radio operator as well. I was running BBSs before the internet was a thing. I’m no Luddite, by any stretch.
That said, I view the push toward AI as a clear danger, and oddly, the biggest part of that danger is the people who don’t have a clue as to its nature: the very people who will be making government policy on AI and deciding the legal and ethical limits of the tech. (Government deciding what is ethical? Have we gone insane?)
Government bureaucrats are always a threat, particularly when, as is usually the case, they don’t understand what they’re attempting to regulate, and that's true even when technology isn’t involved. Given what we know of the habits and history of government without AI, can we trust the government to regulate its own use of AI, much less to regulate how the rest of us use it?
Just as importantly, our government has never taken the Chinese threat very seriously on any level, at least until very recently. Trusting either one on its own is not smart. Trusting both, which seems to be where we're headed, is downright suicidal.
More succinctly, can we trust the government to both use and regulate artificial intelligence, given its long-standing history of lacking real intelligence?
Our collective wisdom over the centuries is that mistakes are part of the learning process. Thus far in human history, we have learned that mistakes build self-confidence in proportion to how many you make, because you then know what not to do. The burned hand teaches best, and so on.
AI is completely different in that regard. We MUST be very cautious with this thing. There’s no do-over possible here. We will not be able to put the genie back in the bottle if we get it wrong, and the way we're going at the moment, I'd say getting it wrong is a sure bet.
Before it all goes wrong — (click) goes wrong — (click) goes wrong — (SLAP!) — use promo code POTUS47 to get 74% off your VIP membership.