



Those of us raised at a time when Superman comic books were all the rage might recall a story that, given today's technological advances, seems frighteningly realistic. That Superman adventure dealt with confronting robots that had taken over a world from their creators.
With the fast-paced development of Artificial Intelligence (AI), concerns are emerging that the technology is evolving faster than we are prepared to handle. The validity of one concern in particular is evidenced by a recent bipartisan effort: Rep. Ken Buck, R-Colo., and Rep. Ted Lieu, D-Calif., seek to eliminate the possibility that AI alone could launch a nuclear attack by requiring that a human decision-making element be involved as well.
The two congressmen have ample reason to be concerned.
Firstly, the pleas of more than 1,100 developers and industry experts who signed an open letter calling for a temporary moratorium on AI development have gone unheeded. Additionally, Stanford University issued a report indicating that one-third of the experts it surveyed warned AI could result in a “nuclear-level catastrophe.”
Secondly, after a widow in Belgium claimed an AI chatbot had persuaded her husband to commit suicide, a study published in Scientific Reports indicated that AI chatbots have become so advanced they may actually influence users’ choices about life and death.
Thirdly, history is on Buck and Lieu’s side.
These examples clearly demonstrate the need to build human decision-making into AI systems so the technology cannot operate entirely on its own.
Of course, we have witnessed instances of human behavior in the past that make us wonder whether even a human decision-making override provides an absolute fail-safe guarantee. While the U.S. today mandates that at least two people be involved in the actual launch process, conceivably even that is no guarantee of a fail-safe system.
Consider what we have seen over the past few decades in commercial air travel. While the industry has become progressively safer, one cause of death has stubbornly persisted: the intentional crashing of a commercial aircraft by a pilot intent on murder-suicide. The term “suicide by pilot” has been applied to several aviation crashes and listed as the most likely cause in at least six others.
This should give us ample reason to ensure that, even as we try to design a fail-safe AI trigger for nuclear weapons, the limitations of human controls receive equal consideration. After all, we also need to protect against a suicidal mindset in one or both nuclear launch operators.
Of course, given the above concerns, the question becomes: how do we construct the ultimate fail-safe override involving both an AI element and a human operator element?
Such a guarantee cannot rest on AI alone, because a truly independent AI does not exist today; the technology only responds to inputs provided by its coders. We have seen how slanted such input can be: ChatGPT, for example, has demonstrated a clear bias against conservatives, a liberal mindset that is a direct result of its programmers, whether intentional or not. Yet, by the same token, the fail-safe guarantee cannot rest on humans alone, given the potential for a suicidal mentality in one or both operators.
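To make the idea concrete, here is a minimal sketch, in Python, of what such a combined gate might look like in principle. Every name in it (AiAssessment, HumanOperator, authorize_launch, the confidence threshold, the two-confirmation requirement) is hypothetical and chosen purely for illustration; it does not describe any real command-and-control system. The point is simply that neither the automated assessment nor any single human can authorize the action by itself.

```python
# Purely illustrative sketch of a dual-consent authorization gate.
# All names and thresholds here are hypothetical assumptions for this
# example; nothing reflects an actual command-and-control system.

from dataclasses import dataclass


@dataclass
class AiAssessment:
    """Hypothetical output of an automated threat-evaluation component."""
    recommends_launch: bool
    confidence: float  # 0.0 to 1.0


@dataclass
class HumanOperator:
    """A human in the loop who must independently confirm any launch."""
    name: str
    confirms_launch: bool


def authorize_launch(assessment: AiAssessment,
                     operators: list[HumanOperator]) -> bool:
    """Authorize only if the automated check AND at least two independent
    human operators all agree. The AI alone can never authorize, and no
    single human can authorize alone either."""
    if not assessment.recommends_launch or assessment.confidence < 0.99:
        return False  # the automated assessment acts as one veto
    confirmations = sum(1 for op in operators if op.confirms_launch)
    return confirmations >= 2  # the two-person rule: no lone actor


if __name__ == "__main__":
    ai = AiAssessment(recommends_launch=True, confidence=0.995)
    crew = [HumanOperator("Operator A", confirms_launch=True),
            HumanOperator("Operator B", confirms_launch=False)]
    # One dissenting human blocks the action, illustrating the point that
    # neither the AI nor any single person should suffice on its own.
    print(authorize_launch(ai, crew))  # -> False
```

Even in this toy form, the design choice is the one the congressmen are after: the AI's output is advisory, and the two-person rule remains the final word.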
If a truly independent-thinking AI can ever be developed, perhaps only at that stage of evolution will the technology exist to assure us of a fail-safe guarantee. Until then, as history has shown us, we can only cross our fingers and hope for the best.
This article was originally published by the WND News Center.