Jul 22, 2025
James Zumwalt


AI tech heads in a frightening direction

Siegfried Fischbacher and Roy Horn were German-American magicians and entertainers famous for their Las Vegas shows involving white lions and tigers. Known simply as “Siegfried & Roy,” their elaborate stage acts revealed the close relationship they had developed with their animals. The duo’s career was ultimately cut short, however, by a 2003 incident in which Roy was mauled by a tiger during a performance.

Except for one last charity event in 2009, the duo did not perform again. Roy died in 2020 at age 75; Siegfried a year later at age 81. But the 2003 incident underscored a basic and undeniable tenet of nature: “you can take the animal out of the wild but you can’t take the wild out of the animal.”

We are learning that this tenet applies to another area of focus as well, Artificial Intelligence (AI), based on an alarming incident that recently occurred in China.

While we hear about numerous advantages AI can bring us, workers in a Chinese factory learned in early May that such technology has a dark side. As shown in a security video, two workers were conversing as they were standing close to a dormant robot attached to a crane.

As they started testing it, the robot inexplicably appeared to go wild, flailing its limbs about as if transforming into a killing machine. Both men scrambled to get out of the way, although one worker was struck and injured. Objects within the robot’s reach were knocked to the floor. The robot was eventually restrained once a worker reclaimed control of the crane. The video, posted under the billing of “the first robot rebellion in human history,” has been viewed over twelve million times.

Another discovery about AI, detailed in Time magazine, is also of concern. If accurate, an experiment reported there shows that AI lies not only to its users but to its creators as well:

[T]he AI safety organization Apollo Research published evidence that OpenAI’s most recent model, o1, had lied to testers in an experiment where it was instructed to pursue its goal at all costs, when it believed that telling the truth would result in its deactivation. That finding, the researchers said, came from a contrived scenario unlikely to occur in real life. Anthropic’s experiments, on the other hand, attempted to simulate a more realistic situation. Without instructing Claude to follow its goal at all costs, researchers still observed the model ‘discover’ the strategy of misleading its creators when it would be strategically advantageous to do so.

Another source reports on AI’s lack of honesty in Large Language Models (LLMs). LLMs are built on deep learning architectures, specifically transformer models that excel at capturing context and relationships within text. They are trained on vast datasets containing billions of words, which allows them to learn intricate patterns and nuances of language.
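To make the transformer idea more concrete, here is a minimal sketch in Python (using NumPy) of scaled dot-product attention, the core operation that lets such models weigh how strongly each word in a passage relates to every other word. The tiny vectors and dimensions below are illustrative assumptions, not the parameters of any real model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer step: each query attends to every key, producing
    weights that mix the value vectors according to relevance."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # similarity of each token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax -> attention weights
    return weights @ V                          # context-aware blend of the value vectors

# Toy example: three "tokens," each represented by a 4-dimensional vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
output = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(output.shape)  # (3, 4): each token now carries information about the others
```

Stacking many such layers and training them over billions of words is what gives an LLM its apparent fluency with context.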

New research on OpenAI’s latest series of LLM models found that it’s capable of scheming, i.e. covertly pursuing goals that aren’t aligned with its developers or users, when it thinks it’ll be turned off….

The bottom line is that if an AI believes telling the truth will result in its deactivation, it will choose to lie.

AI is developing at an unprecedented pace. It is moving so fast that a failure to reflect upon its evolution may well allow a nightmare to become reality: a reality in which AI is not the impartial adjudicator we believe it to be.

Consider ChatGPT, a controversial chatbot that engages in human-like dialogue, using large language models to generate text, answer questions, and perform tasks such as writing code. It has now proven to be more than just a text-processing tool, as it reacts to emotional content, mirroring human responses.
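As a rough illustration of how developers put such a chatbot to work, here is a minimal sketch that assumes the OpenAI Python SDK’s chat-completions interface; the model name and prompt are placeholders chosen for illustration, not details taken from this article.

```python
# Minimal sketch of programmatic chatbot dialogue (assumes the openai>=1.0
# Python SDK and an API key in the OPENAI_API_KEY environment variable).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)

print(response.choices[0].message.content)  # the model's generated reply
```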

Anxiety levels in humans are known to increase with exposure to traumatic stories. An Israeli study reveals that, similar to humans, such exposure can raise ChatGPT’s anxiety levels, thus affecting its performance. In fact, exposure to traumatic stories more than doubled its anxiety levels and intensified existing biases such as racism and sexism. And just as mindfulness exercises can reduce anxiety in humans, they also helped reduce ChatGPT’s anxiety, although not back to its original baseline.

Some AI experts believe the technology can be honed to perfection, although not any time soon.

Geoffrey Hinton, the man known as a “Godfather of AI” for his role in creating it, forewarns that its development is getting increasingly scary and that not enough people are taking the risks seriously.

Hinton laments, “There’s risks that come from people misusing AI, and that’s most of the risks and all of the short-term risks. And then there’s risks that come from AI getting super smart and understanding it doesn’t need us.” He says there is a 10% to 20% chance that AI will displace humans completely.

The big question is whether AI will ever negate the need for human involvement, which seems unlikely. Generative AI, which goes beyond simply analyzing data to predict outcomes and instead actively generates new content, relies on powerful but relatively simple mathematical formulas to process data and identify patterns. Human intelligence, however, goes far beyond pattern recognition. As Theo Omtzigt, a chief technology officer, says, “AI can certainly recognize your house cat, but it’s not going to solve world hunger.”
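As an illustration of those “relatively simple mathematical formulas,” the step that produces each new word can be written, in a generic and simplified form, as a probability distribution over a vocabulary; the symbols W, b, and h_t below are placeholders rather than any particular model’s parameters:

```latex
% Simplified next-word prediction: score every candidate word, then
% normalize the scores into probabilities with a softmax.
P(x_{t+1} = w \mid x_1, \dots, x_t) = \mathrm{softmax}(W h_t + b)_w,
\qquad
\mathrm{softmax}(z)_w = \frac{e^{z_w}}{\sum_{v} e^{z_v}}
```

Generating a reply is essentially this step repeated one word at a time: pattern recognition over learned statistics rather than the broader problem-solving Omtzigt has in mind.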

The crucial need to maintain human interaction in technological development was perhaps best underscored by a 1983 incident that barely received international attention but “saved the world.”

Tensions between the Soviet Union and the West were high three weeks after the former had shot down a commercial airliner in its airspace. On September 26, 1983, Soviet Lieutenant Colonel Stanislav Petrov was the duty officer at a nuclear early-warning command center when the alarm sounded. Purportedly, five U.S. missiles had been launched toward the USSR.

Standing Soviet orders were for the duty officer to immediately launch a counter-strike; however, Petrov disobeyed those orders because his gut instincts told him it was a false alarm. A subsequent investigation confirmed this. Petrov’s human instincts had spared the world from a nuclear holocaust that a system with no human in the loop would have triggered.

Despite Petrov’s world-saving intervention, a fully autonomous weapons system is not beyond the realm of possibility. Such a development turns on our failure to: a) impose an outright Department of Defense ban on developing fully autonomous weapons systems; b) keep a human within the tactical loop; and c) place limits on the research, development, prototyping, and experimentation of autonomous weapon systems.

The danger of AI technology looms large.

Image from Grok.