National Review
2 Nov 2023
Andrew Stuttaford


The Corner: AI: There’s an A and an I in ‘Panic’

The evolution of AI is clearly going to come with some risks, perhaps considerable risks (as well, of course, as opportunities), but a greater danger may well come from heavy-handed state intervention, even more so when fueled by some sort of panic.

And so check out the British prime minister’s AI summit, which is currently underway. Upping the usual excuse for bad policy — “for the children” — Sunak is claiming that “we owe it to our grandchildren to take urgent action on the risks posed by artificial intelligence.” Note the call, not just for action, but urgent action — another sign that heavy-handed state intervention is either on the way or — looking in the direction of the White House — is already upon us.

One of Sunak’s concerns is the weaponization of AI, a not unreasonable concern, but despite that, he has invited China to this gathering. The Beijing regime is one of the signatories to Sunak’s fatuous Bletchley Declaration on the development of AI. The declaration is not legally binding, but China’s involvement will be designed to achieve two main objectives. The first will be to derive as much intelligence as it can from its Bletchley “partners.” The second will be to encourage all those participating in this project to slow down or hamstring AI R&D in their countries, while China proceeds as quickly as it can with any development that Beijing may find militarily, commercially, or technologically useful, regardless of what it has promised, and regardless of how dangerous it might be.

And so far as the current AI panic is concerned, it’s well worth reading some smart comments from James Pethokoukis.

Under the heading “Oh, no: Biden is learning about AI from movies. We’re in deep trouble,” Pethokoukis cites this AP story:

President Biden was profoundly curious about [artificial intelligence] in the months of meetings that led up to drafting the executive order. His science advisory council focused on AI at two meetings and his Cabinet discussed it at two meetings. The president also pressed tech executives and civil society advocates about the technology’s capabilities at multiple gatherings. . . . The issue of AI was seemingly inescapable for Biden. At Camp David one weekend, he relaxed by watching the Tom Cruise film “Mission: Impossible — Dead Reckoning Part One.” The film’s villain is a sentient and rogue AI known as “the Entity” that sinks a submarine and kills its crew in the movie’s opening minutes. “If he hadn’t already been concerned about what could go wrong with AI before that movie, he saw plenty more to worry about,” said [White House aide Bruce] Reed, who watched the film with the president.

As Pethokoukis points out, Biden is not the only president to have had his views shaped by a movie, but still . . .

Pethokoukis:

In the case of President Biden, the (already) hoary plot device of a rogue AI may well have crystallized his growing concern about the emerging technology, concerns manifested in his 111-page “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.” It’s a directive that I described in the previous issue of this newsletter as premature and both excessively broad and excessively detailed, risking harmful unintended consequences. The better-safe-than-sorry Precautionary Principle in action, again.

(Look, given the nascent state of AI technology and limited understanding of potential risks, lawmakers and regulators should exercise humility and restraint. I worry that such a rushed, expansive regulatory framework could stifle beneficial innovation, favor entrenched tech giants, and cement their dominance. Better that policymakers take a cautious, tailored approach focused on studying AI impacts and establishing safeguards against specific demonstrated harms.)

But the Biden White House and many in Congress, both Ds and Rs, see things differently. Then again, everyone in the White House and on Capitol Hill has been soaking their entire lives in a popular culture that overwhelmingly presents dystopian images of the future and depicts technology as only enabling humanity’s worst impulses . . .

Pethokoukis is not wrong. Ominously, and interestingly (at least to me), this dystopian vision has also contributed to the rise of the “degrowth” phenomenon that I wrote about here. But that is a discussion for another time.