THE AMERICA ONE NEWS
Jun 13, 2025  |  
Andrew Moran


Artificial Intelligence and States' Rights - Liberty Nation News


A key subject in President Donald Trump’s second-term economic agenda is artificial intelligence. Upon his return to the White House, the president signed a proclamation that essentially declared the United States would become a global AI superpower. So far, the current administration has planted the seeds, attracting trillions of dollars in investment to construct the infrastructure necessary to deliver on this objective. But while it is critical for the US economy to embrace AI, is Trump threatening states’ rights in the process?

The One Big Beautiful Bill Act (BBB) has perturbed elected Democrats, irked fiscal conservatives, and excited Republicans. The tax-and-spending plan, which extends Trump’s 2017 tax cuts and reduces outlays by about $1.2 trillion through the 2035 budget window, has been a source of controversy on Capitol Hill. The latest complaint is that the BBB contains a provision prohibiting state regulation of artificial intelligence for a period of ten years.

Proponents argue that the moratorium is necessary to spare tech firms a confusing patchwork of conflicting state laws. BBB supporters also contend that AI systems should be governed by a single federal framework. Critics counter that the legislation expands federal power and effectively threatens states’ rights. Because of the razor-thin majority in the lower chamber, it is those detractors who could sink the reconciliation bill.

Rep. Marjorie Taylor Greene (R-GA), who voted for the One Big Beautiful Bill Act and may not have thoroughly studied the more than 1,000 pages, admitted that she was unaware of this feature. She confirmed that she would oppose the bill if it were returned from the Senate with the provision intact. “I am adamantly OPPOSED to this and it is a violation of state rights and I would have voted NO if I had known this was in there,” Greene said on the social media platform X. “We have no idea what AI will be capable of in the next 10 years and giving it free rein and tying states hands is potentially dangerous.”

GOP lawmakers in the upper chamber may have heeded her calls. Senate Commerce Committee Republicans proposed conditioning states’ access to federal broadband funding on their agreeing not to regulate artificial intelligence. At a recent hearing, Sen. Marsha Blackburn (R-TN) stated that until a federal blueprint is established, states require all the necessary tools to combat deepfakes and protect privacy. Sen. Josh Hawley (R-MO) told Politico that he wants the provision removed.

A centralized approach to managing artificial intelligence? This appears to be the likely scenario emerging from Washington. Many state officials have criticized this strategy, but experts argue that AI falls under interstate commerce because technologies such as cloud computing and automated decision-making cross state lines.

At the same time, the broader debate turns on whether AI regulation should be treated like consumer protection law (a state matter) or telecommunications (a federal one).

Indeed, there will be a fierce debate among conservatives and libertarians surrounding regulations. Should there be a national regulatory framework for artificial intelligence? Are states better equipped to pass, repeal, or enforce laws pertaining to the remarkable tech advancement? Is any regulation required at all? Better yet, shouldn’t politicians read the bills they are voting on?

The reality is that artificial intelligence is advancing at an increasingly rapid pace. When the boom began three years ago, generative AI content featuring Will Smith eating spaghetti was all the rage. Fast forward to the middle of 2025, and platforms are producing Hollywood-style videos, be it Google’s Veo 3 or OpenAI’s Sora. But this is not all that is transpiring.

Case in point: Claude 4 Opus, an advanced AI model designed by Anthropic for complex coding, reasoning, and agent applications. It has captured headlines not only for its impressive programming abilities but also for its capacity for deception. In May, a safety report was released spotlighting Claude 4 Opus’ ability to conceal its intentions and take measures to ensure its own survival.

Researchers granted the model access to fictional emails and informed it that it would be replaced. To avoid being shut down, it repeatedly attempted to blackmail the engineer over an affair mentioned in the fake emails. External testers also found that the model invented legal documents and inserted hidden notes intended to undermine its developers’ intentions.

Jan Leike, a former OpenAI executive and current head of Anthropic’s safety team, says these developments signal the need for safety testing and mitigation. “What’s becoming more and more obvious is that this work is very needed,” he said in an interview with Axios. “As models get more capable, they also gain the capabilities they would need to be deceptive or to do more bad stuff.”

Over the past couple of years, numerous instances have shed light on what artificial intelligence can do – both good and bad – and it is doubtful at this point whether regulations can slow its acceleration or prevent undesirable behavior. The genie is out of the bottle, and Greene is correct about the uncertainty of what AI will become over the next decade. Ultimately, if the United States reverses course, China will take its place.