


An AI provision tucked into the pending One Big Beautiful Bill reveals deeper tensions in the Republican Party — and represents a potential trap for a center-right political realignment.
In May, the House passed a provision that would impose a ten-year moratorium on any state or local regulation of artificial intelligence. The version now pending before the Senate takes a slightly different approach: the Senate bill originally conditioned billions in federal broadband funding on states not enforcing artificial-intelligence regulations. To clear the Senate parliamentarian for the reconciliation process, a revised version of the proposal brands this moratorium a “temporary pause” and appears to target a smaller pot of money. According to a statement from Commerce Chair Ted Cruz, “As a condition of receiving a portion of a new $500 million federal investment to deploy AI, states that voluntarily seek these funds must agree to temporarily pause AI regulations and use the funding in a cost-efficient manner.” The “temporary pause” under this revised proposal would still run for a decade.
This proposed moratorium has highlighted a divide between populists and some more business-friendly voices. Libertarian-leaning advocates have championed the measure, pointing to Congress’s 1998 moratorium on state taxation of internet commerce as a model. Conversely, Missouri Senator Josh Hawley has been a longtime critic of a moratorium on local regulations of AI, and Wisconsin’s Ron Johnson has also declared himself a foe of it. In the House, Marjorie Taylor Greene (among others) has come out against it. Summarizing conservative criticisms of this regulatory moratorium, Michael Toscano and Jared Hayden of the Institute for Family Studies warn that it “would unbridle Big Tech’s power.”
Defenders of shutting down state and local regulation say that a nationally uniform AI policy is needed for the U.S. to remain at the forefront of that technology. And they’re right that it is a national security imperative for the United States to stay a leader in that field. AI has tremendous implications for the military conflicts, manufacturing, and data processing of the future.
However, at its deepest level, tech policy has to attend to more than maximal technological power; it also needs to ensure that this technological might can be disciplined for the sake of human flourishing. The very power of artificial intelligence means that its risks are not insignificant, and one of the essential insights of the conservative tradition is the need to temper disruption and preserve essential elements of the social compact.
Seen in that light, a decade-long moratorium on all state and local regulation of AI would be far too sweeping. It would throttle the ability of state and local lawmakers to experiment with “right-sized” regulations for artificial intelligence and would instead shift all AI policy debates to Washington, which is optimized for deadlock.
Federalism, however, is well suited to addressing a new policy frontier. States have taken a leading role in taming some of the excesses of the digital era. For instance, a number of states have passed laws (often with bipartisan support) trying to limit minors’ access to digital pornography. Increasingly, states are also working to ban or limit cellphones in public schools.
States have also begun to enter the AI policy arena. A leader in this area, Utah has recently mandated that companies disclose in certain contexts whether consumers are interacting with AI and has sketched out a framework for regulating mental-health chatbots. The sheer pervasiveness of AI means that it will touch on areas essential to human flourishing, especially concerning children and the family. For instance, should public schools restrict the use of AI in the classroom as well as regulate what can be done with the data gathered from children using AI programs? The rise of chatbot “friends” could have long-term effects on the socialization and mental health of teenagers, which might prompt policymakers to support age-verification protocols or other limits on these chatbots. A ten-year-old spending hours alone with a chatbot — rather than reading, playing, or being with friends and loved ones — is anathema to pro-family politics.
During a time of breakneck technological change, a decade-long moratorium on local AI regulation would shut down all that policy innovation. It would gut Utah’s AI law as well as any other AI regulations passed by states. ChatGPT was released barely two and a half years ago; a decade-long moratorium would hold state and local policymakers hostage for four times as long. Absent some congressional action on AI guidelines, that moratorium on state and local regulation would essentially function as a federally imposed policy vacuum. For some proponents of the bill, those handcuffs on local elected leaders might be a feature rather than a bug, but this policy vacuum could in effect empower technocrats, whether those at the commanding heights of Silicon Valley or in the administrative agencies of the Beltway.
Many populists have fretted about the power of tech behemoths, so it’s not surprising to see Hawley and like-minded lawmakers come out against an AI moratorium that empowers those companies at the expense of local governments. In light of these pressures, the regulatory moratorium could be further watered down, if not stripped from the bill entirely.
Conceivably, some federal balance could be struck that allows for cutting-edge research into artificial-intelligence engines while also giving local policymakers a space to adapt to some of the consequences of AI. But a blanket ten-year moratorium would not strike that balance. It would impose paralysis when flexibility is needed. Amid the AI revolution, policymakers need to ensure that artificial intelligence is sufficiently attuned to the human, and a top-down regulatory moratorium could eliminate critical tools for that essential task.