


U.S. lawmakers are close to taking drastic and ill-advised measures in the name of competitiveness and innovation.
The Senate is currently considering a reconciliation bill that includes a provision that would ban state-level AI regulation for 10 years. Proponents’ objective is to speed up the United States’ AI development and help American AI technology get ahead, and stay ahead, of the rest of the world.
However, if Congress imposes this moratorium, it will effectively shoot itself in the foot, stifling U.S. AI innovation and even endangering national security. The reality is that AI governance infrastructure is necessary for both objectives, and states are well-equipped to contribute to that infrastructure; indeed, they are already building it.
The ban has been hotly debated in a variety of forums. To be sure, some argue that AI regulation should be significantly limited to avoid harming innovation, and that in the rare cases where regulation is needed, the federal government, rather than state governments, is the right actor to impose it, since AI governance will affect U.S. national security.
Some may criticize the emerging patchwork of state laws, which can complicate compliance for firms that operate across state lines and leave gaps in protections for state residents.
However, congressional gridlock and partisanship, and Congress’s failure to pass meaningful tech regulation for over a decade, mean that state laws are necessary. States are more strongly incentivized to address their constituents’ concerns about AI, and they face fewer partisan barriers to implementing policy; take, for example, state laws protecting Americans’ privacy and digital rights.
While these debates are worth having, they miss a key reason why this moratorium is a bad idea: State governments are essential for building the United States’ AI governance infrastructure. What’s more, this infrastructure is necessary to achieve the moratorium’s stated policy goals of enhancing AI innovation and promoting national security.
AI governance goes beyond placing concrete demands on AI developers and deployers. It includes strengthening workforce capacity, sharing information and gathering evidence about emerging risks, and building shared resources to facilitate AI experimentation. These activities enhance developers’ abilities to build performant systems, engender consumer trust and preserve U.S. national security interests.
For example, a robust system of third-party auditors can help AI companies detect emergent security risks, prevent AI failures and streamline internal testing. Similarly, strong information sharing between developers, users and government actors can aid in rapidly triaging and responding to AI risks and harms.
This frees up time for developers to continue innovating and builds consumer trust, making it more likely that people adopt the technology and that developers accelerate their learning through real-world deployment. Information sharing between government and industry also helps policymakers prioritize the most pressing national security risks and ensure that U.S. models maintain an edge over their competitors.
Finally, shared resources such as computing clusters can help academic and research communities conduct security research that developers may overlook and enable smaller developers to contribute to the innovation ecosystem.
The proposed moratorium could very well foreclose progress in these key areas: current state laws and bills that touch on these governance activities could cease to be in force, and future ones could be preempted, if the moratorium passes. This is a problem because states are currently well-positioned, indeed better positioned than the federal government, to support AI governance infrastructure that enables innovation and enhances U.S. national security.
States are already working on components of AI governance infrastructure, including talent, computing infrastructure and regulatory frameworks. Nearly every state has registered AI apprenticeship programs and related skills-based training to ensure that the United States has a workforce capable of building, testing and overseeing AI systems.
States have also taken the initiative to build out hardware infrastructure; New York Gov. Kathy Hochul (D) recently announced a consortium to launch an AI computing center in the state to promote research and development and create job opportunities.
Finally, AI bills are already in the works in several state legislatures, and in some cases have already passed into state law.
State-level governance is also advantageous for the U.S. because it serves as a sandbox for experimentation and debate, allowing for innovation in governance approaches.
Individual states can experiment with policies that may eventually be taken up by other states, much as California’s environmental laws became a model nationwide. This will be particularly important for AI, which intersects with many other policy and regulatory concerns.
State lawmakers can coordinate their approaches, learn from each other’s experiences and try to build consensus to minimize fragmentation across state lines. Organizations and networks already exist to bring government employees together to share ideas, and policymakers have coordinated to reduce fragmentation on earlier issues such as consumer data privacy.
By imposing this moratorium, Congress would act in direct contravention of its stated objectives of supporting U.S. innovation and national security. Effective AI governance, including state-driven governance, is necessary for a maximally innovative and secure AI ecosystem. As such, Congress should remove this moratorium from the bill.
Jessica Ji is a research analyst at Georgetown University’s Center for Security and Emerging Technology (CSET), where she works on the CyberAI Project. Vikram Venkatram is a research analyst at CSET. Mina Narayanan is a research analyst at CSET.