AI is being implemented in government. How does that work legislatively?

By David Zimmermann
Aug 16, 2025

As government entities increasingly use artificial intelligence to expedite research or streamline regulations, lawmakers are finding ways to put guardrails on the fast-growing technology.

The legislative landscape surrounding AI use in government is rapidly evolving. While AI-related legislation tends to stall in Congress, states are passing their own measures to regulate the public sector’s integration of the technology.


Texas recently enacted a law designed, in part, to regulate state and local government use of AI systems, prohibiting AI from implementing “social scoring” systems to disburse or deny certain benefits and from capturing individuals’ biometric data. The goal of such legislation is to promote ethical and responsible use of AI and to safeguard principles such as fairness and transparency.

In fact, much of the public-sector AI legislation introduced by states thus far aims to ensure transparency and identify risks associated with AI deployment.

Texas is currently using AI to aid law enforcement. The Department of Public Safety uses an AI-powered surveillance tool to gather information from social media and the dark web. A North Texas law enforcement agency also became the first in the Lone Star State to employ an AI assistant to help draft police reports, a practice that appears to be a growing trend.

The new law, called the Texas Responsible Artificial Intelligence Governance Act, requires state agencies to publicly disclose how they integrate AI in their operations. It takes effect on Jan. 1, 2026.

As some states pass measures to regulate how government entities use the technology, AI has proven to be a helpful tool in the legislative drafting process.

AI can assess the economic impact of proposed legislation on different sectors. Lawmakers can then make the AI-generated impact assessment part of the official legislative process by passing a bill, adopting the specific use in the legislative rules, or letting legislative offices incorporate it themselves, a Capitol Hill staffer told the Washington Examiner.

Colin Raby, one of the first congressional AI specialists, noted most AI adoption happens through the third method. “It’s a workflow upgrade, not a statutory mandate,” he said.

“However, if lawmakers wanted to formalize its use, the bill or rule would need to specify which analyses AI must perform, the transparency requirements, and how results are reviewed by humans before being relied on,” he added.

He said that when AI is assessing proposed legislation, legislatures are still responsible for amending and acting on the impact assessment.

Virginia was set to regulate the development, deployment, and use of “high-risk” AI systems until Gov. Glenn Youngkin (R-VA) vetoed that bill in March. If it had been signed into law, the legislation would have been similar to Colorado’s law enacted last year.

The Virginia bill was vetoed over concerns that it would have placed undue regulatory burden on the state’s economy, particularly affecting small businesses that may not have had the resources to comply with the legislation. It would also have stifled AI innovation in the state, critics argued.

Rather than approve AI legislation it views as unnecessarily restrictive, Virginia’s executive branch is accelerating the integration of AI into its internal processes.

Youngkin signed an executive order last month authorizing state agencies to launch an “agentic AI” pilot program aimed at shrinking Virginia’s regulatory code. The program is intended to flag contradictions in statutes, identify redundancies, and suggest updates to regulatory language.

Virginia is the first state to use agentic AI to streamline its regulations.

Agentic AI differs from generative AI in that it takes action on behalf of human users with far less prompting. Many staffers in legislative offices and government agencies already use generative AI tools, such as ChatGPT, to analyze data and perform other automated tasks.
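In rough terms, the distinction can be pictured in code: a generative tool answers one prompt and stops, while an agentic system loops, choosing its own next step until it judges the task complete. The Python sketch below is a simplified illustration only, not any state’s actual system; ask_model and its canned replies are hypothetical stand-ins for a real model API.

```python
# A simplified contrast between generative and agentic AI use.
# `ask_model` is a hypothetical stand-in for a call to a language model.

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; a real system would query a model API here."""
    # Canned responses so the sketch runs without a model.
    if "next step" in prompt:
        return "DONE"
    return "Summary of the requested text."

# Generative use: one prompt in, one answer out. A human drives every step.
def generative_summary(regulation_text: str) -> str:
    return ask_model(f"Summarize this regulation:\n{regulation_text}")

# Agentic use: the system loops, choosing its own next action until it
# decides the task is finished. Humans review the log afterward.
def agentic_review(regulation_text: str, max_steps: int = 5) -> list[str]:
    actions_taken = []
    for _ in range(max_steps):
        decision = ask_model(
            f"Given this regulation and actions so far {actions_taken}, "
            f"what is the next step? Reply DONE when finished:\n{regulation_text}"
        )
        if decision == "DONE":
            break
        actions_taken.append(decision)
    return actions_taken  # reviewed by a human before anything is acted on

if __name__ == "__main__":
    print(generative_summary("Section 1. Example regulation text."))
    print(agentic_review("Section 1. Example regulation text."))
```

Even in this toy version, the agent returns a log of proposed actions for human review rather than executing them itself, mirroring the oversight built into Virginia’s pilot.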

Youngkin’s authorization of agentic AI, along with his veto, appears to align with the Trump administration’s approach to AI adoption. The White House recently unveiled its AI Action Plan, in which the administration argues that states shouldn’t overregulate technological innovation.

“The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations that waste these funds, but should also not interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation,” the plan states.

The Senate previously sought to place a 10-year moratorium on state and local AI legislation so that the Trump administration’s agenda to compete with China in the AI race wouldn’t be impeded by a patchwork of laws across 50 states. The proposal was later reduced to five years before being stripped from the GOP’s One Big Beautiful Bill Act amid bipartisan resistance, but Congress will likely consider an AI moratorium again.

Sen. Ted Cruz (R-TX), who chairs the Senate Commerce Committee, previously vowed to pursue a standalone moratorium if the proposal didn’t become part of the GOP megabill. Rep. Brett Guthrie (R-KY), who leads a similar committee in the House, also promised to keep trying.

“We’re still gonna work it, and hopefully we’re gonna have to have state preemption in the end,” Guthrie said.

It remains to be seen how the Republican-dominated Congress tackles state preemption, but for now, state legislatures are free to legislate AI as they see fit.

While there are some concerns about the lack of human supervision over agentic AI, any decisions the pilot program’s AI makes must still be approved by the Virginia agencies overseeing it, so it is not fully autonomous.

From what he can tell about Youngkin’s executive order, Raby said the AI program is not “going to go ahead and change the order” or “file a motion” without human input.

Raby explained that agentic AI, in the context of how Virginia is using it, is analogous to a “research assistant” that can read long regulations and flag areas of interest efficiently.

It may even help catch human errors. Raby gave an example in which he used AI to cross-reference a pesticide label against state research and identify regulatory conflicts. The state acknowledged a mistake in its research, one it wouldn’t have known about had Raby not reached out.

“Mistakes happen when you have a giant guidance book or when you have these giant reports,” he said, adding that AI systems can help find the “needle in a haystack” where regulations conflict. “That’s a small example in the world of agricultural regulation, where among different organizations and different sets of guidance, there are conflicts that come up.”
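As a simplified picture of that kind of cross-referencing, the hypothetical Python sketch below pulls numeric limits out of two documents and flags the pairs that disagree. The document contents and the regex-based matching are illustrative assumptions; a real system would lean on a language model rather than pattern matching.

```python
import re

# Hypothetical illustration of "needle in a haystack" conflict-finding:
# extract numeric limits from two guidance texts and flag disagreements.

LIMIT_PATTERN = re.compile(r"(?P<substance>\w+) limit: (?P<value>[\d.]+) ppm")

def extract_limits(text: str) -> dict[str, float]:
    """Map each substance named in the text to its stated limit in ppm."""
    return {
        m.group("substance"): float(m.group("value"))
        for m in LIMIT_PATTERN.finditer(text)
    }

def find_conflicts(doc_a: str, doc_b: str) -> list[str]:
    """Report substances whose stated limits differ between documents."""
    limits_a, limits_b = extract_limits(doc_a), extract_limits(doc_b)
    return [
        f"{name}: {limits_a[name]} ppm vs {limits_b[name]} ppm"
        for name in limits_a.keys() & limits_b.keys()
        if limits_a[name] != limits_b[name]
    ]

if __name__ == "__main__":
    label = "atrazine limit: 3.0 ppm\nglyphosate limit: 0.7 ppm"
    state_guidance = "atrazine limit: 3.0 ppm\nglyphosate limit: 0.5 ppm"
    for conflict in find_conflicts(label, state_guidance):
        print("Possible conflict ->", conflict)  # flagged for human review
```

As in Raby’s account, anything the tool flags is a candidate for human follow-up, not an automatic correction.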

When AI performs a given task in public service, human oversight can help ensure the technology operates ethically by mitigating risks such as algorithmic bias or discrimination. This supervision is typically conducted by a chief AI officer or an AI governance board.

Laura Caroli of the Center for Strategic and International Studies emphasized that legislative actions taken by AI are still subject to human review and oversight, thus ensuring accountability and preserving democratic checks and balances in policymaking processes.

Caroli noted that although lawmakers don’t necessarily legislate the government’s use of AI for automating certain processes or slashing regulations, other AI functions that verge on the unethical are already being targeted at the state level and could one day form part of a federal bill.

“What could be regulated one day [beyond the state level] is when an AI system is used by the government to decide whether or not you obtain legal migration status, whether or not you go to prison, or whether or not you have access to public housing,” she said.

Experienced in AI policy, Caroli served as a senior policy adviser at the European Parliament for 10 years.

“These are uses that are normally targeted in regulating AI,” she added. “I don’t see that happening in the U.S. anytime soon.”