Jon Husted's AI age verification bill is dangerous for children

David McGarry

In the formulation of Sir William Blackstone, the legal colossus of 18th-century England and 19th-century America, a judge investigating the meaning of a statute must begin by undertaking to understand the status quo ante. One of the judge’s first tasks, according to Blackstone, is to discover what deficiency in the law the legislator had sought to rectify.

This maxim of traditional Anglo-American jurisprudence points to a prior inquiry: the legislator’s own investigation of a perceived problem in his search for a solution. That necessary investigation, however, is often forgone in fits of regulatory fervor and do-gooderism — as the Children Harmed by AI Technology Act, introduced recently by Sen. Jon Husted (R-OH), demonstrates.

It is implicit that the legislator bears a duty to endeavor to understand the illness before prescribing a remedy. In the modern day — just as American jurisprudence has morphed — legislative efforts are sometimes marked by an absence of due diligence and precision. Nowhere has this failure manifested more prominently than in technology policy.

The CHAT Act, purportedly a measure to protect children from dangers stemming from artificial intelligence chatbots, would require regulated AI products to verify the ages of their users — and, for underage users, to secure verifiable parental consent.

From the first page to the last, the CHAT Act is predicated on a flawed understanding of the technologies it proposes to regulate. The bill displays no regard for the dangers attendant on enforced age verification, which exposes the personal information of those who submit to it — adults and children alike — to hacks, data breaches, and other cybersecurity incidents. Even the French, who enacted an age verification mandate with safeguards intended to mitigate these privacy dangers, recently discovered that such dangers persist, according to a just-published report from auditors at AI Forensics.

American children already suffer startlingly high rates of identity theft. The R Street Institute reports that “research by Experian suggests that 25% of children will be victims of identity fraud or theft by the time they are 18.” Having to submit government-issued identification documents or to undergo facial scans merely to access everyday AI-integrated services will not protect, but rather endanger, the very minors whose safety the CHAT Act seeks to secure. These dangers will only multiply given the bill’s requirement of parental consent, a process that would necessitate the collection of still more extensive data to prove the parent’s identity and legal relationship to the child.

The CHAT Act’s definition of a “companion AI chatbot” further indicates that the bill’s drafters might not have grasped their subject. It defines such a system as “any software-based artificial intelligence system or program that exists for the primary purpose of simulating interpersonal or emotional interaction, friendship, companionship, or therapeutic communication with a user.”

The breadth of this definition is quite striking. The bill overlooks a simple — and critical — fact: the distinguishing feature (and the primary appeal) of generative AI tools is their ability to simulate interpersonal interactions between technology and user. 

For example, a Google search requires the user to employ a stilted and unnatural cadence of search terms, and it returns a series of links (leaving aside, for the moment, the AI features now incorporated into search results). ChatGPT, by contrast, frees the user to query in a far more natural fashion, and the model synthesizes and returns information the way a human would. The former is information retrieval; the latter, a conversation — or, rather, a simulation of a conversation. Language itself — and certainly conversation — is interpersonal, and it is precisely this that generative AI tools seek to simulate.

But the CHAT Act would extend far past ChatGPT, Google’s Gemini, Anthropic’s Claude, and the rest of the gang of chatbots that have captured the public’s attention. From Apple’s Siri to online customer-service chats to many video games whose characters respond to user acts and speech, numerous other AI-enabled features would also become ensnared in the bill’s net.

It’s unclear whether the CHAT Act’s sponsor intended its broad language to stretch so far — but therein lies the problem. Even in pursuit of a well-meaning goal such as protecting children, legislation developed without a clear grasp of the technology it proposes to regulate all too often generates unintended and regrettable effects. Good intentions cannot confine regulatory consequences to those the well-intentioned foresee.

The first act of governance is legislation, and legislation done hastily introduces cracks into the foundations of the American legal system, destabilizing the whole structure. The problem is particularly acute — and ought to be particularly worrisome — when it affects novel and revolutionary technologies such as AI, on which an ever-greater share of Americans’ everyday lives occurs and on which the future prosperity of the U.S. will be built.

David B. McGarry is the research director at the Taxpayers Protection Alliance.