The FCC’s proposed rule on AI in political ads is misguided

The Federal Communications Commission initiated a new rulemaking proceeding last month to require broadcasters, along with cable and satellite TV providers, to disclose the use of any AI-generated content in political advertisements. Probably not coincidentally, this new administrative state power grab mirrors a plea submitted to the Federal Election Commission by the Democratic National Committee asking the FEC to regulate AI-generated political speech.

FCC Chairwoman Jessica Rosenworcel and her two Democratic colleagues who control the five-member agency claim the rule is justified to “provide greater transparency regarding the use of AI-generated content in political advertising.”  

Although the FCC won’t even finish receiving comments on the proposal for more than six weeks, Rosenworcel has said, incredibly, that she wants to fast-track a decision before Election Day on Nov. 5. Never mind that early voting begins in many states in September.

In dissenting from initiating the rulemaking, Republican Commissioner Nathan Simington readily conceded that artificial intelligence is a “buzzy topic,” but warned that “absent the compelling force of an ongoing crisis or a situation worsening moment-to-moment, the worst time to regulate a domain is when everyone is talking about it.” 

His fellow Republican Brendan Carr declared in dissent: “Far from promoting transparency, the FCC’s proposed rules would mire voters in confusion, create a patchwork of inconsistent rules, and encourage monied, partisan interests to weaponize the law for electoral advantage.”

I can’t be sure whether the FCC’s proposed AI rule is merely ill-considered, deliberately mischievous, or both. Regardless, it’s highly problematic as a matter of policy and of law.

To highlight the proposed rule’s policy conundrums, it’s useful to quote the agency’s definition of “AI-generated content”: “An image, audio, or video that has been generated using computational technology or other machine-based system that depicts an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including, in particular, AI-generated voices that sound like human voices, and AI-generated actors that appear to be human actors.”

Most of today’s political ads involve some use of “computational technology” or “other machine-based” systems to produce the image, audio, and video components of the ads. Think of the commonly available software tools used to enhance a candidate’s appearance or voice. As Simington asks, “why would political advertisers not just say: let’s label most, or every, ad as generated by artificial intelligence,” quickly rendering such disclosures meaningless?

Carr highlights another aspect of the FCC’s proposal that invites confusion and political mischief. He predicts the proposal will induce partisans to flood the agency with complaints alleging that ads contain AI-generated content.

The commission, for example, suggests that perhaps a broadcaster should be obligated to take corrective action if informed by a “credible third party” that an ad without a disclaimer has AI-generated content. We should not be so naive as to believe there would be agreement among partisans as to which third-party arbiters are “credible.” After all, shortly before President Joe Biden withdrew from the presidential race, the Washington Post, following the Biden administration’s lead, criticized videos depicting his frailties as “cheap fakes.” I certainly wouldn’t want the government designating third parties as “credible” or not for purposes of deciding whether political ads should be taken down.

Another significant problem with the FCC’s proposal is that the disclosure mandate applies only to broadcasters and cable and satellite TV operators, not to online providers, at a time when political advertising is rapidly moving online. Online political ads are likely to contain more deliberately deceptive AI-generated content than on-air ads. And as on-air operators fight to retain viewers migrating to unregulated streaming platforms, this new regulatory burden would be just another competitive disadvantage.

Aside from the policy problems, the FCC’s proposal almost certainly is legally flawed. Sean Cooksey, the chairman of the FEC, has written Rosenworcel explaining that the FCC’s proposal “would invade the FEC’s jurisdiction.” According to Cooksey, the FEC has “sole authority” to administer the federal election laws, including “the disclaimer and reporting requirements specific to political communications.”

Indeed, the FEC is already engaged in its own rulemaking to consider whether the use of AI in political ads should be regulated. There is a real concern that if the FCC proceeds, its regulations will create conflicts with those that may be adopted by the FEC, creating real confusion for political campaigns and voters alike. 


And the FCC’s own claim to authority under the Communications Act is shaky at best, especially now that, after the Supreme Court’s recent decision in Loper Bright Enterprises v. Raimondo, the agency will no longer receive any deference to support novel interpretations of ambiguous statutory authority.

In sum, the FCC’s rush to adopt a novel AI political ad rule is a misguided power grab — a combination of bad policy and bad law.

Randolph May is president of the Free State Foundation, a free market-oriented think tank in Rockville, Maryland.