National Review
24 May 2024
Dominic Pino


The Corner: Election Season Is Handling AI Just Fine So Far

One variety of fearmongering about artificial intelligence was really just a subset of the media’s fearmongering about “misinformation,” i.e., that some voters might be duped into believing things that aren’t true in such a way that it would affect their voting.

A cynic might say we already have a word for that phenomenon: “campaigning.” But we saw an example of how this might look when someone made robocalls with an AI-generated voice that sounded like President Biden, spreading false information about elections in the lead-up to the New Hampshire primaries.

This is an abuse of a new technology. Do we need new laws and regulations to prevent it?

It turns out authorities already have legal tools to counter it. The Associated Press reports that the political consultant who created the phone calls, Steven Kramer, faces a $6 million fine from the Federal Communications Commission for his actions, and the company that is accused of transmitting them, Lingo Telecom, faces a $2 million fine. Kramer already admitted he sent the calls, which he claimed were to raise the alarm about AI in elections and not to influence the outcome. Lingo Telecom denies any wrongdoing.

In New Hampshire, Kramer also faces 13 felony charges for “attempting to deter someone from voting using misleading information” and 13 misdemeanor charges for “falsely representing himself as a candidate by his own conduct or that of another person.”

We’ll see how these state and federal proceedings play out. This is what the U.S. adversarial court system is for: it can adapt to new cases as they come. The courts will have the opportunity to disentangle exactly who is responsible for abuses and how the responsible parties ought to be punished. The patience required for this approach has not resulted in widespread panic during election season, as a recent piece in the New York Times pointed out.

“With less than six months until the 2024 election, the political uses of A.I. are more theoretical than transformational, both as a constructive communications tool or as a way to spread dangerous disinformation,” the story says. It quotes several political strategists who either haven’t found genuinely effective uses for AI in campaigning or are using it in uncontroversial ways. One who used an AI voice for an advertisement — not impersonating any real person — said it wasn’t any cheaper than hiring a human voice actor. One Democratic primary candidate for Pennsylvania’s Tenth Congressional District used an AI phone-banking system and finished in third place with less than 15 percent of the vote.

There’s still plenty of time left before Election Day, and perhaps egregious and clever AI abuses are upcoming. But so far there’s no reason to believe AI poses an unusual threat to elections, and calls to preemptively regulate it on those grounds are not justified.