


The sponsors of a new AI bill called the “No AI Fraud Act” claim it “gives all Americans the tools to protect their digital personas.” Sounds good, right?
Whether famous or not, many individuals worry that AI tools could produce deepfakes of them saying or doing something they never said or did. But this proposal, like many other attempts to regulate AI in the name of such protection, would have far-reaching consequences for much of the online creativity we already enjoy and could chill speech more generally.
The problem with regulations that purport to limit the malicious use of AI is that many of them would also foreclose the beneficial and benign uses of AI that we see in the creative arts. AI already figures in many stages of the creative process. The bill’s broad terms could make it difficult to develop AI tools for film editing, for instance, or harder to use existing tools that lower post-production costs by sparing studios from bringing an actor back to re-record lines.
Beyond that, the proposal could take away the ability to engage in certain types of content, like parody or sampling. This type of speech has been previously protected by the courts as “fair use,” so a ban on it likely violates the First Amendment.
One of the key problems with trying to regulate AI is trying to define it. The definition of artificial intelligence, at its most basic, is the use of programming or machines rather than a living being to solve problems. As a result, the average consumer uses AI far more often than they realize. It is already in our online searches, the chatbots with which we interact, and the mapping algorithms that help us find the fastest route in traffic.
The bill recognizes that there are potential First Amendment concerns, so the text establishes a “First Amendment defense” (something that highlights the proposal’s many problems). An abundance of caution to ensure compliance could chill existing content like parody TikTok videos, AI-generated political cartoons, or even a message translated into another language in the original speaker’s or singer’s voice.
Reason’s Elizabeth Nolan Brown noted that if this measure becomes law, she expects to see “a lot more takedowns of anything that might come close to being a violation, be it a clip of a Saturday Night Live skit lampooning Trump, a comedic impression of Taylor Swift, or a weird ChatGPT-generated image of Ayn Rand.”
“I would also expect to see more platforms institute blanket bans on parody accounts and the like,” Brown added.
We have past examples that make such concerns and the desire for caution more than valid. For example, while the Digital Millennium Copyright Act (DMCA) is designed to help protect intellectual property on online platforms, these platforms often act first to limit or remove content when they receive DMCA takedown notices, out of concern for their potential liability if they don’t.
For instance, DMCA claims have taken down a law-school panel on music copyright and embroiled a Star Wars parody video in a copyright dispute. In fact, about 40 percent of such notices in 2016 (when there was less content online) were found to be invalid, false, or misleading.
The potential fears raised by AI — for instance, misleading celebrity endorsements or the manipulation of an average user’s likeness — are not new; they’ve been raised by other technological innovations before, from Photoshop to voice recordings to the manipulation technologies used to edit and produce special effects for movies.
These concerns are, ultimately, about bad actors’ use of a technology, not the technology itself, and the response and responsibility should reflect this. Existing laws still apply to fraudulent misrepresentation and other harms that might arise from AI. As with prior technologies like the camera, our societal norms will evolve to help us understand what we can and can’t trust.
We’re not wrong to worry about the potential abuse of AI. But we should be careful not to let our fears lead us to regulate away our speech rights and the beneficial uses of the technology.