Integration era: Congress and courts confront AI ambiguities

By Kaelan Deese | Jun 3, 2025

Artificial intelligence is fast becoming an integral part of everyday life. But as the technology explodes, the debate intensifies on how to harness it properly and ethically. This Washington Examiner series, The Integration Era, will look at how AI is being used responsibly in Congress, how its usage is causing headaches in schools, and how Congress and courts are addressing abuses that target vulnerable people and threats to intellectual property. Read Part 2 here and Part 1 here.

The artificial intelligence revolution is running headfirst into a legal barricade, and it’s unclear what features will be able to survive the impact.

As more victims of explicit deepfake images emerge and copyright fights in courtrooms percolate, the legislative and judicial branches are racing to draw lines around large language models, or LLMs. All the while, these technologies have been rewriting the standards of society before government institutions have a chance to react.

The push to regulate AI through Congress and the courts has boiled down to a contentious question: When do laws cut down on abuse, and when do they overstep First Amendment rights?

In Washington, lawmakers are pushing forward bills aimed at curbing the worst abuses of artificial intelligence, particularly sexually exploitative content, unauthorized digital impersonation of musicians, and the misuse of copyrighted material. Meanwhile, a judge recently allowed the lawsuit between the New York Times and OpenAI to press forward, a case that will decide whether ChatGPT plagiarized and stole intellectual property from the news giant.

The stakes are high as the tools of generative AI test the boundaries of free speech, privacy, and intellectual property law.

Congress nears first AI criminal law with the Take It Down Act

The most advanced AI-related legislation in Congress is the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes On Websites and Networks Act, better known as the Take It Down Act. Sponsored by Sens. Ted Cruz (R-TX) and Amy Klobuchar (D-MN), the bill passed the Senate by unanimous consent in February. On Tuesday, the House Energy and Commerce Committee approved it in a 49–1 vote, clearing the way for a full House vote.

The legislation would criminalize the nonconsensual distribution of explicit images, real or AI-generated, and require websites and platforms to take down such content within 48 hours of a valid request. The bill’s rapid momentum is due in part to high-profile backing from first lady Melania Trump, who has made it a central cause of her revived “Be Best” initiative.
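
To make the takedown mechanics concrete, here is a minimal Python sketch of how a platform might track the bill's 48-hour window. The names and structure are illustrative assumptions, not drawn from the legislation's text or any real platform's system.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical sketch: models the 48-hour takedown clock described in
    # the bill. TakedownRequest and its fields are illustrative names.
    TAKEDOWN_WINDOW = timedelta(hours=48)

    @dataclass
    class TakedownRequest:
        content_id: str
        received_at: datetime               # when a valid request arrived
        removed_at: datetime | None = None  # None while the content is still up

        def deadline(self) -> datetime:
            # The clock starts at receipt of a valid request.
            return self.received_at + TAKEDOWN_WINDOW

        def is_overdue(self, now: datetime) -> bool:
            # Overdue if the content is still up past the deadline.
            return self.removed_at is None and now > self.deadline()

    req = TakedownRequest("img-123", received_at=datetime.now(timezone.utc))
    print(req.is_overdue(datetime.now(timezone.utc)))  # False: window just opened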

Supporters argue it’s long overdue. “As bad actors continue to exploit new technologies like generative artificial intelligence,” Cruz said last week, “the Take It Down Act is crucial for ending the spread of exploitative sexual material online, holding Big Tech accountable, and empowering victims of revenge and deepfake pornography.”

Policy advocates say the bill addresses an urgent problem that has outpaced existing law. Annie Chestnut Tutor, a policy analyst at the Heritage Foundation’s Tech Policy Center, called it “a necessary law,” adding that the committee’s near-unanimous vote shows broad support. “The only thing standing in the way of it being signed into law is the timing of a vote on the House floor,” Tutor said. “It’s a question of when, not if, since Speaker Mike Johnson and Majority Leader Steve Scalise pledged their support when the first lady held a roundtable on the bill in March.”

The law’s urgency is underscored by tragedies like that of Elijah Heacock, a Kentucky teenager whose suicide followed harassment involving deepfake imagery. “I will do everything in my power to get the TAKE IT DOWN Act across the finish line and signed into law by President Trump to honor the lives of Americans like Elijah,” said Rep. Brett Guthrie (R-KY), who chairs the committee shepherding the bill in the House.

“Part of the problem currently is that victims have had trouble getting their images removed or even getting responses from platforms,” Tutor said. “This bill provides a clear path for recourse and gives victims hope.”

President Donald Trump endorsed the bill during his March 4 address to Congress—but also drew criticism for joking about using it to target critics. “Once it passes the House, I look forward to signing that bill into law,” he said. “And I’m going to use that bill for myself too, if you don’t mind, because nobody gets treated worse than I do online, nobody.”

Free speech fears and enforcement gaps raise red flags

Trump’s remark, intended to draw laughs, instead fueled concern from civil liberties groups. The Electronic Frontier Foundation warned the bill’s notice-and-takedown system could be easily exploited by those seeking to silence criticism or control speech. “There is nothing in the law, as written, to stop anyone—especially those with significant resources—from misusing the system to remove speech that criticizes them,” EFF wrote in response.

Even supporters of the bill’s aims worry about its practical implementation. Rep. Frank Pallone (D-NJ), the committee’s ranking member, warned that a “shorthanded FTC” could make enforcement nearly impossible. “There will be no enforcement of anything related to kids’ privacy,” he said.

Pallone proposed an amendment to add a safeguard against fraudulent takedown requests, designed to stop bad actors from impersonating victims and removing consensual content. The amendment was rejected in a voice vote.

AI legal scholar Kevin Frazier said he supports the bill's values but cautioned against permanent federal legislative solutions for emerging technologies. "We need regulatory humility," he said, noting that Congress needs ways to get certain laws "off the books easier" through repeal provisions.

“Sunset provisions or clauses are a must — so that Congress is forced to revisit whether the law is working or if it’s producing unintended harm,” Frazier said.

Tutor dismissed concerns that such comments undercut the law’s intent. “Claiming an AI-generated or deepfake nonconsensual intimate image was just a joke is not an excuse,” she said.

“Nor should it be a reason for an image to remain online. Given how much media attention this issue and the TAKE IT DOWN Act has received, I hope people, particularly teenagers, grasp how serious it is to publish this type of content without someone’s consent,” Tutor added.

NO FAKES Act tackles AI impersonation—but could conflict with state-level laws

Running parallel to the Take It Down Act is a revived push for the NO FAKES Act (Nurture Originals, Foster Art, and Keep Entertainment Safe), a bill designed to protect against unauthorized digital impersonations using AI, particularly of artists, performers, and public figures.

The newly expanded version of the bill would:

- Establish a federal right of publicity to sue over unauthorized replicas of voice and likeness
- Require digital fingerprinting to block reuploads of removed content (illustrated in the sketch after this list)
- Extend safe harbor protections for platforms that comply with takedown requests
- Impose civil penalties of up to $750,000 per violation for platforms that fail to act in good faith
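
The "digital fingerprinting" item above refers to matching new uploads against fingerprints of already-removed content. As a rough illustration only: real systems use perceptual hashes that survive re-encoding and cropping, while the exact SHA-256 hash below catches only byte-identical copies.

    import hashlib

    # Illustrative only: production fingerprinting uses perceptual hashing;
    # SHA-256 here flags only exact byte-for-byte copies.
    blocked_fingerprints: set[str] = set()

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def register_removed(data: bytes) -> None:
        # When content is taken down, add its fingerprint to the blocklist.
        blocked_fingerprints.add(fingerprint(data))

    def allow_upload(data: bytes) -> bool:
        # Reject any upload whose fingerprint matches removed content.
        return fingerprint(data) not in blocked_fingerprints

    removed = b"bytes of a removed image"
    register_removed(removed)
    print(allow_upload(removed))         # False: byte-identical reupload blocked
    print(allow_upload(b"other bytes"))  # True: no fingerprint match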

Backed by Google, OpenAI, SAG-AFTRA, and the major record labels, the bill has gained traction in both chambers. Supporters argue it would end the confusing patchwork of state-level protections—laws that vary widely in scope and enforcement.

But cautious observers, including Frazier, argue that state-level regulation is more appropriate when it comes to AI “adoption”—how the technology is used in day-to-day life.

“States like Tennessee have already passed meaningful legislation,” he said, referencing the ELVIS Act, which bans AI impersonations without consent. Similar laws are now in effect in California and Illinois.

“And so my issue with trying to regulate AI adoption at the federal level is you may preempt some of those state laws, and you may undermine the ability of states to tailor AI adoption to the needs and values of their residents,” Frazier said, adding the caveat that AI regulations for training models could be better handled at the federal level.

Copyright clash could rewire the AI industry

Meanwhile, a major court battle may prove just as transformative as any new law. In New York Times v. OpenAI, a federal judge recently ruled that the outlet can continue its lawsuit accusing OpenAI and Microsoft of copyright infringement.

The New York Times claims that its articles were used without permission to train ChatGPT and that the chatbot has produced output resembling its journalism. U.S. District Judge Sidney Stein found the New York Times had shown “numerous” and “widely publicized” examples of potential infringement, keeping the core claims alive.
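
One common way to quantify "output resembling its journalism" is to measure the share of a model output's word n-grams that also appear in a source article; high overlap flags near-verbatim reproduction. The sketch below is a generic illustration of that idea, not the method used by the Times, OpenAI, or the court.

    # Generic overlap test: the fraction of the output's 8-word sequences
    # that also appear in the source. Thresholds and tokenization are illustrative.
    def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_ratio(output: str, source: str, n: int = 8) -> float:
        out = ngrams(output, n)
        return len(out & ngrams(source, n)) / len(out) if out else 0.0

    article = "the quick brown fox jumps over the lazy dog near the river bank today"
    generated = "reports say the quick brown fox jumps over the lazy dog near the river"
    print(f"{overlap_ratio(generated, article):.2f}")  # ~0.71: heavy near-verbatim overlap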

“This is a monumental case with huge ramifications for America’s AI competitiveness,” said Frazier.

Taking OpenAI at its word, Frazier said the company has “made clear that a shortage of data is an existential threat to its ability to continue to develop models that are competitive with Chinese models.” This underscores a central problem with the suit, which threatens to stifle AI growth and maturity as U.S.-based companies compete with Chinese LLMs like DeepSeek.

He added that U.S. copyright law, meant to promote the diffusion of knowledge, has increasingly become a tool of large institutions, while smaller businesses often lack the resources to take legal action when necessary.

“I think it’s also important to put this in context, that copyright law as it stands right now isn’t necessarily [shielding] the sorts of local artists and small-town publications that people really want to protect,” Frazier said.

“So we kind of need to have a more nuanced narrative about what this case means for AI development as a whole,” he added.

If the courts side with the New York Times, OpenAI and Microsoft could be forced to retrain models, alter outputs, or compensate rights holders — outcomes that would ripple across the industry.