


While the One Big Beautiful Bill works its way through Congress, some Republicans are uneasy about how it approaches AI.
As Jennifer Van Laar reported in late June, the bill places a moratorium on regulating AI at the state level, which opens its own Pandora's box of issues:
Additionally, the moratorium prevents states from passing laws to protect creatives from having their work product stolen by AI companies who want to use it to train their AI models without having to pay for the use of that product. What does that mean, exactly?
Let's take the example of Meta's AI model, Llama 3. The company was under pressure to quickly train the program to compete with more established models like ChatGPT and, according to court filings in a related lawsuit, the senior manager for the project emphasized that they needed books, not web data, to properly train their product. Internal documents reported on by The Atlantic show that Meta employees believed the process of properly licensing books and research papers would be too slow and expensive, so they got permission from "MZ" (likely Mark Zuckerberg) to use a huge database of pirated books called Library Genesis, or LibGen. Free and fast - and using stolen intellectual property.
Meaning, states can't defend your private property from being trained on, nor provide a venue in which to sue for compensation for the data AI companies used to train the model.
She also points out that the issue could harm conservative speech via Big Tech censorship, as it could prevent states like Texas or Florida from enforcing laws that push back against AI systems suppressing conservative news or viewpoints. The result would effectively be another information blackout, one that could mislead voters and leave them casting ballots against their own best interests in ignorance.
As I write this, AI companies are in an arms race to build better, faster, and more knowledge-rich models, and they're pulling out all the stops and cutting every corner they can to do so. Lawsuits have sprung up to protect intellectual property, and AI companies are probing for loopholes to see if they can get around them. As the Associated Press reported, Anthropic bought books, ripped out the pages, and scanned them in an attempt to sidestep copyright law:
“That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages,” Alsup wrote.
The ruling could set a precedent for similar lawsuits that have piled up against Anthropic competitor OpenAI, maker of ChatGPT, as well as against Meta Platforms, the parent company of Facebook and Instagram.
Anthropic — founded by ex-OpenAI leaders in 2021 — has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way.
But the lawsuit filed last year alleged that Anthropic’s actions “have made a mockery of its lofty goals” by building its AI product on pirated writings.
Companies are even raiding libraries for old books, newspapers, and documents to feed their models. Harvard University is assisting in this effort, creating a treasure trove of old texts, and when I say old, I mean centuries old. It's mostly public domain data, but as the copyright war over AI continues, it's a significant addition.
As the OBBB places that moratorium on states' ability to regulate AI companies, some Republicans are getting cold feet, including Tennessee Sen. Marsha Blackburn, who pulled out of a deal over its inclusion, arguing that states should be able to take AI companies to task themselves if necessary. As reported by The Hill, Blackburn sent an open letter to Texas Sen. and Commerce Chair Ted Cruz announcing her withdrawal, noting that she is particularly concerned about potential harm to children:
“While I appreciate Chairman Cruz’s efforts to find acceptable language that allows states to protect their citizens from the abuses of AI, the current language is not acceptable to those who need these protections the most,” Blackburn said in a statement.
“This provision could allow Big Tech to continue to exploit kids, creators, and conservatives,” she continued. “Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens.”
Blackburn has been a key proponent of the Kids Online Safety Act (KOSA), which she reintroduced last month alongside Sen. Richard Blumenthal (D-Conn.) and Senate leadership.
“For as long as I’ve been in Congress, I’ve worked alongside federal and state legislators, parents seeking to protect their kids online, and the creative community in Tennessee to fight back against Big Tech’s exploitation by passing legislation to govern the virtual space,” she added.
Blackburn joined Sen. Maria Cantwell (D-Wash.) in having the provision stripped from the OBBB entirely.
“It’s just another giveaway to tech companies,” Cantwell said in a statement. “This provision gives AI and social media a brand-new shield against litigation and state regulation. This is Section 230 on steroids.”