


As part of the budget legislation being debated on Capitol Hill, the Senate this week rejected an amendment that would have prohibited states from regulating artificial intelligence. Backers of the amendment feared, with some merit, that a patchwork of rules would slow down technological development. But the amendment went too far.
The answer to this problem is not to prevent states from protecting their citizens from Big Tech’s excesses; it’s for the federal government to do so itself. Congress must create guardrails and set clear rules to ensure AI’s development benefits us all. And there’s no time to waste.
Because I represent the interests of content creators such as news publishers and magazines, you might find it surprising that I am optimistic about our AI future. AI offers potential benefits in almost all walks of life — from reporters needing to sift through thousands of historical documents, to doctors looking for early cancers. We don’t want to lose these opportunities. But we can and should pursue these goals while ensuring AI conforms to ethical standards.
AI depends on the creation of content by a huge range of people, from those who write for medical and engineering journals, to reporters, to those who create illustrations for children. AI companies scoop up all of that material, mostly without paying for it. They just steal it, although they don’t like to use that word; they are trying to convince the courts that their grabbing of this material is “fair use” and already covered by existing law and custom.
But when an entire article is scraped without compensation and regurgitated to someone who will then never look at that article, know where it came from, or find out who wrote it, that is not fair use. It is theft. And it’s theft that is already occurring on such a large scale that creators and publishers are worried they won’t be able to afford to make the content that consumers want for much longer. If this continues, exploitative AI will eventually preside over its own demise by destroying the incentive to create the content on which it feeds.
Congress should issue rules to protect the creators of the content that AI tools use — publishers, authors, journalists, artists, and creatives of all types. Such rules would help creators and AI consumers alike, ensuring that creators are compensated and that users have access to reliable information.
If Big Tech is left to its own devices, Americans will just have to trust AI output while knowing little about its basis. In most cases, we’ll have no other reference points to measure the quality of an AI response, or whether it’s even correct at all.
Right now, we don’t know much about what AI companies are scraping to train their systems, so we often have no way of knowing whether the output is useful or accurate unless an error is startlingly obvious. When Google’s AI tool Gemini first launched, it famously generated embarrassing pictures of people of color as German soldiers in World War II, among other foibles. (Google’s chief executive had to admit this was “completely unacceptable.”)
But mistakes are often more subtle. And what if, over time, fewer and fewer of us have the information to determine what is true and what is false? Because nothing about AI is transparent at the moment, publishers cannot even begin to enforce their intellectual property rights. Congress should require that the sources used by AI be fully disclosed — both the ones it uses to train its models and the ones it uses to keep them current.
Armed with that information, publishers will be able to press AI companies to enter agreements before using their work. That will help preserve thousands of jobs, especially in small businesses.
Disclosure will also help users, who will know where the information they consume comes from and whether some sources have been excluded because of bias. The agreements themselves will probably include provisions to keep the companies, and foreign actors, from distorting the flow of information.
The White House blueprint for AI recognizes that AI development must respect intellectual property and the rule of law. Our nation’s leadership in this field depends on AI products being accurate and drawing from a solid base of journalism. To achieve AI’s full potential, and avoid its serious dangers, Congress needs to act quickly.
Danielle Coffey is president and CEO of the News/Media Alliance, which represents more than 2,200 publishers nationwide.