


California Governor Gavin Newsom on Monday signed into law a new set of rules to ensure the safe development of artificial intelligence, creating one of the strongest sets of regulations in the nation.
The Transparency in Frontier Artificial Intelligence Act, or S.B. 53, requires the most advanced A.I. companies to disclose the safety protocols used in building their technologies and to report the greatest risks those technologies pose. The bill also strengthens whistle-blower protections for employees who warn the public about potential dangers of the technology.
State Senator Scott Wiener, a Democrat from San Francisco who proposed the legislation, said the law was crucial to filling a regulatory vacuum and protecting consumers from the potential harms of the fast-growing technology.
“This is a groundbreaking law that promotes both innovation and safety,” Mr. Wiener said. “The two are not mutually exclusive, even though they are often pitted against each other.”
The closely watched California law will escalate the tech industry’s war against states taking regulation of A.I. into their own hands.
Meta, OpenAI, Google and the venture capital firm Andreessen Horowitz have warned that state legislation will put too much of a burden on A.I. companies, which now face dozens of state laws around the country attempting to govern the rapidly advancing technology. The companies have pushed for federal legislation that blocks states from passing a patchwork of rules.
Last month, Meta and Andreessen Horowitz pledged $200 million to two separate super PACs that aim to elect politicians friendly to A.I. to replace legislators creating regulations for the industry.
The California law is a diluted version of a safety bill vetoed last year by Mr. Newsom after a fierce lobbying campaign by the industry against it.
The new law focuses more narrowly on transparency about the safety measures and risks of the models.
This is a developing story. Check back for updates.