


Legislators should start listening to the techno-optimists and focus on the benefits of rapid, unimpeded AI innovation, rather than just worrying about the risks.
After declining rapidly between late 2022 and late 2024, tech employment in the San Francisco Bay Area has stabilized slightly above 2019 levels. Ongoing corporate efforts to gain efficiency by trimming programming staff are now being offset by employment growth at companies participating in the local artificial intelligence boom. But now, San Francisco’s state senator, Scott Wiener, is proposing state regulation of AI. If enacted, the new regulatory framework could push the AI business out of the Bay Area to other states with more permissive regimes, reducing local tech employment further.
Wiener’s SB 53 is a rehash of his previous AI regulation bill, SB 1047, which sailed through the legislature last year before being vetoed by Governor Gavin Newsom. The governor was heavily lobbied by industry and apparently realized that unleashing AI regulation could jeopardize the state’s hold on the world’s fastest-growing business.
The gates opened for SB 53 on July 1, when the U.S. Senate stripped a prohibition on state-level AI regulation from the One Big Beautiful Bill. While that prohibition was arguably a case of federal overreach, states should be careful before exercising their federally acknowledged prerogative. As we saw earlier this decade, companies can and will move to other states when the tax and regulatory environment in their home state becomes too burdensome.
The AI safety push is another outgrowth of Effective Altruism (EA) and Longtermism: popular Bay Area philosophies that sound good but have thus far produced questionable results. The most famous proponent of these ideas, Sam Bankman-Fried, used them as a rationale for stealing customer funds. After all, if you could effectively use the purloined assets for the long-term welfare of “society,” why not take them? California EA proponents later placed a “pandemic preparedness” income tax on the state’s 2024 ballot because apparently the state had not done enough to lock down residents during the Covid-19 outbreak. Fortunately, the measure was withdrawn ahead of that year’s general election.
In response to EA, Marc Andreessen and other Silicon Valley leaders have been promoting another philosophy, Effective Accelerationism (e/acc), which better meets the moment for the tech community in 2025. Proponents of e/acc are techno-optimists who believe that the faster we develop AI, the quicker it can help us solve poverty and other social problems. From this perspective, anything that slows the advance of AI, such as the regulatory red tape imposed by SB 53, slows social improvement.
Relative to last year’s AI regulation bill, SB 53 appears less onerous. Its main purpose is to impose a series of reporting requirements, including publication of a “safety and security protocol” and submission of “critical safety incident reports” when something bad happens. Starting in 2030, regulated firms would have to commission third-party audits of their AI safety reporting. Finally, the bill includes protections for company whistleblowers alleging AI safety violations.
While the reporting regime may not seem especially burdensome, it is unclear how state regulators will use the information or how frequently they will penalize regulated firms.
Furthermore, all this reporting will not ensure good behavior, because regulation often does not work. I learned this firsthand as an executive in the credit-rating business. After we failed to downgrade Enron on a timely basis, Congress enacted a new regulatory regime for credit-rating agencies in 2006, but that reform did not prevent ratings inflation on residential mortgage-backed securities, a failure that contributed to the Great Recession. Congress followed up with Dodd-Frank, which, in turn, failed to prevent inflated ratings on commercial mortgage-backed securities (as I documented in a 2020 National Review commentary). So requiring AI firms to produce regulatory reports will not necessarily prevent them from releasing malicious code.
Along with its various sticks, SB 53 offers a carrot: a public cloud-computing cluster funded at least in part by California taxpayers. Apparently, private clouds like Amazon Web Services and Microsoft Azure are insufficient and can somehow be bested by a state that has been struggling for two decades to implement its new accounting system. This provision of SB 53, now known as CalCompute, appears to be the AI equivalent of Zohran Mamdani’s government supermarkets and comes with similar prospects for success.
As I discussed in a 2024 NR commentary, Senator Wiener previously gave California a half-baked corporate climate-change reporting regime. State regulators are still struggling to implement that law. Rather than rushing forward with more half-baked legislation, the legislature should pause on AI regulation this year and encourage industry self-regulation in ways that will not disadvantage California or threaten its AI leadership.
Legislators should start listening to the techno-optimists of e/acc and focus on the benefits of rapid, unimpeded AI innovation, rather than just worrying about the risks.