Governor Gavin Newsom has vetoed a contentious bill aimed at regulating AI development.
In his veto message, he explained that SB 1047 applied only to the largest and costliest AI models. He warned that this focus could create a misleading sense of safety while overlooking the risks posed by smaller, specialized models that handle critical decisions involving sensitive data, such as medical records, whereas larger models often manage lower-risk tasks like customer service. Newsom criticized the bill for imposing strict standards even on basic functions, arguing that it failed to address real threats to public safety.
Governor Newsom vetoes SB 1047, @Scott_Wiener's AI regulatory bill 🎉
"California is home to 32 of the world's 50 leading AI companies…I do not believe this is the best approach to protecting the public from real threats posed by the technology." pic.twitter.com/RiWwlJ4Suv
— Adam Kovacevich (@adamkovac) September 29, 2024
Newsom’s office reported that he has signed 17 bills on AI regulation in the past 30 days and has consulted experts including Tino Cuéllar, Fei-Fei Li, and Jennifer Tour Chayes to help create effective guardrails for deploying generative AI. Newsom added that he had also sought assistance from leading experts at the US AI Safety Institute to help California craft “workable guardrails” grounded in a science-based analysis of frontier models and their risks.
Scott Wiener, the bill’s author and a Democratic state senator, called Newsom’s veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet.”
My statement on the Governor’s veto of SB 1047: pic.twitter.com/SsuBvV2mMI
— Senator Scott Wiener (@Scott_Wiener) September 29, 2024
Controversial AI Safety Bill
California’s legislature advanced the AI safety bill on August 28, when the California State Assembly approved SB 1047. The bill would have required major AI companies such as Microsoft and OpenAI to implement safety measures before releasing models to the public, and Newsom had until September 30 to sign or veto it.
SB 1047 mandated that developers of large AI models conduct safety tests to reduce the risk of “critical harm,” defined as cyberattacks causing at least $500 million in damage or mass casualties. Developers would also have had to ensure that a human could shut down their AI if it behaved dangerously. The bill applied to models exceeding a specific computing-power threshold and costing over $100 million to train, covering any company doing business in California. It would also have allowed the state attorney general to take legal action against developers for significant harm caused by their systems.
Critics of the bill, including OpenAI and politicians such as Nancy Pelosi, objected to the enforcement powers it granted the state attorney general. Industry groups like the U.S. Chamber of Commerce also opposed it. Many smaller companies argued the bill would stifle innovation by deterring large developers from openly sharing their models, threatening the startup ecosystem that relies on that openness.
In contrast, Elon Musk supported the bill, citing the public risks posed by AI as justification for regulation. Anthropic initially raised concerns about the bill but later concluded that the benefits outweighed the costs after the legislature amended some clauses.
California is home to over 30 leading AI companies and has been an early adopter of the technology. The state also plans to use generative AI tools to improve road safety and reduce congestion.