As artificial intelligence continues to advance, regulatory measures become increasingly crucial to ensuring safety and ethical standards. On September 29, 2024, California Governor Gavin Newsom vetoed SB 1047, a proposed bill that would have set stringent guidelines for AI developers. The decision highlights the ongoing debate over balancing technological innovation with necessary oversight.
Legislative efforts to regulate AI in California have evolved over time. Earlier proposals often lacked specific enforcement mechanisms, leading to challenges in implementation. The veto of SB 1047 reflects a shift towards more targeted and practical approaches to AI regulation, distinguishing it from past legislative attempts.
Governor Newsom’s Veto and Rationale
Governor Newsom cited concerns that SB 1047 imposed stringent standards regardless of the context in which AI systems are deployed. “I do not believe this is the best approach to protecting the public from real threats posed by the technology,” he wrote. The bill also drew criticism for applying broad safety requirements to all large AI models rather than to specific high-risk uses, a standard opponents argued could stifle smaller developers.
New Legislation Introduced as an Alternative
In place of SB 1047, Newsom enacted AB 2013, which mandates transparency in AI training data. This law requires developers to disclose the sources of data used to train generative AI systems, aiming to enhance accountability without imposing excessive regulatory burdens.
“While well-intentioned, SB 1047 does not take into account whether an A.I. system is deployed in high-risk environments,” explained Tatiana Rice, deputy director for U.S. legislation at the Future of Privacy Forum.
State and International AI Regulatory Trends
During the 2024 legislative session, more than 45 states introduced AI-related bills, illustrating nationwide interest in AI governance. Colorado became the first U.S. state to enact comprehensive AI legislation with the Colorado AI Act, which mandates safeguards against algorithmic discrimination in high-risk AI applications and echoes the risk-based approach of the European Union’s AI Act.
“What is acceptable for A.I. being used to diagnose a health condition should be different from A.I. being used to power driverless cars,” stated Ashley Casovan, managing director of the AI Governance Center at the International Association of Privacy Professionals.
The veto of SB 1047 signals a preference in California for more nuanced AI regulation, favoring targeted transparency over blanket safety mandates. Experts such as Tatiana Rice advocate for federal or international standards to create a cohesive regulatory environment. Concerns about inconsistent state regulation persist: Craig Smith warns that a patchwork of laws could complicate AI development and enforcement nationwide.