California Governor Rejects Artificial Intelligence Safety Legislation Targeting Tech Giants

California Gov. Gavin Newsom Vetoes Bill Restricting AI Model Development

California Governor Gavin Newsom (D) has vetoed SB-1047, a bill that sought to impose strict regulations on developers of advanced artificial intelligence (AI) models. The bill, which had the support of leading AI researchers, the Center for AI Safety, and others, aimed to establish safety and security standards for the development of the most powerful AI systems.

In his veto message, Governor Newsom argued that the bill did not account for whether an AI system is deployed in a high-risk environment, involves critical decision-making, or handles sensitive data, applying the same stringent standards even to basic functions. He emphasized the need for a more tailored approach to regulating AI technologies, especially in high-risk environments.

The proposed legislation, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would have required developers of the largest frontier AI models to take steps to prevent critical harms that could result from their technology. The bill also would have required a failsafe capability to fully shut down a covered model in certain circumstances.

Despite broad bipartisan support for SB-1047, Governor Newsom’s decision to veto the bill has sparked debate among tech industry stakeholders. While some view the veto as a missed opportunity to enhance AI safety and security, others argue that the legislation was based on unfounded doomsday scenarios about AI.

Ultimately, the veto highlights the ongoing challenge of balancing innovation with regulation in the rapidly evolving field of artificial intelligence. As the debate continues, stakeholders are calling for a more nuanced and comprehensive approach to AI regulation to address the complex ethical and safety challenges posed by advanced AI technologies.
