California lawmakers pass extensive AI safety legislation

While the conversation around the ethics of generative AI continues, the California State Assembly and Senate have taken a significant step by passing the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). This legislation marks one of the first major regulatory efforts for AI in the US.

Developers would have to quickly and fully disable any AI model considered unsafe
The bill, which has been a hot topic of discussion from Silicon Valley to Washington, is set to impose some key rules on AI companies in California. For starters, before diving into training their advanced AI models, companies will need to ensure they can quickly and completely shut down the system if things go awry.

They will also have to protect their models from unsafe modifications after training and conduct more rigorous testing to determine whether a model could pose serious risks or cause significant harm.
Critics of SB 1047, including OpenAI, the company behind ChatGPT, have raised concerns that the law is too fixated on catastrophic risks and might unintentionally hurt small, open-source AI developers.

In response to this pushback, the bill was revised to replace potential criminal penalties with civil ones. The revisions also narrowed the enforcement powers of California's attorney general and modified the criteria for joining a new "Board of Frontier Models" established by the legislation.

Governor Gavin Newsom has until the end of September to make a call on whether to approve or veto the bill.

As AI technology continues to evolve at lightning speed, I do believe regulations are key to keeping users and our data safe. Recently, big tech companies like Apple, Amazon, Google, Meta, and OpenAI came together to adopt a set of AI safety guidelines laid out by the Biden administration. These guidelines center on commitments to test AI systems for bias and security risks before release.

The European Union is also working toward clearer rules and guidelines around AI. Its main goal is to protect user data and scrutinize how tech companies use that data to train their AI models. However, the CEOs of Meta and Spotify recently expressed concerns about the EU's regulatory approach, suggesting that Europe risks falling behind because of its complicated regulations.
