California Advances Landmark Legislation To Regulate Large AI Models
Source: The Guardian
Photo: A person stands in front of a Meta sign outside of the company’s headquarters in Menlo Park, California, on 7 March 2023. (Jeff Chiu/AP)
Groundbreaking bill aims to reduce potential AI risks – requiring model testing and disclosure of safety protocols
A California bill that would establish first-in-the-nation safety measures for the largest artificial intelligence systems cleared an important vote Wednesday. The proposal, aiming to reduce potential risks created by AI, would require companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons – scenarios experts say could be possible in the future with such rapid advancements in the industry.
The measure squeaked by in the state assembly Wednesday and won procedural approval in the state senate. It now heads to the governor’s desk for his signature, though he has not indicated his position on it. Governor Gavin Newsom has until the end of September to decide whether to sign it into law, veto it or allow it to become law without his signature. He declined to weigh in on the measure earlier this summer but had warned against AI overregulation.
Supporters said it would set some of the first much-needed safety ground rules for large-scale AI models in the United States. The bill targets systems that cost more than $100m to train. No current AI models have hit that threshold.
The proposal, authored by Democratic senator Scott Wiener, faced fierce opposition from venture capital firms and tech companies, including OpenAI, Google and Meta, the parent company of Facebook and Instagram. They say safety regulations should be established by the federal government and that the California legislation takes aim at developers instead of targeting those who use and exploit the AI systems for harm.
Wiener said his legislation took a “light touch” approach.
“Innovation and safety can go hand in hand – and California is leading the way,” he said in a statement after the vote.
Wiener’s proposal is among dozens of AI bills California lawmakers proposed this year to build public trust, fight algorithmic discrimination and outlaw deepfakes that involve elections or pornography. With AI increasingly affecting the daily lives of Americans, state legislators have tried to strike a balance of reining in the technology and its potential risks without stifling the booming homegrown industry.
California, home to 35 of the world’s top 50 AI companies, has been an early adopter of AI technologies and could soon deploy generative AI tools to address highway congestion and road safety, among other things.
Elon Musk, owner of X (formerly Twitter) and founder of xAI, threw his support behind the proposal this week, though he said it was a “tough call”. X operates its own chatbot and image generator, Grok, which has fewer safeguards in place than other prominent AI models.
“For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public,” Musk tweeted.
Several California members of the US House also opposed the bill, with former House speaker Nancy Pelosi calling it “well-intentioned but ill-informed”.
Chamber of Progress, a left-leaning Silicon Valley-funded industry group, said the bill is “based on science fiction fantasies of what AI could look like”.
“This bill has more in common with Blade Runner or The Terminator than the real world,” senior tech policy director Todd O’Boyle said in a statement after the Wednesday vote. “We shouldn’t hamstring California’s leading economic sector over a theoretical scenario.”
The legislation is also supported by Anthropic, an AI startup backed by Amazon and Google, after Wiener adjusted the bill earlier this month to include some of the company’s suggestions. The current version removed a perjury-penalty provision, limited the state attorney general’s power to sue violators and narrowed the responsibilities of a new AI regulatory agency.
Anthropic said in a letter to Newsom that the bill is crucial to prevent catastrophic misuse of powerful AI systems and that “its benefits likely outweigh its costs”.
Wiener also slammed critics earlier this week for dismissing potential catastrophic risks from powerful AI models as unrealistic: “If they really think the risks are fake, then the bill should present no issue whatsoever.”
https://www.theguardian.com/technology/article/2024/aug/29/california-ai-regulation-bill