
California Gov. Newsom vetoes SB 1047, a bill aimed at preventing AI disasters

California Gov. Gavin Newsom has vetoed SB 1047, a bill that aimed to prevent bad actors from using AI to cause "critical harm" to humans. The California State Assembly passed the legislation by a margin of 41-9 on August 28, but several organizations, including the Chamber of Commerce, had urged Newsom to veto it. In his veto message on September 29, Newsom called the bill "well-intentioned" but said it "does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions - so long as a large system deploys it."

SB 1047 would have made the developers of AI models responsible for adopting safety protocols to prevent catastrophic uses of their technology. Those measures included testing and outside risk assessment, as well as an "emergency stop" capable of completely shutting down an AI model. A first violation would have cost a minimum of $10 million, with $30 million for subsequent infractions. However, the bill was revised to eliminate the state attorney general's ability to sue AI companies for negligent practices if a catastrophic event did not occur. Companies would only have been subject to injunctive relief and could have been sued if their model caused critical harm.

The law would have applied to AI models that cost at least $100 million to train and require 10^26 FLOPS (floating-point operations) of computing power. It also would have covered derivative projects in instances where a third party invested $10 million or more in developing or modifying the original model. Any company doing business in California would have been subject to the rules if it met the other requirements. Addressing the bill's focus on large-scale systems, Newsom said, "I do not believe this is the best approach to protecting the public from real threats posed by the technology." The veto message adds:

By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.

The earlier version of SB 1047 would have created a new department, the Frontier Model Division, to oversee and enforce the rules. Instead, the bill was altered ahead of a committee vote to place governance in the hands of a Board of Frontier Models within the Government Operations Agency. Its nine members would have been appointed by the state's governor and legislature.

The bill faced a complicated path to the final vote. SB 1047 was authored by California State Sen. Scott Wiener, who told TechCrunch: "We have a history with technology of waiting for harms to happen, and then wringing our hands. Let’s not wait for something bad to happen. Let’s just get out ahead of it." Notable AI researchers Geoffrey Hinton and Yoshua Bengio backed the legislation, as did the Center for AI Safety, which has been raising the alarm about AI's risks over the past year.

"Let me be clear - I agree with the author - we cannot afford to wait for a major catastrophe to occur before taking action to protect the public," Newsom said in the veto message. The statement continues:

California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.

SB 1047 drew heavy-hitting opposition from across the tech space. AI researcher Fei-Fei Li critiqued the bill, as did Meta Chief AI Scientist Yann LeCun, arguing it would limit the potential to explore new uses of AI. The trade group representing tech giants such as Amazon, Apple and Google said SB 1047 would restrict new developments in the state's tech sector. Venture capital firm Andreessen Horowitz and several startups also questioned whether the bill placed unnecessary financial burdens on AI innovators. Anthropic and other opponents of the original bill pushed for amendments that were adopted in the version of SB 1047 that passed California's Appropriations Committee on August 15.

This article originally appeared on Engadget at https://ift.tt/OUAf71i
