In a moment that may define California's relationship with artificial intelligence for a generation, both chambers of the state legislature have passed Senate Bill 1047 — the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — and sent it to Governor Gavin Newsom for his signature. The bill, authored by State Senator Scott Wiener (D-San Francisco), is the most ambitious AI safety legislation proposed anywhere in the United States. Whether you work in tech, worry about the future, or simply live in a state that is home to the companies building these systems, this bill affects you.

This article is a comprehensive, plain-language guide to what SB 1047 actually says — how it works, who it covers, what it demands, and why Californians should view it as a meaningful step in the right direction.

What Is SB 1047?

The Core Idea: Responsibility Before Deployment

SB 1047 begins with a straightforward premise: when a company builds one of the most powerful AI systems in human history, it should be legally required to verify that system will not cause catastrophic harm before releasing it to the world. This is not a radical concept. We require car manufacturers to install seat belts before selling vehicles. We require pharmaceutical companies to run clinical trials before selling drugs. SB 1047 applies the same logic to frontier artificial intelligence.

The bill does not regulate AI broadly. It does not govern chatbots for customer service, AI-generated photo filters, or algorithmic content recommendations. Its scope is deliberately narrow: it targets only the handful of “covered models” — the most expensive, most powerful AI systems on the planet — where the potential for catastrophic misuse is real and documented.

SB 1047 At a Glance
Bill Number: SB 1047
Author: Sen. Scott Wiener (D-SF)
Senate Vote: 32–1 (May 21, 2024)
Status: Awaiting Governor's Signature
Training Compute Threshold: 10²⁶ FLOPs
Training Cost Threshold: $100 million+

A "FLOP" is a floating point operation — a unit of computational work. 10²⁶ FLOPs is an almost incomprehensibly large number, equivalent to roughly the entire known computing capacity of humanity running for several months.

Who Does It Cover?

The “Covered Model” Threshold

The bill introduces the concept of a “covered model” — an AI system whose training involved at least $100 million in compute costs and used more than 10²⁶ integer or floating-point operations. If a company subsequently fine-tunes or modifies such a model using more than $10 million in additional compute, that derivative version is also covered.

To understand the scale involved, consider that OpenAI CEO Sam Altman acknowledged GPT-4 cost approximately this much to train. Mark Zuckerberg has stated that the next generation of Meta’s flagship Llama model will require ten times more compute than its predecessor — which would put it squarely within SB 1047’s reach. Very few AI systems today meet these thresholds, but the companies currently racing toward them — OpenAI, Google, Meta, Anthropic, and Microsoft — are all headquartered or operating in California.

Critically, the bill applies to any developer doing business in California, regardless of where the company is incorporated. This prevents a simple workaround of relocating to Nevada or Texas to escape California’s reach — if you sell into California’s market of nearly 40 million consumers, the rules apply to you.

Coverage thresholds by model type:

New (non-derivative) covered model: more than 10²⁶ FLOPs of training compute, at a cost of $100 million or more.
Fine-tuned or derivative model: more than $10 million in additional fine-tuning compute.
Small and consumer AI tools: not covered.
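For readers who think in code, the coverage test described above reduces to a pair of threshold checks. The following Python sketch is illustrative only; the function names are mine, and the statute's actual definitions carry more nuance than a few lines can:

```python
# Illustrative sketch of SB 1047's coverage thresholds as described above.
# Not legal advice; the statutory definitions contain additional nuance.

COMPUTE_THRESHOLD_FLOPS = 1e26              # training compute for a new covered model
COST_THRESHOLD_USD = 100_000_000            # training cost for a new covered model
FINE_TUNE_COST_THRESHOLD_USD = 10_000_000   # additional compute cost for derivatives

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """A new model is covered if it crosses BOTH the compute and cost thresholds."""
    return (training_flops > COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd >= COST_THRESHOLD_USD)

def is_covered_derivative(base_is_covered: bool, fine_tune_cost_usd: float) -> bool:
    """A fine-tune of a covered model is itself covered past the $10M mark."""
    return base_is_covered and fine_tune_cost_usd > FINE_TUNE_COST_THRESHOLD_USD

# Example: a frontier-scale run crosses both thresholds.
print(is_covered_model(training_flops=2e26, training_cost_usd=150_000_000))  # True
# A small consumer tool crosses neither.
print(is_covered_model(training_flops=1e23, training_cost_usd=2_000_000))    # False
```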

What Does It Require?

Safety and Security Protocols: Planning for the Worst

Before a developer begins training a covered model, SB 1047 requires them to develop a written Safety and Security Protocol (SSP) — a document that spells out how the company will identify and address potential hazards. The protocol must include specific cybersecurity measures to prevent the model from being stolen or manipulated by malicious actors, detailed testing procedures to probe the model for dangerous capabilities, and concrete safeguards against those capabilities being used for harm.
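As a rough illustration (my own sketch, not a template from the bill), the three required components of an SSP can be pictured as a simple document skeleton:

```python
from dataclasses import dataclass, field

@dataclass
class SafetyAndSecurityProtocol:
    """Hypothetical skeleton of an SSP's required components (illustrative only)."""
    # Cybersecurity: how model weights are protected from theft or tampering.
    cybersecurity_measures: list[str] = field(default_factory=list)
    # Testing: how the model is probed for dangerous capabilities before release.
    capability_testing_procedures: list[str] = field(default_factory=list)
    # Safeguards: how identified dangerous capabilities are mitigated.
    harm_safeguards: list[str] = field(default_factory=list)

ssp = SafetyAndSecurityProtocol(
    cybersecurity_measures=["access controls on weights", "insider-threat monitoring"],
    capability_testing_procedures=["red-team evaluations", "CBRN-uplift benchmarks"],
    harm_safeguards=["refusal training", "staged deployment with monitoring"],
)
```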

Importantly, the bill gives developers flexibility in how they design their SSPs. The law does not dictate exact engineering choices — it requires that companies exercise “reasonable care” and document their approach, in the same way that other industries are held to a professional standard of care. Developers can set the specifics of their own protocols, which allows compliance to be technically informed and agile rather than rigidly prescriptive.

SB 1047 largely clarifies what reasonable care means for frontier AI development, as opposed to creating new, potentially burdensome compliance standards.

— Brookings Institution Analysis, September 2024

The Kill Switch: A Power to Pull the Plug

One of the bill’s most discussed provisions requires developers to implement what is colloquially called a “kill switch” — more formally, the capability to promptly enact a full shutdown of any covered model under their control. This means that if a covered AI system is found to be actively causing harm or posing imminent danger, the company retains — and must be able to exercise — the ability to take it offline immediately.
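The bill specifies the capability, not the engineering. The sketch below shows one minimal, hypothetical pattern for such a capability, a controller that can take every tracked deployment offline at once; it is not a design the statute prescribes:

```python
# Minimal, hypothetical sketch of a "full shutdown" controller (illustrative only).

class ModelController:
    def __init__(self) -> None:
        # Track every instance of the covered model under the developer's control:
        # API endpoints, internal deployments, and in-progress training runs.
        self.instances: dict[str, bool] = {}  # instance id -> currently running?

    def register(self, instance_id: str) -> None:
        self.instances[instance_id] = True

    def full_shutdown(self) -> None:
        """Promptly take every tracked instance offline."""
        for instance_id in self.instances:
            # In a real system this would cut serving traffic, kill training
            # jobs, and revoke credentials; here we just flip the flag.
            self.instances[instance_id] = False

controller = ModelController()
controller.register("api-serving-us-west")
controller.register("training-run-42")
controller.full_shutdown()
assert not any(controller.instances.values())  # everything is offline
```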

Critics have called this provision alarming, but the logic is impeccable: if you build a system capable of catastrophic harm, you should be able to stop it. This is no different in principle from requiring industrial facilities to have emergency shutoffs, or nuclear power plants to have SCRAM systems that halt the reactor. The ability to stop a dangerous process is a basic engineering responsibility, not a radical imposition.

Third-Party Audits: Independent Eyes on the Work

Beginning January 1, 2026, SB 1047 requires developers of covered models to retain an independent, third-party auditor annually to assess compliance with the bill’s requirements. The auditor must produce a written audit report. Developers are required to keep an unredacted copy of that report for as long as their model is publicly or commercially available, plus an additional five years.

A redacted version of the safety protocol must also be made publicly available, allowing researchers, journalists, and regulators to understand — in general terms — what safeguards a developer has put in place, even if specific proprietary details are withheld. This balance between transparency and the protection of trade secrets is deliberate and reasonable.

Pre-Deployment Compliance Statements

Before a covered model is released for commercial or public use, a developer must submit a formal statement of compliance to the state, confirming that the company has taken reasonable care to prevent the model from posing an unreasonable risk of “critical harm.” A company’s Chief Technology Officer must sign an annual certification to the new Board of Frontier Models assessing the model’s potential risks and the effectiveness of its safety protocols. Knowingly submitting a false compliance statement exposes the company to civil penalties, a provision that ensures the certification is not a mere formality.

Incident Reporting: 72-Hour Notification Requirement

If a covered AI model is involved in a safety incident — defined as an event that suggests the model may pose a risk of critical harm — the developer must report it to the relevant state authority within 72 hours of becoming aware of it. This mirrors the breach notification requirements that have long applied to cybersecurity incidents, and for good reason: early awareness enables faster intervention.
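Operationally, the clock starts when the developer becomes aware of the incident. A trivial sketch of the deadline arithmetic (my own illustration, not statutory language):

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(became_aware_at: datetime) -> datetime:
    """Latest time an incident report may be filed, per the 72-hour window."""
    return became_aware_at + REPORTING_WINDOW

aware = datetime(2026, 3, 2, 9, 30, tzinfo=timezone.utc)
print(reporting_deadline(aware))  # 2026-03-05 09:30:00+00:00
```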

Defining "Critical Harm"

The Bill Is Not About Every Bad AI Outcome

It is essential for Californians to understand exactly what SB 1047 does — and does not — consider a “critical harm.” The definition is deliberately narrow, focused on genuinely catastrophic scenarios. The four categories of critical harm under the bill are as follows:

CBRN Weapons

Providing meaningful assistance in creating, deploying, or enhancing chemical, biological, radiological, or nuclear weapons capable of mass casualties.

Infrastructure Attacks

Enabling cyberattacks on critical infrastructure — power grids, water systems, financial networks — causing mass casualties or at least $500 million in damage.

Autonomous Crimes

Attacks conducted by an AI system acting with limited human oversight, causing mass casualties or at least $500 million in property damage or loss.

Comparable Grave Harms

Other harms to public safety and security of comparable severity — a "catch-all" provision for genuinely catastrophic outcomes not otherwise enumerated.

Important exclusion: The bill explicitly does not hold developers liable for critical harm caused by information their model outputs if that information is already reasonably publicly accessible to an ordinary person from non-AI sources. AI companies cannot be blamed for a harm they did not meaningfully enable.

The $500 million damage threshold is large, but not far-fetched: the CrowdStrike software outage of July 2024 is estimated to have caused upwards of $5 billion in economic damage. An AI-enabled cyberattack on a water system or hospital network could exceed that threshold while also causing irreversible harm to human lives. The bar is high, but it is not hypothetical.

Enforcement and Oversight

The Board of Frontier Models

SB 1047 creates a new state body, the Board of Frontier Models, composed of nine members representing the AI industry, the open-source community, and academia, appointed by California’s governor and legislature. The Board advises California’s Attorney General on potential violations, reviews the results of safety tests and incident reports, and issues guidance and best practices to developers.

This is not a powerful enforcement agency — the Board has no independent power to fine or prosecute developers. Its role is advisory, educational, and coordinative. This design reflects a deliberate “light touch” that the bill’s author, Sen. Wiener, has consistently emphasized.

The Attorney General’s Authority

Enforcement authority ultimately rests with California’s Attorney General. Crucially, following a significant amendment to the bill in August 2024, the AG can now only bring suit when critical harm is imminent or has already occurred, a change that narrowed the bill’s protective scope but made it more targeted. Penalties scale with the model: up to 10 percent of the training compute cost for a first violation and up to 30 percent for subsequent violations, which works out to $10 million and $30 million, respectively, for a model at the $100 million threshold. The AG can also seek injunctive relief, potentially ordering a company to cease operating or training a covered model if its safety measures are found to be dangerously insufficient.
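Because the caps scale with training cost, exposure grows with the model. A quick illustration of the arithmetic, using the percentage caps described above:

```python
# Penalty caps scale with the model's training compute cost (illustrative).
FIRST_VIOLATION_RATE = 0.10
SUBSEQUENT_VIOLATION_RATE = 0.30

def penalty_caps(training_cost_usd: float) -> tuple[float, float]:
    """Return (first violation cap, subsequent violation cap) in dollars."""
    return (training_cost_usd * FIRST_VIOLATION_RATE,
            training_cost_usd * SUBSEQUENT_VIOLATION_RATE)

first, subsequent = penalty_caps(100_000_000)     # model at the $100M threshold
print(f"first violation cap: ${first:,.0f}")      # $10,000,000
print(f"subsequent cap:      ${subsequent:,.0f}") # $30,000,000
```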

Whistleblower Protections

Recognizing that the people best positioned to know when a covered model is dangerous are often the engineers building it, SB 1047 includes robust whistleblower protections for employees and contractors at AI developers. This is a vital provision. Internal dissent is often the first line of defense against corporate negligence, and ensuring that employees can raise concerns without fear of retaliation strengthens the entire safety ecosystem.

CalCompute: A Public Benefit

Making AI Research Accessible to Everyone

SB 1047 is not only about constraining large corporations — it also contains a proactive public investment. The bill establishes CalCompute, a public cloud computing cluster associated with the University of California system. CalCompute would provide computing resources to startups, academic researchers, and community groups that lack access to the billion-dollar infrastructure of large tech companies.

This provision addresses a real equity gap in AI research. Today, meaningful work on frontier AI systems is largely the province of a handful of corporations with vast capital. CalCompute would democratize access, enabling researchers at UC Berkeley, UCLA, or a promising startup in Fresno to study and develop AI with the same tools available to Silicon Valley giants — and to do so in a publicly accountable framework rather than behind proprietary walls.

Why This Bill Matters for Californians

California Has Done This Before

This is not the first time California has stepped in to protect its residents from the unguarded deployment of powerful technology in the absence of federal action. California led the nation on consumer data privacy with the California Consumer Privacy Act. It led on net neutrality protections. It established vehicle emissions standards that became the national template. In each case, the state acted not to stifle innovation but to ensure that innovation served the public interest — and the private sector ultimately adapted.

SB 1047 follows this tradition. As Sen. Wiener has noted, the bill draws heavily on the framework established by President Biden’s October 2023 executive order on AI safety, and is motivated in large part by the continued absence of comprehensive federal legislation. California is not acting in isolation — it is filling a vacuum that Washington has yet to address.

The Stakes Are Not Hypothetical

The harms SB 1047 seeks to prevent — the use of AI to design biological weapons, coordinate cyberattacks on hospitals and power grids, or launch autonomous attacks on infrastructure — may sound like science fiction. They are not. AI systems have already been shown capable of providing meaningful assistance in designing dangerous pathogens. The acceleration of AI capability means that systems capable of these tasks may be only years, not decades, away.

In September 2024, at least 113 current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI signed an open letter to Governor Newsom in support of SB 1047. These are not alarmists on the fringe — they are the people who build these systems every day. When the engineers most intimately familiar with the technology tell us we need safeguards, we should listen.

The new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs.

— Dario Amodei, CEO of Anthropic, August 2024

What the Bill Does Not Do

Much of the opposition to SB 1047 has rested on mischaracterizations of its scope. The bill does not regulate small AI models, open-source hobbyist projects, or AI tools used in everyday applications. It does not mandate specific engineering choices or micromanage how a company builds its systems. It does not prevent AI development — it requires that the most powerful AI systems be developed thoughtfully, with documented safeguards in place.

Critics have also raised concerns about open-source AI models. The bill addresses this: the original developer of a covered open-source model bears responsibility for safety unless another party subsequently fine-tunes the model at a cost exceeding $10 million, in which case liability transfers. This is a pragmatic and reasonable approach to a genuinely complex question about responsibility in a distributed development ecosystem.

Conclusion

A Reasonable Bill for an Unreasonable Moment

We are living through an unprecedented moment in technological history. The systems being built today — at costs of hundreds of millions of dollars, with capabilities their own creators sometimes struggle to fully understand — represent a genuinely new category of technology. No prior regulatory framework was designed with them in mind.

SB 1047 does not pretend to solve every problem posed by artificial intelligence. It is targeted, modest in its mandates, and careful to avoid stifling the innovation that has made California the technological capital of the world. What it does do is establish that the companies building the most powerful AI systems in existence have a legal responsibility to take that power seriously — to plan for failure, to submit to independent scrutiny, and to maintain the ability to stop what they have started.

That is not a radical proposition. It is the minimum that Californians should expect of companies whose products may, in time, affect the safety and security of every person on earth. Governor Newsom now holds the decision. The case for signing SB 1047 is clear, and the cost of inaction may prove far higher than the cost of reasonable caution.