In late September 2025, a significant piece of legislation, SB 53, was signed into law in California. Known formally as the "Transparency in Frontier Artificial Intelligence Act," this law isn't just another regulatory hurdle; it's a deliberate step toward fostering accountability, competition, and public trust in the development of the world's most advanced AI models. It also opens a broader conversation about how we manage the immense power of these technologies.
California, being a global hub for AI innovation and boasting an economy that rivals entire nations, is uniquely positioned to set this precedent. With a substantial portion of the US AI market and a thriving tech sector, the state's regulatory direction carries considerable weight. SB 53 aims to apply to any AI developer seeking to generate revenue within the state, regardless of their physical presence or incorporation within the US. This broad reach underscores the interconnected nature of the AI landscape.
This law didn't emerge overnight. It is the product of more than a year of careful deliberation and a refinement of an earlier, more stringent proposal, SB 1047, which was vetoed in September 2024. That earlier bill faced criticism for introducing new legal liabilities for AI-induced harm, mandating "shutdown capabilities" for models, and requiring cloud providers to implement extensive "know-your-customer" checks. SB 53, by contrast, adopts a more measured approach, building on insights from expert reports and offering a flexible framework that lets the industry collaboratively shape best practices.
Crucially, SB 53 steers clear of imposing direct legal liability for AI-caused harm. Instead, the focus is squarely on transparency and procedural compliance. This distinction is vital for understanding the law's immediate impact.
Defining the 'Frontier' and the 'Developers'
At its core, SB 53 is designed to target the largest players and the most potentially catastrophic models, while avoiding overburdening smaller teams or less capable systems. The law defines a "large frontier developer" as a company with more than $500 million in revenue in the preceding year. A "frontier model" is defined by the scale of compute used in its training: more than 10^26 floating-point operations (FLOPs), a count that encompasses not just initial pre-training but also subsequent fine-tuning and reinforcement learning.
Most of the law's requirements are triggered only when both these thresholds are met. This means the bulk of the regulations apply to models trained by the biggest companies, utilizing vast computational resources. Interestingly, external analyses suggest that prior to 2025, no model had reached this 10^26 FLOP threshold. While a few advanced systems from companies like OpenAI and xAI have since crossed it, and no Chinese models currently meet this specific benchmark, many large developers are likely to reach it soon.
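To make the dual-threshold logic concrete, here's a minimal sketch in Python. Only the two numeric cutoffs come from the statute; the data model and function names are illustrative assumptions, not anything the law prescribes.

```python
# Illustrative sketch of SB 53's dual-threshold test. The numeric
# thresholds come from the law; everything else is hypothetical.

from dataclasses import dataclass

REVENUE_THRESHOLD_USD = 500_000_000   # "large frontier developer" cutoff
COMPUTE_THRESHOLD_FLOPS = 1e26        # "frontier model" training-compute cutoff


@dataclass
class Developer:
    name: str
    prior_year_revenue_usd: float


@dataclass
class Model:
    name: str
    # Total training compute, including pre-training, fine-tuning,
    # and reinforcement learning, per the statute's definition.
    training_flops: float


def is_large_frontier_developer(dev: Developer) -> bool:
    return dev.prior_year_revenue_usd > REVENUE_THRESHOLD_USD


def is_frontier_model(model: Model) -> bool:
    return model.training_flops > COMPUTE_THRESHOLD_FLOPS


def full_obligations_apply(dev: Developer, model: Model) -> bool:
    # Most of SB 53's requirements trigger only when BOTH thresholds are met.
    return is_large_frontier_developer(dev) and is_frontier_model(model)


if __name__ == "__main__":
    dev = Developer("ExampleLab", prior_year_revenue_usd=2e9)
    model = Model("example-model", training_flops=3e26)
    print(full_obligations_apply(dev, model))  # True
```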
It's important to note that a model falling below the FLOP threshold is not automatically risk-free. The law acknowledges that systems under this compute threshold can still pose significant risks, as demonstrated by the vulnerability of some models to sophisticated "jailbreaking" attacks. Furthermore, what constitutes "frontier" today will undoubtedly evolve: advances in algorithms and efficiency mean that more powerful, riskier models might be trainable with less computation in the future. To address this, SB 53 mandates that the California Department of Technology report annually, starting in 2027, on whether the thresholds for "frontier models" and "large frontier developers" need updating.
Addressing Catastrophic Risks
A central theme of SB 53 is its focus on the most severe potential risks associated with AI. "Catastrophic risk" is defined as a situation in which the development, storage, or use of a frontier model could materially contribute to the death of, or serious injury to, more than 50 people, or to more than $1 billion in economic damage. These risks can stem from AI assisting in the creation or release of chemical, biological, radiological, or nuclear (CBRN) weapons, from models enabling sophisticated cyberattacks without effective human oversight, or from AI systems acting beyond the control of their developers or users.
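As a rough illustration of how that definition decomposes, consider the sketch below. The casualty and damage thresholds and the three risk pathways come from the summary above; the code structure itself is hypothetical.

```python
# A rough sketch of the "catastrophic risk" screen described above. The
# thresholds and pathways come from the statute as summarized here;
# the code structure itself is hypothetical.

from enum import Enum, auto


class RiskPathway(Enum):
    CBRN_UPLIFT = auto()             # aid in creating or releasing CBRN weapons
    AUTONOMOUS_CYBERATTACK = auto()  # cyberattacks without effective human oversight
    LOSS_OF_CONTROL = auto()         # a model acting beyond developer or user control


DEATH_OR_INJURY_THRESHOLD = 50                 # more than 50 deaths or serious injuries
ECONOMIC_DAMAGE_THRESHOLD_USD = 1_000_000_000  # more than $1 billion in damage


def is_catastrophic_risk(pathway: RiskPathway,
                         projected_casualties: int,
                         projected_damage_usd: float) -> bool:
    # The definition couples an enumerated pathway with either the
    # casualty threshold or the economic-damage threshold.
    return (projected_casualties > DEATH_OR_INJURY_THRESHOLD
            or projected_damage_usd > ECONOMIC_DAMAGE_THRESHOLD_USD)
```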
This focus on catastrophic risks mirrors concerns voiced in policy discussions globally, including in China, where frameworks have also highlighted the potential for AI to accelerate the proliferation of CBRN weapons and facilitate complex cyber threats.
The Mandate for Frontier AI Frameworks
SB 53 requires large frontier developers to publish "frontier AI frameworks." These frameworks must detail how companies plan to test for catastrophic risks, implement safeguards, respond to dangerous incidents, and prevent unauthorized access to their systems. While the law doesn't prescribe specific standards or methodologies, once a company establishes and publishes a framework, it must adhere to it. Any modification requires publication of the updated framework, with justifications, within thirty days, and frameworks must be reviewed and updated at least annually.
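Because the timing obligations are concrete, they're easy to sketch. In the toy helper below, only the thirty-day and annual deadlines come from the law; the function names and structure are hypothetical.

```python
# Illustrative compliance-calendar helper for the framework obligations
# described above: publish the updated framework with justifications
# within thirty days of a modification, and review at least annually.
# Only the two deadlines come from the law; the helpers are hypothetical.

from datetime import date, timedelta


def update_publication_deadline(modified_on: date) -> date:
    # Updated framework plus justifications must be public within 30 days.
    return modified_on + timedelta(days=30)


def next_review_due(last_reviewed_on: date) -> date:
    # Frameworks must be reviewed and updated at least annually.
    return last_reviewed_on + timedelta(days=365)


if __name__ == "__main__":
    print(update_publication_deadline(date(2025, 10, 1)))  # 2025-10-31
    print(next_review_due(date(2025, 10, 1)))              # 2026-10-01
```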
This approach aligns with emerging industry practices, where leading developers are already accustomed to creating, maintaining, and disclosing their safety methodologies. These frameworks often cover key areas like risk measurement, mitigation, and incident response.
- Risk Measurement: Companies typically set capability thresholds and test for dangerous capabilities before model deployment. This includes assessing risks related to CBRN weapon development (through benchmarks like ChemBench and specialized tests for biological and nuclear capabilities) and agentic misuse (evaluating a model's ability to perform multi-step tasks or engage in simulated cyberattacks). The law also considers the risk of losing human control over AI systems, looking at behaviors like "specification gaming" or "sandbagging." A threshold-gated evaluation of this kind is sketched first after this list.
- Risk Mitigation: Common industry practices include adopting "defense-in-depth" strategies, implementing tiered access controls, continuously patching vulnerabilities, and establishing internal oversight teams. Practical measures involve input/output filtering, automated monitoring, limiting model autonomy, and alignment training. These layered approaches aim to counter both external misuse and internal control failures; the second sketch below shows the filtering layer in miniature.
- Incident Response: The law reflects a growing consensus on the need for proactive monitoring, rapid containment, and structured reporting mechanisms. Developers are expected to maintain 24/7 security logging, implement tiered incident response protocols, and potentially offer bug bounty programs to surface vulnerabilities in real-world use. In high-risk scenarios, frameworks may allow for temporary system shutdowns or access restrictions, with provisions for notifying or cooperating with government agencies if public safety is at stake; the third sketch below illustrates this tiered escalation.
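The three sketches below illustrate these practices in miniature. First, risk measurement as a threshold-gated pre-deployment check. The benchmark names and threshold values are placeholders, not anything SB 53 or any developer's framework prescribes; the gating pattern is the point.

```python
# First sketch: a threshold-gated pre-deployment evaluation. Eval names
# and thresholds are hypothetical placeholders; deployment is blocked if
# any dangerous-capability score exceeds its threshold.

from typing import Callable

# Hypothetical capability thresholds (fraction of dangerous tasks solved).
CAPABILITY_THRESHOLDS = {
    "cbrn_uplift": 0.20,
    "autonomous_cyber": 0.15,
    "sandbagging_detected": 0.05,
}


def run_evaluations(model, evals: dict[str, Callable]) -> dict[str, float]:
    # Each eval returns a score in [0, 1]; higher means more dangerous capability.
    return {name: evaluate(model) for name, evaluate in evals.items()}


def deployment_gate(scores: dict[str, float]) -> tuple[bool, list[str]]:
    # Returns (safe_to_deploy, list of thresholds that were exceeded).
    failures = [
        name for name, score in scores.items()
        if score > CAPABILITY_THRESHOLDS.get(name, 0.0)
    ]
    return (len(failures) == 0, failures)
```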
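Second, mitigation as defense-in-depth: layered input and output filters around a model call. The string-matching rules here are toy stand-ins; production systems typically use trained classifiers at each layer.

```python
# Second sketch: layered ("defense-in-depth") input/output filtering
# around a model call. The marker lists are toy placeholders.

BLOCKED_INPUT_MARKERS = ("synthesize the pathogen", "build a nuclear device")
BLOCKED_OUTPUT_MARKERS = ("step-by-step synthesis route",)


def input_filter(prompt: str) -> bool:
    return not any(marker in prompt.lower() for marker in BLOCKED_INPUT_MARKERS)


def output_filter(completion: str) -> bool:
    return not any(marker in completion.lower() for marker in BLOCKED_OUTPUT_MARKERS)


def guarded_generate(model_call, prompt: str) -> str:
    # Layer 1: screen the input before it reaches the model.
    if not input_filter(prompt):
        return "[request refused by input filter]"
    completion = model_call(prompt)
    # Layer 2: screen the output before it reaches the user.
    if not output_filter(completion):
        return "[response withheld by output filter]"
    return completion
```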
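Third, incident response as tiered escalation with always-on logging. The tier names and actions are hypothetical; what matters is the pattern of escalating response by severity, up to temporary shutdown and notification.

```python
# Third sketch: tiered incident routing with always-on logging. Tier
# names and actions are hypothetical illustrations of the practices above.

import logging
from enum import IntEnum

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("incident-response")


class Severity(IntEnum):
    LOW = 1       # log and monitor
    HIGH = 2      # restrict access to the affected capability
    CRITICAL = 3  # temporary shutdown; notify authorities if safety is at stake


def handle_incident(description: str, severity: Severity) -> str:
    # Every incident is logged, regardless of tier.
    log.info("incident: %s (severity=%s)", description, severity.name)
    if severity >= Severity.CRITICAL:
        return "shutdown_and_notify"
    if severity >= Severity.HIGH:
        return "restrict_access"
    return "monitor"
```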
It's worth noting that SB 53's requirement for public safety frameworks extends to internal use cases. The risks aren't confined to publicly accessible models; AI systems used within organizations can also present significant challenges.
As SB 53 takes effect, it marks a pivotal moment, signaling California's commitment to guiding the development of advanced AI responsibly. It's a framework designed to evolve alongside the technology itself, fostering a more transparent and accountable future for frontier AI.
