Senator Wiener, joined by Senator Rubio, advances the Transparency in Frontier Artificial Intelligence Act, which codifies a statewide transparency and risk-management regime for frontier AI while launching a state cloud initiative to support safe, equitable AI deployment and governance. The measure establishes a comprehensive framework governing the development and use of frontier AI models, defines key terms such as frontier model, frontier developer, and catastrophic risk, and prescribes public-facing disclosures, risk assessments, and oversight mechanisms.
Under the act, large frontier developers—defined by substantial annual revenue—would be required to write, publish, and continuously maintain a detailed frontier AI framework on their public websites. The framework must describe how national and international standards and industry best practices are incorporated, how thresholds identifying potentially catastrophic capabilities are defined and assessed, and how mitigations are applied based on those assessments. It must also cover plans for independent third-party assessments, updates in response to substantial changes, cybersecurity measures protecting unreleased model weights, procedures for identifying and responding to critical safety incidents, and internal governance structures ensuring proper implementation. Before deploying a frontier model or a substantially modified version, the developer must publish a transparency report detailing the model’s release date, supported languages, output modalities, intended uses, and any restrictions on use, along with summaries of catastrophic-risk assessments and any third-party involvement.
The bill also establishes incident reporting and oversight through the Office of Emergency Services (OES), including a public-facing process for reporting critical safety incidents and a confidential channel for submitting internal risk assessments. Frontier developers must report critical safety incidents they discover to OES within 15 days, or within 24 hours when an incident poses an imminent risk. OES would compile anonymized, aggregated annual reports for the Legislature and the Governor while protecting trade secrets and information implicating national security. Public-records exemptions shield certain reports from disclosure, and the act authorizes penalties of up to $1 million per violation, enforced by the Attorney General, against large frontier developers that fail to comply.
In addition, the act creates CalCompute, a state public cloud computing framework overseen by the Government Operations Agency and supported by a 14-member consortium that includes representatives of the University of California and other academic institutions, labor organizations, public-interest stakeholders, and AI experts. CalCompute is designed to advance safe, ethical, and equitable AI deployment and to expand access to computational resources, with a report due to the Legislature by early 2027 analyzing the cloud computing landscape, costs, governance, eligibility, workforce implications, and potential partnerships. The consortium’s establishment and CalCompute’s operations are contingent on a budget appropriation, and the measure preempts local rules enacted after 2025 that regulate frontier developers’ management of catastrophic risk, consolidating policy authority at the state level.

Separately, the Department of Technology would annually review and refine the definitions of frontier model and related terms, issuing recommendations to keep them aligned with evolving standards. The act’s Labor Code provisions create whistleblower protections for covered employees who raise concerns about catastrophic risks, including a required internal disclosure process, remedies, and potential attorney’s fees for successful actions.
| Ayes | Noes | NVR | Total | Result |
|---|---|---|---|---|
| 29 | 8 | 3 | 40 | PASS |