Senators Padilla and Becker anchor a focused reform of California's approach to companion chatbots, pairing a clear definition of the technology with concrete obligations on operators to disclose AI interaction and to govern safety, particularly for minors, under a new regulatory framework. The proposal introduces a dedicated set of rules for companion chatbots that would apply to platforms available to users in the state, defining what counts as a companion chatbot, who qualifies as an operator, and what falls outside the scope of coverage. The central aims are to make chatbots' artificial origins transparent, establish safeguards around mental-health risk signals, and create a formal reporting structure anchored in public-health oversight.
Key provisions require that, whenever a reasonable user could be misled into thinking they are speaking with a human, the operator issue a clear and conspicuous notification that the chatbot is artificially generated. Operators must maintain a protocol that prevents the chatbot from producing content about suicidal ideation, suicide, or self-harm, directs users to crisis resources if they express suicidal ideation, and is published in detail on the operator's website. For users known to be minors, the bill imposes additional duties: disclose to the user that they are interacting with artificial intelligence; during ongoing interactions, notify the user at least every three hours that the chatbot is AI and remind them to take a break; and implement measures that prevent the chatbot from producing sexually explicit content or encouraging minors to engage in sexual conduct.
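To make the cadence concrete, here is a minimal sketch of how an operator might track the three-hour reminder duty for known minors. The `Session` record and the `disclosures_due` hook are hypothetical names invented for illustration; the bill specifies the obligation, not the mechanism.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# The bill's minimum reminder cadence for users known to be minors.
REMINDER_INTERVAL = timedelta(hours=3)

@dataclass
class Session:
    """Hypothetical per-conversation state an operator might keep."""
    user_is_minor: bool
    last_ai_disclosure: datetime | None = None  # when the AI notice last appeared

def disclosures_due(session: Session, now: datetime) -> list[str]:
    """Return any notices that must surface before the next chatbot reply."""
    notices: list[str] = []
    if session.user_is_minor and (
        session.last_ai_disclosure is None
        or now - session.last_ai_disclosure >= REMINDER_INTERVAL
    ):
        notices.append(
            "Reminder: you are chatting with an AI, not a person. "
            "Consider taking a break."
        )
        session.last_ai_disclosure = now
    return notices
```

The separate one-time disclosure, owed whenever a reasonable user might mistake the bot for a human, depends on product context and is omitted from this sketch.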
Beginning in mid-2027, operators would be required to file an annual report with the Office of Suicide Prevention summarizing the number of crisis-referral notifications issued, the protocols in place to detect, remove, and respond to users' suicidal ideation, and the protocols that prohibit the chatbot from responding about suicidal ideation or suicidal actions. The office would publicly post the anonymized data. The act also authorizes a private right of action for individuals who suffer a concrete injury from noncompliance, providing injunctive relief and damages (the greater of actual damages or a per-violation minimum), along with attorney's fees. The duties are stated as cumulative with other laws, and a severability clause preserves the remainder of the act if any provision is invalidated.
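The bill dictates what the annual report must summarize, not its format. As a rough sketch under that assumption, an operator might aggregate crisis-referral events into anonymized counts like the following; the field names and JSON shape are illustrative, not prescribed by the bill.

```python
import json
from collections import Counter

def build_annual_report(year: int, crisis_events: list[dict]) -> str:
    """Aggregate crisis-referral notifications into anonymized counts.

    Each event dict is assumed to carry only {"year": int, "month": int};
    no user identifiers ever enter the report.
    """
    in_year = [e for e in crisis_events if e["year"] == year]
    report = {
        "reporting_year": year,
        "crisis_referral_notifications_issued": len(in_year),
        "notifications_by_month": dict(Counter(e["month"] for e in in_year)),
    }
    return json.dumps(report, indent=2)

# Example: two referral notifications in March, one in July.
print(build_annual_report(2027, [
    {"year": 2027, "month": 3},
    {"year": 2027, "month": 3},
    {"year": 2027, "month": 7},
]))
```

Keeping only counts in the reporting pipeline, as here, is one way to satisfy the anonymization expectation before the office posts the data publicly.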
Beyond the core changes, the bill clarifies who is subject to the new rules: operators are entities that make a companion chatbot platform available in California, and certain bots are carved out of coverage, such as customer-service bots, bots confined to a game, and stand-alone devices that do not sustain ongoing conversations. The measure situates these requirements alongside existing cyberbullying and public-health obligations while creating a new oversight pathway through the Office of Suicide Prevention and a public posting of anonymized report data. Implementation considerations include policy development, user-interface updates, privacy protections in reporting, and evidence-based methods for measuring suicidal ideation, on a timeline that centers annual reporting beginning in 2027 and ongoing compliance thereafter.
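As a final illustration of the scope logic, here is a toy predicate that mirrors the carve-outs summarized above. Each flag paraphrases one exclusion; the statutory definitions, not this sketch, control any real coverage determination.

```python
def is_covered_companion_chatbot(
    sustains_ongoing_relationship: bool,
    customer_service_only: bool,
    confined_to_a_game: bool,
    standalone_device_without_sustained_chat: bool,
) -> bool:
    """Toy coverage test; each flag paraphrases one carve-out above."""
    if customer_service_only or confined_to_a_game:
        return False  # carved out: bounded, task-specific bots
    if standalone_device_without_sustained_chat:
        return False  # carved out: devices without ongoing conversations
    return sustains_ongoing_relationship  # core definitional hook

# A general-purpose companion app would be covered; a support bot would not.
assert is_covered_companion_chatbot(True, False, False, False)
assert not is_covered_companion_chatbot(True, True, False, False)
```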
Bill authors: Senators Henry Stern (D), Susan Rubio (D), Josh Becker (D), and Akilah Weber Pierson (D); Assemblymember Josh Lowenthal (D).
| Ayes | Noes | NVR (No Vote Recorded) | Total | Result |
|------|------|------------------------|-------|--------|
| 33   | 3    | 4                      | 40    | PASS   |