Senator Weber Pierson's artificial intelligence oversight legislation establishes new requirements for health care AI systems in California, requiring developers and deployers to identify and address potential biases in clinical decision-making and resource allocation tools. The bill creates an ongoing duty for these entities to monitor AI systems for discriminatory impacts on protected characteristics and to implement reasonable mitigation measures when bias risks are detected.
The legislation institutes mandatory third-party auditing of health care AI systems beginning January 1, 2030. Developers must undergo annual independent assessments of their bias identification and mitigation practices, with audit summaries to be posted publicly on their websites. The requirements apply to a broad range of health care settings, including hospitals, clinics, physician offices, and group practices that utilize AI for clinical or resource decisions.
The bill defines key roles and responsibilities, specifying that organizations can serve as both developers and deployers of health care AI systems. It preserves existing discrimination protections by explicitly stating that compliance with the new requirements does not shield entities from bias-related legal claims. The provisions supplement, rather than replace, current state regulations governing artificial intelligence and automated decision systems in health care settings.
| Legislator | Role | Status |
| --- | --- | --- |
| Mike Gipson (D), Assemblymember | Bill Author | Not Contacted |
| Joaquin Arambula (D), Assemblymember | Committee Member | Not Contacted |
| Buffy Wicks (D), Assemblymember | Committee Member | Not Contacted |
| Lisa Calderon (D), Assemblymember | Committee Member | Not Contacted |
| Akilah Weber Pierson (D), Senator | Bill Author | Not Contacted |
| Ayes | Noes | NVR | Total | Result |
| --- | --- | --- | --- | --- |
| 13 | 0 | 2 | 15 | PASS |