Senator Weber Pierson's artificial intelligence oversight measure establishes new requirements for AI systems used in clinical decision-making and healthcare resource allocation, requiring developers and deployers to identify and address potential bias in these systems.
The legislation creates distinct obligations for entities that develop and deploy healthcare AI systems. Developers must assess their systems for potential bias before making them commercially available, while healthcare facilities and medical practices using these systems must conduct regular monitoring and take steps to mitigate any bias detected. Beginning January 1, 2027, developers must submit compliance reports to the State Department of Public Health before releasing new systems or substantial updates, while deployers must file annual reports documenting their monitoring efforts.
The measure defines bias as unintended adverse impacts on individuals' healthcare access, quality, or outcomes based on protected characteristics under state law. While the Department of Public Health will maintain a public database of compliance reports, it is not required to independently evaluate AI systems. The legislation operates alongside existing state regulations on AI in healthcare, including current requirements for disclosure of AI-generated patient communications.
| Legislator | Role |
|---|---|
| Al Muratsuchi (D), Assemblymember | Floor Vote |
| Sharon Quirk-Silva (D), Assemblymember | Floor Vote |
| James Gallagher (R), Assemblymember | Floor Vote |
| Mike Gipson (D), Assemblymember | Bill Author |
| Jacqui Irwin (D), Assemblymember | Floor Vote |
| Ayes | Noes | NVR | Total | Result |
|---|---|---|---|---|
| 15 | 0 | 0 | 15 | PASS |