Senator Weber Pierson's artificial intelligence oversight measure establishes new requirements for health care providers and AI developers to identify and address potential bias in clinical decision-making systems. The legislation creates ongoing obligations for entities that develop or deploy AI systems used in medical settings to actively monitor for discriminatory impacts on patients based on protected characteristics.
Under the measure, both developers and health care facilities must implement reasonable procedures to detect AI systems that could produce biased outcomes in clinical decisions or resource allocation. Organizations deploying these technologies must regularly evaluate system performance and take proportionate steps to mitigate any identified bias. The bill allows entities to serve as both developers and deployers of AI systems, while maintaining distinct responsibilities in each role.
The provisions supplement existing state regulations on AI and automated systems in health care without replacing current requirements. The bill specifies that compliance with its bias monitoring mandates does not provide a defense against discrimination claims. This maintains established legal protections while adding new oversight mechanisms specifically focused on preventing discriminatory impacts from AI deployment in medical settings.
| Legislator | Party | Chamber | Role |
| --- | --- | --- | --- |
| Mike Gipson | D | Assemblymember | Bill Author |
| Jacqui Irwin | D | Assemblymember | Committee Member |
| Rebecca Bauer-Kahan | D | Assemblymember | Committee Member |
| Cottie Petrie-Norris | D | Assemblymember | Committee Member |
| Buffy Wicks | D | Assemblymember | Committee Member |
| Ayes | Noes | NVR | Total | Result |
| --- | --- | --- | --- | --- |
| 16 | 0 | 0 | 16 | PASS |