Assembly Member Krell frames accountability for AI-related harm within California’s civil-liability framework by advancing a measure that bars the defense that an artificial intelligence acted autonomously. In actions where a defendant who developed, modified, or used artificial intelligence is alleged to have caused harm, the defendant may not assert that the AI autonomously caused the injury. The measure preserves other defenses, including evidence relevant to causation or foreseeability and evidence of comparative fault by other persons or entities.
The core provision adds a new Civil Code section defining artificial intelligence as an engineered or machine-based system that varies in its level of autonomy and can infer from input how to generate outputs that can influence physical or virtual environments. It prohibits the defense that the AI acted independently of human input in such cases, while allowing other affirmative defenses and evidence related to causation, foreseeability, and comparative fault. The text does not specify penalties or a regulatory enforcement mechanism and does not include an explicit effective date; enforcement occurs through the existing civil-litigation process. The measure interacts with a preexisting requirement noted in the legislative digest for AI developers to post training-data documentation, but it does not restate or modify that requirement.
Scope and interpretation focus on civil actions alleging harm caused by AI, with coverage shaped by the broad definition of AI. The prohibition targets only the autonomous-causation defense for defendants who developed, modified, or used AI, and does not preclude other defenses, including joint or comparative fault considerations. Courts will need to interpret “autonomous” in context, including how to distinguish autonomous action from human supervision or input. The measure’s implications concern litigation strategy and potential considerations for product-design risk management, while remaining within the framework of existing tort-law principles and the current regulatory landscape around AI transparency.
Maggy Krell (D), Assemblymember — Bill Author
| Ayes | Noes | NVR | Total | Result |
|------|------|-----|-------|--------|
| 78   | 0    | 2   | 80    | PASS   |