Senator Ashby’s proposal would create a cohesive framework to govern AI-generated digital replicas by pairing provider warnings with civil liability, evidentiary guidelines, and criminal liability, all centered on how digital representations of a person may be used. The bill introduces a new regulatory chapter for artificial intelligence technology providers, expands civil remedies to cover digital replicas, establishes a procedural pathway for handling AI-generated evidence in court, and adds criminal provisions related to false impersonation involving digital replicas. It also includes a contingent administrative provision that ties certain civil-code changes to the enactment and sequencing of another measure.
Under the provider regime, any entity that makes available to consumers AI technology capable of generating a digital replica would be required to issue a warning about potential civil or criminal liability for unlawful use, with the warning hyperlinked on any prompt page and included in the terms and conditions. The warning must be clear and conspicuous, and noncompliance could incur a civil penalty of up to ten thousand dollars per day, enforceable by a public prosecutor. The provision contains a carve-out for digital replicas created and used solely within a video game and not distributed outside the game. Definitions drawn from existing civil-code provisions anchor terms like “artificial intelligence,” “digital replica,” and “video game.”
The civil-liability framework expands protection for a person’s name, voice, signature, photograph, or likeness to include a digital replica, authorizing recovery of actual damages plus profits attributable to the unauthorized use, along with injunctive relief and other remedies. The regime preserves existing exemptions, such as news or political uses, and specifies how a defendant’s liability is calculated, including a requirement that use be removed or ceased within two business days after an injunctive order is served. The amendments rely on established definitions of “photograph,” “readily identifiable,” and related concepts, and cross-reference the digital-replica framework for consistency.
Separately, the bill adds a procedural track in the Evidence Code requiring the Judicial Council to review AI’s impact on the admissibility of proffered evidence and to develop rules addressing when AI-generated or manipulated content may be admissible, with a deadline for rulemaking by January 1, 2027. The Penal Code is likewise expanded through a new offense chapter that defines AI and digital replicas for purposes of false impersonation, extending the reach of existing offenses that use impersonation as an element to encompass digital replicas when the intent is to impersonate another. The bill also contains an administrative provision that makes the civil-code amendments contingent on sequencing with a companion measure and other timing conditions, potentially altering operative status depending on enactment order.
Taken together, the measure seeks to calibrate a multi-layer approach to AI content: provider warnings and penalties to deter unlawful use; expanded civil remedies and injunctive relief for rights holders and individuals depicted; a court-guided evidentiary framework to address AI-generated material; and criminal liability where digital replicas are used to impersonate. The proposal’s timing envisions warning implementation by late 2026, judicial-rule development by early 2027, and contingent operative status for civil-code changes tied to the enactment sequence with a companion bill, shaping how these provisions interact with existing statutes and enforcement practices.
Senator Angelique Ashby (D) | Bill Author | Not Contacted
| Bill Number | Title | Introduced Date | Status | Link to Bill |
|---|---|---|---|---|
| SB-970 | Artificial intelligence technology. | January 2024 | Failed | |
| Ayes | Noes | NVR | Total | Result |
|---|---|---|---|---|
| 37 | 0 | 3 | 40 | PASS |