AI Governance in the States: September 2025 Update

Written by: Rob Noble and Juliana Cipolla

Current Status Overview

As of September 2025, state-led AI regulation continues to accelerate. Over half of U.S. states have enacted at least one AI or algorithmic accountability law, with numerous others considering similar measures. This surge in state-level activity follows the collapse of a federal proposal that would have imposed a 10-year moratorium on new state AI regulations, ensuring that states remain at the forefront of AI governance.

Recent legislative developments in California, Texas, Tennessee, Michigan, New York, and Colorado illustrate the diverse approaches states are taking to address AI-related challenges.

Newly Signed Laws

California: Court System Adopts Generative AI Policies

California’s court system now requires each of its courts, including the Superior Courts, Courts of Appeal, and Supreme Court, to either prohibit generative AI entirely or adopt a formal usage policy by December 15, 2025. These policies must address confidentiality, bias, and human oversight, and must require disclosure when AI-generated content is submitted. This makes California the first state to set specific operational standards for generative AI use by court staff or judicial officers.

Texas: Responsible AI Governance Act (HB 149)

Signed into law on June 22, 2025, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) establishes rules for AI use in state government, aiming to facilitate and advance the responsible development and use of AI while providing transparency and protection from known and reasonably foreseeable risks associated with AI systems. It also creates an AI advisory council. The law takes effect on January 1, 2026.

Tennessee: Expanded Deepfake Remedies

As of July 1, 2025, Tennessee’s Preventing Deepfake Images Act (SB 1346/HB 1299) allows individuals to bring civil actions against anyone who discloses AI-generated intimate images without consent. Remedies include damages, among them any profits the defendant made from creating, developing, or disclosing the depiction, as well as attorney’s fees. The law also imposes criminal penalties, ranging from Class E to Class C felonies, for disclosing, threatening to disclose, or soliciting the disclosure of such depictions. Consent to create content does not equal consent to share it, and courts can issue injunctions to prevent further disclosure while protecting victims’ identities. This law reflects growing state-level efforts to regulate AI harms and protect privacy.

Key Updates on Pending Legislation

California

California lawmakers remain active on multiple AI fronts:

  • SB 11 (Digital Replicas & False Impersonation): Expands protections against unauthorized AI-generated digital replicas. The Assembly concurred in amendments and ordered the bill to engrossing and enrolling on September 13.
  • SB 833 (AI in Critical Infrastructure): Requires human oversight whenever AI is used in critical infrastructure sectors, including, but not limited to, energy, transportation, and communications. Hearings were held in August, but the bill was postponed late that month for further consideration.
  • AB 853 (AI Transparency Act): Requires large platforms that produce a generative AI system to make an AI detection tool available, and to detect whether content’s provenance data complies with specifications adopted by an established standards-setting body. The bill remains in Senate Appropriations as of late August.

Colorado

Colorado’s Artificial Intelligence Act, originally passed in May 2024, set a compliance date of February 2026. In August, Governor Jared Polis signed legislation delaying the initial compliance date to June 30, 2026, giving organizations more time to develop governance programs and risk-management frameworks. This reflects an emerging theme of phased implementation.

Michigan

The legislature is considering HB 4668, which would impose safety obligations on developers of large AI systems to manage critical risks, defined as risks that a system could endanger 100 or more people or cause $1 billion or more in damages, and to provide related protections. The bill remains in committee, but its introduction reflects growing attention to developer-side accountability.

New York

New York continues to weigh the RAISE Act, a proposal requiring major AI developers to establish safety and security protocols. While still pending, it represents one of the most ambitious attempts at regulating “frontier” AI models at the state level.

Legislative Trends & Observations

1. From Principles to Enforcement

California’s court policies and Texas’ TRAIGA demonstrate a shift from broad AI principles to operational and enforceable requirements. This signals that more states will move beyond “framework” bills into detailed mandates.

2. Phased Compliance

Colorado’s delay highlights a trend toward staggered compliance timelines. Lawmakers recognize the burden of implementation and are providing businesses more lead time, while still maintaining accountability.

3. Focus on Developers and Frontier Models

Bills in Michigan and New York emphasize developer-level safety obligations and incident reporting for advanced systems. This mirrors global discussions about how to regulate frontier models.

4. Federal Preemption Rejected

Congress declined to adopt a proposed 10-year moratorium on new state AI laws. Without federal preemption, states will continue to drive practical AI governance, and companies should expect a patchwork to remain the norm.

What This Means for Businesses

The state-led approach to AI regulation is entering a new phase: implementation and enforcement. Businesses should prepare by:

  • Mapping state-by-state obligations for AI systems
  • Prioritizing the strictest applicable standard in multi-state operations
  • Developing disclosure, provenance, and risk-management protocols aligned with upcoming deadlines (especially Colorado and Texas); a minimal deadline-tracking sketch follows this list
  • Monitoring agency rulemaking in states with newly enacted laws
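
For teams that track these obligations programmatically, the sketch below shows one minimal way to represent the deadlines discussed in this update as data. The dates are taken from the laws above; the Obligation record, the upcoming helper, and all field names are illustrative assumptions, not a prescribed compliance tool.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record type for tracking state AI obligations;
# field names are illustrative, not drawn from any statute.
@dataclass
class Obligation:
    state: str
    law: str
    deadline: date
    note: str

# Deadlines referenced in this update (as of September 2025).
OBLIGATIONS = [
    Obligation("CA", "Court generative AI policies", date(2025, 12, 15),
               "Courts must prohibit generative AI or adopt a usage policy"),
    Obligation("TX", "TRAIGA (HB 149)", date(2026, 1, 1),
               "Effective date of the Texas Responsible AI Governance Act"),
    Obligation("CO", "Colorado AI Act", date(2026, 6, 30),
               "Delayed initial compliance date"),
]

def upcoming(obligations, today, within_days=365):
    """Return obligations whose deadlines fall within the given window."""
    return [o for o in obligations
            if 0 <= (o.deadline - today).days <= within_days]

if __name__ == "__main__":
    for o in sorted(upcoming(OBLIGATIONS, today=date(2025, 9, 15)),
                    key=lambda o: o.deadline):
        print(f"{o.deadline:%Y-%m-%d}  {o.state}: {o.law} ({o.note})")
```

Because several of these dates may shift, as Colorado’s already has, keeping them as data rather than hard-coded logic makes a tracker like this easy to update as rulemaking proceeds.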

Conclusion

AI governance in the U.S. is rapidly evolving. With California courts enforcing operational policies, Colorado delaying but not abandoning compliance deadlines, and Texas enacting a broad governance framework, the landscape is becoming more structured and enforceable. Meanwhile, targeted bills in Michigan, New York, and Tennessee reflect continuing innovation in state approaches.

Until Congress adopts a comprehensive national framework, the states will remain the laboratories of AI regulation. For businesses, the challenge is not only keeping pace with these developments but also building flexible compliance programs that anticipate where the law is heading.

The Beckage Firm advises organizations on AI governance, compliance, and risk management across multiple jurisdictions. Reach out to our team for guidance on building an AI compliance roadmap tailored to your business.


**Attorney Advertisement**
