
AI GOVERNANCE IN THE STATES: FEBRUARY 2026 UPDATE

Written by: Lee Merreot, Esq., CIPM, CIPP/US, CIPP/E, CDPO

 

One Year After the Executive Order Shift — Are States Falling in Line or Forging Ahead?

I. Federal Posture: A Year Into “Removing Barriers to American Leadership in AI”

In January 2025, the Trump Administration issued Executive Order 14179, “Removing Barriers to American Leadership in AI,” rescinding the 2023 Biden AI Order and initiating a consolidated, competitiveness‑first federal AI strategy.[1]

By July 2025, the White House published America’s AI Action Plan, outlining more than 90 actions to support infrastructure, innovation, and international competitiveness.

Federal‑state tension escalated in December 2025, when the White House released "Ensuring a National Policy Framework for Artificial Intelligence," arguing that conflicting state‑level AI rules risk creating a harmful compliance patchwork.[2]

A corresponding fact sheet suggested the Administration may consider litigation and funding mechanisms to counter state AI regimes seen as obstructive.[3]

Federal takeaway: The White House has not softened its approach. Its objective remains clear—streamline federal AI oversight and curb diverging state obligations.

What This Federal Shift Actually Signals for AI Governance
From a practical legal standpoint, this isn’t really about loosening the rules. It’s more about who gets to set them. By pulling back the prior Executive Order and talking about “removing barriers,” the Administration isn’t getting rid of compliance requirements per se. Instead, it’s trying to shift regulatory authority back to the federal level and reshape oversight to support national competitiveness goals.

What stands out most isn’t the details of the AI Action Plan, but the change in tone of the December 2025 framework. The Administration isn’t just asking states to align anymore—it’s suggesting that state‑driven AI rules could get in the way of national goals. For organizations, this creates a tricky situation: state‑level rules keep piling up, while federal messaging casts doubt on whether those rules should exist at all.

II. State Activity Since the Executive Order: Delay, Expansion, and Diversification—Not Deference

Despite strong federal messaging, states have not stepped aside. Instead, legislatures are delaying, recalibrating, or expanding their AI regimes—sometimes in direct contrast to federal goals.

States Are Not Ignoring the Federal Signal—They Are Interpreting It Differently

Instead of dropping AI regulation altogether, state lawmakers are moving cautiously. They’re rolling things out more slowly, limiting how far the rules go, or breaking requirements into smaller pieces—keeping control while avoiding immediate political pushback.

We’ve seen this before with privacy and cybersecurity laws. When federal override is possible but uncertain, states dig in by putting rules in place that are defensible and difficult to undo after the fact.

1. Colorado: Delayed, Not Deterred

Colorado's landmark AI Act, SB 24‑205, was not rolled back under federal pressure. Instead, SB25B-004, enacted in August 2025, postponed the implementation date to June 30, 2026, allowing time for refinement without abandoning the framework.[4]

Why Colorado’s Delay Is a Strategic Move, Not a Retreat

Colorado's decision to delay enforcement without dismantling its AI Act is particularly instructive. From a governance standpoint, the delay reflects a calculated expectation that federal pressure may ease with time. By pushing the effective date while preserving the statutory framework, Colorado maintains its regulatory leverage while buying time to observe federal developments and industry response.

For regulated entities, this should not be interpreted as relief. Delays of this nature historically lead to more refined, not weaker, obligations. Organizations that treat the postponement as a pause rather than a preparation window risk finding themselves unready when enforcement resumes—often with clearer expectations and less tolerance for immaturity.

2. California: Comprehensive Automated Decision-Making Technology (ADMT) Regulations Move Forward

In September 2025, the California Privacy Protection Agency (CPPA) finalized new regulations addressing cybersecurity audits, risk assessments, insurance data use, and ADMT. These take effect:

  • January 1, 2026 — core CCPA‑related updates
  • January 1, 2027 — ADMT‑specific obligations[5]

California Is Quietly Setting the National Baseline for AI Risk Management

California’s ADMT regulations represent more than another compliance layer—they operationalize AI governance concepts that many organizations still treat as aspirational. Cybersecurity audits, risk assessments, and decision transparency are no longer abstract best practices; they are becoming enforceable system design requirements.

Importantly, California's phased effective dates signal an expectation that organizations will need time to engineer governance into their AI lifecycle, not simply draft policies. From our experience, this is where many organizations struggle: translating legal obligations into technical controls, audit trails, and model governance processes that can withstand regulatory scrutiny.

Even for organizations without a California presence, these requirements are likely to influence internal standards, vendor expectations, and downstream contracting norms nationwide.

3. Washington State: Multi‑Bill Expansion

Washington continued expanding its AI governance approach across multiple bills, including:

  • HB 2157 (high‑risk AI)
  • HB 1168 (transparency)
  • HB 2503 (training data regulation)
  • SB 6284 (consumer protection in automated systems)[6]

This diversification shows an expansionist, rather than a deferential, posture.

Washington’s Fragmented Approach Reflects AI’s Risk Profile Across Various Use Cases

Washington’s multi-bill strategy emphasizes a key regulatory reality: AI risk does not fit neatly into a single statutory box. By addressing high-risk uses, transparency, training data, and consumer protection through separate legislative vehicles, the state is implicitly acknowledging that AI governance intersects with privacy, consumer protection, labor, and civil rights law simultaneously.

For organizations, this fragmentation increases compliance complexity but also offers insight into regulatory priorities. Regulators are less focused on model frameworks and more concerned with impact: how AI systems are trained, deployed, and challenged when they affect individuals.

4. New York City: Automated Employment Decision Tool (AEDT) Rules Still in Force

New York City’s Local Law 144, requiring bias audits and notices for automated employment decision tools, continues active enforcement by the Department of Consumer and Worker Protection (DCWP).[7]

NYC Enforcement Shows Where AI Regulation Becomes Operational

New York City's continued enforcement of AEDT requirements provides a real-world preview of how AI laws are enforced once they leave the legislative phase. Bias audits, notice obligations, and documentation requirements are not being treated as symbolic; they are being examined as operational controls.

What we see consistently is that organizations struggle less with the concept of fairness and more with the mechanics of proving it. Enforcement mechanisms like NYC’s expose gaps between legal intent and technical execution, particularly where third-party tools or legacy hiring systems are involved.

5. National Trends: Widespread Legislative Momentum

According to the National Conference of State Legislatures (NCSL), all 50 states, Washington D.C., and U.S. territories introduced AI legislation in 2025, with more than 1,000 AI‑related bills introduced.[8]

State takeaway: Legislatures are not following federal preemption momentum—they’re building out their own regulatory structures.

III. Newly Signed and Emerging State Laws (January–February 2026)

Texas: Responsible AI Governance Act (HB 149)

Effective January 1, 2026, HB 149 governs AI use in state government, prohibits certain high‑risk uses, and establishes an advisory council.

Michigan

HB 4668 (critical‑risk AI controls) remains under committee review but continues to gain legislative attention.

New York (State)

The Responsible AI Safety and Education Act (RAISE Act), aimed at frontier model safety, remains pending while legislative staff draft revisions.

IV. Broader Trends: What 2026 Means for Businesses

What These Trends Mean in Practice

Taken together, these developments point to a regulatory environment where AI governance is no longer optional, experimental, or purely ethical. It is becoming a core operational risk discipline, akin to privacy, cybersecurity, and financial controls.

Organizations that approach AI compliance as a series of jurisdiction-specific checklists will struggle to scale. Those that invest in adaptable governance frameworks—capable of mapping risk assessments, documentation, and accountability across multiple areas—will be better positioned to respond as federal and state priorities continue to collide.

AI compliance should be approached using the same frameworks and disciplines that organizations already apply to privacy compliance. As organizations adopted privacy by design, they moved away from reactive, after‑the‑fact controls and instead embedded privacy considerations into the design and operation of systems from the outset. AI governance should follow a similar trajectory. Organizations that treat AI compliance like privacy governance will be better positioned to manage risk, demonstrate defensibility, and adapt to evolving regulatory and stakeholder expectations.

 

  1. Expect Divergence, Not Uniformity

Federal ambitions for a unified national AI policy remain untested, while states maintain or expand their own mandates—making multi‑jurisdictional governance programs essential.

  2. Effective Dates Are Stacking Up

Organizations must track key timelines:

  • Colorado: June 30, 2026
  • California ADMT: January 1, 2027 (with related CCPA updates effective January 1, 2026)
  • NYC AEDT: ongoing enforcement

  3. Sector‑Specific Requirements Are Growing

Expect obligations around:

  • AI impact assessments
  • Dataset documentation
  • Bias audits
  • Appeal rights

  4. Federal–State Conflict May Escalate

As discussed above, the Administration’s actions taken in December 2025 may preview potential preemption arguments through litigation or grant‑related leverage.

V. Final Assessment: Is Anyone “Following” the Federal Executive Order?

  • State Legislatures: No, at least not those at the forefront of AI legislation. Their actions show recalibration (Colorado), expansion (California), diversification (Washington), and active enforcement (NYC).
  • The White House: Yes. Its messaging remains consistent and increasingly assertive about limiting diverging state policies.

The Beckage Firm Perspective

From our vantage point, the question is no longer whether AI regulation will stabilize, but whether organizations will operationalize their AI policies before regulators finalize, implement, and ultimately enforce relevant legislation. The current moment rewards proactive governance: understanding where AI is used, how decisions are made, and how risk is identified and mitigated across the system lifecycle.

The organizations that treat 2026 as a planning year rather than a waiting period will be the ones best positioned to navigate both enforcement and innovation in the years ahead.

References

[1] The White House, Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence (Jan. 23, 2025), https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/

[2] The White House, Ensuring a National Policy Framework for Artificial Intelligence (Dec. 11, 2025), https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/

[3] The White House, Fact Sheet: President Donald J. Trump Ensures a National Policy Framework for Artificial Intelligence (Dec. 11, 2025), https://www.whitehouse.gov/fact-sheets/2025/12/fact-sheet-president-donald-j-trump-ensures-a-national-policy-framework-for-artificial-intelligence/

[4] Colorado General Assembly, SB25B-004 (2025), https://leg.colorado.gov/bills/sb25b-004

[5]California Privacy Protection Agency, Final Regulations on CCPA Updates, Cybersecurity Audits, Risk Assessments, and Automated Decisionmaking Technology (Sept. 2025), https://cppa.ca.gov/announcements/2025/20250923.html

[6] Washington Legislature, HB 2157, HB 1168, HB 2503, SB 6284 (2025–26), https://app.leg.wa.gov/billsummary?BillNumber=2157&Year=2025, https://app.leg.wa.gov/BillSummary/?BillNumber=1168&Year=2025, https://app.leg.wa.gov/BillSummary/?BillNumber=2503&Year=2025, https://app.leg.wa.gov/billsummary?BillNumber=6284&Year=2025

[7]New York City Department of Consumer and Worker Protection, Automated Employment Decision Tools Enforcement Materials, https://www.nyc.gov/site/dca/about/automated-employment-decision-tools.page

[8]National Conference of State Legislatures, Artificial Intelligence 2025 Legislation, https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation

 
