Executive Order on AI: Unifying Policy or Undermining Oversight?

Written by: Lee Merreot, Esq., CIPM, CIPP/US, CIPP/E, CDPO

The Problem as Seen by the Trump Administration

The Trump Administration views the current landscape of AI regulation as fragmented and inefficient. With over 1,000 AI-related bills introduced across all 50 states in 2025, businesses face a patchwork of compliance obligations that vary widely by jurisdiction.[1] These state-level rules—ranging from algorithmic bias checks to disclosure mandates—are seen as costly, confusing, and innovation-stifling, particularly for startups and smaller firms. The concern is that this regulatory maze could slow America’s progress in the global AI race, especially against competitors like China, which operates under a single centralized framework.[2]

Purpose and Language of the Executive Order

Signed on December 11, 2025, the Executive Order entitled, “Eliminating State Law Obstruction of National Artificial Intelligence Policy” sets forth a clear objective: “to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” Key provisions include the creation of an AI Litigation Task Force within the Department of Justice to challenge state laws deemed “onerous” or inconsistent with federal policy; evaluation of state AI laws by the Commerce Department; funding leverage for states enforcing restrictive AI laws; agency directives for the FTC and FCC; and a legislative roadmap for a national AI framework.[3]

Why Do We Have This Patchwork?

The roots of today’s fragmented regulatory environment lie in the absence of comprehensive federal AI legislation. After generative AI tools like ChatGPT captured global attention in late 2022, states moved quickly to fill the gap. By mid-2025, 28 states had enacted targeted AI laws addressing issues such as algorithmic bias, transparency, and consumer protection.[4] California requires safety testing for large AI models, while Colorado mandates safeguards against “algorithmic discrimination.” These measures reflect legitimate concerns but create compliance headaches for businesses operating nationally.[5]

The Broader Patchwork: Data Privacy Laws in the U.S.

While the Executive Order focuses squarely on AI, it’s important to recognize that fragmentation isn’t unique to AI regulation. The U.S. also faces a growing patchwork of state-level data privacy laws, creating parallel compliance challenges for businesses.

  • 19 states now have comprehensive privacy statutes, each with unique definitions, rights, and enforcement mechanisms. These include California’s CCPA, Virginia’s VCDPA, Colorado’s CPA, and others—with no federal privacy law on the horizon.[6]
  • Beyond general privacy laws, states have enacted specialized statutes targeting subsets of data:
      o Children’s data: Laws like California’s Age-Appropriate Design Code impose heightened obligations for minors.[7]
      o Health-related data outside HIPAA: Washington’s My Health My Data Act expands protections for non-PHI health information.[8]
      o Biometric data: Illinois’ BIPA and similar laws in Texas and Washington regulate facial recognition, fingerprints, and other biometric identifiers.[9]

  • This mosaic means companies operating nationally must navigate multiple overlapping frameworks, each with distinct notice, consent, and security requirements.

Why This Matters for AI Governance

AI systems often process personal data—including sensitive categories covered by these laws. If state privacy statutes remain intact while AI rules are preempted, businesses could face dual compliance obligations: lighter AI governance federally but stringent privacy mandates at the state level. This tension underscores the need for integrated compliance strategies that address both algorithmic accountability and data protection.

Risks and Downsides of the Executive Order

  • Legal durability risk: Preempting state AI laws via executive action invites constitutional challenges. Broad displacement of state consumer protection statutes typically requires congressional enactment. Litigation will likely test whether the order’s directives exceed executive authority, potentially creating multi-year uncertainty for businesses planning compliance.
  • Spending Clause and Broadband Equity, Access, and Deployment (BEAD) funding leverage: Conditioning access to federal broadband funds on state AI policy alignment may spur suits by states claiming coercion or lack of a sufficiently related purpose. If funding conditions are enjoined or narrowed, companies counting on faster infrastructure build-outs to support AI growth may see project timelines wobble.
  • Regulatory whiplash at agencies: If the FCC pursues a federal disclosure standard for AI outputs (e.g., in ads), expect First Amendment and statutory authority challenges, as has occurred in other disclosure contexts. Interim guidance or rescissions can fluctuate with litigation outcomes, further complicating compliance.
  • Erosion of state algorithmic accountability: Many enacted state regimes impose specific obligations, such as risk management, impact assessments, consumer notices, and the right to appeal/human review, which help organizations detect harmful model behavior. Undermining these guardrails increases exposure under existing civil rights and consumer laws.
  • FTC enforcement persists: The FTC has warned AI builders to honor privacy commitments and avoid deceptive claims; it has required deletion/disgorgement of models trained on unlawfully obtained data. Weakening state AI safeguards does not immunize firms from FTC enforcement or private suits.[10]
  • Enterprise planning risk: Businesses that prematurely roll back impact assessments, proper disclosures, or governance protocols risk rework and liability when courts or a future Congress swing the pendulum back. Maintain “no-regrets” controls (e.g., bias testing, traceability, human-in-the-loop) even if a national standard temporarily relaxes them.

Data Security and Privacy Risks

  • Training data governance: Light-touch federal policy could embolden the ingest of sensitive or proprietary data into foundation models without clear consent or provenance controls. This heightens breach response and “algorithmic disgorgement” risk if regulators find data use violates promises to consumers or the law.
  • Cross-functional data leakage: Reduced disclosure obligations increase the chance that chatbots or copilots will inadvertently expose sensitive information. Businesses should apply purpose limitation, retention, transparency, and compartmentalization regardless of the EO’s deregulatory tilt.
  • Sector-specific liability: Colorado’s AI Act (SB24-205) mandates risk management programs, impact assessments, consumer notices, and human appeal for “consequential decisions.” If such frameworks are chilled or delayed, firms should voluntarily keep equivalent controls to reduce exposure under ECOA/FHA/ADA and state UDAP laws.[11]
  • Advertising and political content risk: South Dakota SB 164 requires disclosures for deepfake election content; the FCC may add federal overlays. Companies producing creative political ads should engineer watermarking and disclosure pipelines now.[12]
  • Healthcare and mental health chatbots: Utah’s 2025 AI amendments impose disclosures and data use limits for mental health chatbots. Scaling such tools nationally without comparable safeguards invites enforcement and reputational harm.[13]

Concrete Use Cases Affected if State Laws Are Curbed

  • Credit underwriting: Under Colorado SB24-205, deployers making “consequential decisions” must run impact assessments, notify consumers when AI substantially factors into the decision, correct bad data, and provide human appeal. Without these, lenders risk disparate impact claims even in a preempted environment.[11]
  • Hiring and promotion tools: State frameworks often require transparency and bias testing for automated employment decisions. Companies should preserve audit trails, bias metrics, and candidate notices.
  • Healthcare utilization management: Removing state guardrails increases wrongful-denial risk and class action exposure. Maintain clinical review and appeal pathways.
  • Political advertising workflows: South Dakota’s deepfake law requires disclosures within 90 days of an election. Engineering time-stamped labels and provenance logs is prudent nationwide to avoid election law pitfalls and prepare for potential FCC overlays.[12]

Impact on Businesses Using AI

A national standard could lower the compliance burden for businesses and harmonize requirements, but the transition period will be litigious and uneven. Our guidance:

  • design to the strictest current state controls (e.g., Colorado SB24-205),
  • preserve documentation and assessment pipelines,
  • align privacy commitments and vendor terms with FTC guidance, and
  • monitor FCC/FTC proceedings that may introduce sector-specific disclosures or enforcement priorities.

Call to Action

The Beckage Firm is actively monitoring changes and reactions to this Executive Order. If you have questions about how this new EO could affect your business and how you should specifically react and adjust your strategy, please reach out to The Beckage Firm.

Attorney Advertisement

References

[1] Justine Gluck et al., Legislative Approaches to AI in 2025, Future of Privacy Forum (Oct. 2025), https://fpf.org/wp-content/uploads/2025/10/The-State-of-State-AI-2025-SUPPLEMENTAL.pdf

[2] The White House, Ensuring a National Policy Framework for Artificial Intelligence, The White House (Dec. 11, 2025), https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/

[3] The White House, Fact Sheet: President Donald J. Trump Ensures a National Policy Framework for Artificial Intelligence, The White House (Dec. 11, 2025), https://www.whitehouse.gov/fact-sheets/2025/12/fact-sheet-president-donald-j-trump-ensures-a-national-policy-framework-for-artificial-intelligence/

[4] National Conference of State Legislatures, Artificial Intelligence 2025 Legislation, NCSL (July 10, 2025), https://www.ncsl.org/technology-and-communication/artificial-intelligence-2025-legislation

[5] Laurie Harris, Regulating Artificial Intelligence: U.S. and International Approaches and Considerations for Congress, Congress.gov (June 4, 2025), https://www.congress.gov/crs-product/R48555

[6] Caroline Kibby, Emerging trends, insights from public enforcement of US state privacy laws, IAPP (June 30, 2025), https://iapp.org/news/a/emerging-trends-insights-from-public-enforcement-of-us-state-privacy-laws/

[7] Office of Governor Gavin Newsom, Governor Gavin Newsom Signs First-in-Nation Bill Protecting Children’s Online Data and Privacy, California Governor (Sept. 15, 2022), https://www.gov.ca.gov/2022/09/15/governor-newsom-signs-first-in-nation-bill-protecting-childrens-online-data-and-privacy/

[8] Washington State Legislature, Chapter 19.373 RCW: Washington My Health My Data Act, Washington State Legislature (2023), https://apps.leg.wa.gov/rcw/default.aspx?cite=19.373

[9] Illinois General Assembly, Public Act 095-0994 — Biometric Information Privacy Act, Illinois General Assembly (2008), https://www.ilga.gov/legislation/publicacts/fulltext.asp?Name=095-0994
Texas Legislature, Business & Commerce Code Chapter 503 — Capture or Use of Biometric Identifiers Act, Texas Legislature Online (2025), https://statutes.capitol.texas.gov/Docs/BC/htm/BC.503.htm
Washington State Legislature, Chapter 19.375 RCW — Biometric Identifiers, Washington State Legislature (2017), https://apps.leg.wa.gov/rcw/default.aspx?cite=19.375

[10] Federal Trade Commission, AI Companies: Uphold Your Privacy and Confidentiality Commitments, FTC (Jan. 9, 2024), https://www.ftc.gov/policy/advocacy-research/tech-at-ftc/2024/01/ai-companies-uphold-your-privacy-confidentiality-commitments

[11] Colorado General Assembly, SB24-205 — Consumer Protections for Artificial Intelligence, Colorado General Assembly (2024), https://leg.colorado.gov/bills/sb24-205

[12] South Dakota Legislature, SB 164 — Prohibit the Use of a Deepfake to Influence an Election, South Dakota Legislature (2025), https://sdlegislature.gov/Session/Bill/26046/282943
Federal Communications Commission, FCC Proposes Disclosure Rules for the Use of AI in Political Ads, FCC (Sept. 4, 2024), https://www.fcc.gov/document/fcc-proposes-disclosure-rules-use-ai-political-ads

[13] Utah Legislature, HB 452 — Artificial Intelligence Amendments (Enrolled), Utah State Legislature (2025), https://le.utah.gov/Session/2025/bills/enrolled/HB0452.pdf
