Ten days from now, the United States Department of Commerce will publish a document that could reshape the AI regulatory landscape for the next decade. Mandated by President Trump's December 11, 2025 executive order "Ensuring a National Policy Framework for Artificial Intelligence," the report will name specific state AI laws that the federal government considers burdensome, in conflict with federal policy, and therefore eligible for legal challenge by the newly activated DOJ AI Litigation Task Force.
This is not a procedural formality. It is the federal government's official litigation roadmap — a list that will determine which state AI statutes face immediate federal lawsuits, which compliance obligations may be suspended, and which states will find themselves in a constitutional standoff with Washington over who gets to govern artificial intelligence. For any company operating under California's, Colorado's, Texas's, or Illinois's AI laws, the next ten days represent a critical window for governance preparation.
How the 90-Day Clock Was Set
Trump's December 2025 executive order set three simultaneous deadlines at the 90-day mark — all converging on March 11, 2026. The Commerce Department evaluation is the centerpiece, but it arrives alongside a BEAD (Broadband Equity, Access, and Deployment) policy notice that could make states with aggressive AI laws ineligible for federal broadband funding, and an FTC policy statement clarifying how federal deception law applies to AI outputs — and where it preempts state rules.
Each of these three instruments is designed to pressure states through a different lever. The Commerce evaluation activates DOJ litigation. The BEAD notice imposes financial penalties. The FTC statement signals that some state AI transparency mandates may conflict with federal interpretations of what constitutes a "deceptive" output. Together, they constitute the most coordinated federal assault on state-level tech regulation since the early days of internet commerce law in the 1990s.
The Laws Most Likely to Be Named
The executive order identifies two categories of state laws as targets. Understanding both is essential for any compliance officer working through the current landscape.
Category 1: "Altered Truthful Outputs"
The administration takes the position that some state laws effectively require AI models to modify their outputs in ways that introduce inaccuracies — or, in the EO's framing, to produce "false" results. Colorado's SB 24-205, the Colorado AI Act, is explicitly named in the executive order as an example of exactly this kind of problem. The law requires developers and deployers of high-risk AI systems to take "reasonable care" to protect consumers from algorithmic discrimination, and it imposes documentation, impact assessment, and disclosure requirements. The administration argues these requirements would force AI systems to produce outputs shaped by anti-discrimination mandates in ways that distort factual content.
Colorado already delayed the law's effective date from February 1 to June 30, 2026 — a signal that the state was attempting to buy political runway. The delay may not save the law. Legal analysts widely expect Colorado's SB 24-205 to be among the first items on the March 11 list.
Category 2: Compelled Disclosures
A broader category targets state transparency and disclosure requirements, which the administration suggests may raise First Amendment concerns. The potential targets here are numerous:
- California's AB 2013 (Generative AI Training Data Transparency Act) — effective January 1, 2026 — requires developers of public-use generative AI to publish high-level training data information. It is among the most sweeping transparency mandates currently in force in the United States.
- California's TFAIA (Transparency in Frontier Artificial Intelligence Act) — applies to developers of frontier models trained using more than 10²⁶ integer or floating-point operations, requiring detailed "Frontier AI Frameworks" and critical safety incident reporting (a rough compute-estimation sketch appears below this list).
- California's SB 942 (California AI Transparency Act) — now effective August 2, 2026 after a delay — mandates free AI content detection tools and watermarking of AI-generated content.
- Illinois HB 3773 — amends the Illinois Human Rights Act to prohibit AI-driven employment discrimination against protected classes.
- Texas's RAIGA (Responsible AI Governance Act) — prohibits developers from creating AI systems for "restricted purposes," including encouragement of violence, impersonation of minors in explicit contexts, and unlawful discrimination.
The Texas law is interesting precisely because it covers areas where federal and state interests might seem aligned. But the administration appears to view any state-level AI governance framework as a threat to the uniform national policy it is trying to establish — regardless of the law's specific content.
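For teams trying to gauge whether TFAIA's 10²⁶-operation threshold is even in play for them, a back-of-the-envelope estimate is often the first step. The sketch below uses the common heuristic that dense-transformer training compute is roughly 6 × parameters × training tokens; that heuristic, the function names, and the example figures are illustrative assumptions, not definitions taken from the statute, which counts total integer or floating-point operations however they are incurred.

```python
# Rough check of a training run against the 10^26-operation threshold cited for
# California's TFAIA. The 6 * parameters * tokens rule of thumb is a common
# back-of-the-envelope estimate for dense transformer training, not statutory
# language; treat this as illustrative only.

FRONTIER_THRESHOLD_OPS = 1e26  # threshold cited for TFAIA coverage

def estimated_training_ops(num_parameters: float, training_tokens: float) -> float:
    """Rough operation count for one dense-transformer training run."""
    return 6.0 * num_parameters * training_tokens

def likely_frontier_model(num_parameters: float, training_tokens: float) -> bool:
    """True if the rough estimate crosses the cited 10^26 threshold."""
    return estimated_training_ops(num_parameters, training_tokens) >= FRONTIER_THRESHOLD_OPS

# Example: a hypothetical 400B-parameter model trained on 40T tokens.
ops = estimated_training_ops(4e11, 4e13)  # ~9.6e25 operations, just under the threshold
print(f"{ops:.2e} ops -> frontier threshold crossed: {likely_frontier_model(4e11, 4e13)}")
```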
The Constitutional Strategy
The DOJ AI Litigation Task Force — which formally activated on January 10, 2026 — has two primary legal theories available to it when challenging state AI laws.
The first is the Dormant Commerce Clause. This is a constitutional doctrine derived from the Commerce Clause that prohibits states from enacting laws that unduly burden or discriminate against interstate commerce, even in the absence of federal legislation. The argument, applied here, is that AI systems are inherently interstate commercial products — a California disclosure mandate effectively regulates how an AI model built in one state can operate in every other state. Courts have historically applied this doctrine to invalidate state internet regulations, and the DOJ is betting it will work again in the AI context.
The second theory is explicit federal preemption — the argument that existing federal statutes already occupy the field, and state AI laws are therefore unconstitutional under the Supremacy Clause. This strategy is harder, because there is no comprehensive federal AI law. The administration appears to be arguing that a combination of existing statutes (FTC Act, various federal anti-discrimination laws) plus the executive order policy framework collectively preempt state action. Legal scholars are skeptical that executive orders can create preemptive federal law in the absence of Congressional action, and this theory is likely to face immediate judicial pushback.
What the March 11 List Actually Does — and Doesn't Do
Here is the governance point that is most widely misunderstood: appearing on the Commerce Department's March 11 list does not immediately invalidate a state AI law. It does not suspend compliance obligations. It does not create a safe harbor for companies that stop following state rules.
What the list does is trigger the DOJ Litigation Task Force to file federal lawsuits challenging the named laws. Those lawsuits will take months — probably years — to resolve. Courts may grant preliminary injunctions that temporarily pause enforcement of specific laws while litigation proceeds, but that is a high bar and not guaranteed.
This means companies face a genuinely difficult governance situation. State laws named on the March 11 list remain legally enforceable unless a court issues an injunction. State attorneys general retain enforcement authority. California's AB 2013 is in effect now. Colorado's SB 24-205 takes effect June 30 whether or not it has been named as a federal target. Companies that assume the federal challenge makes state compliance optional will face enforcement risk from state regulators who have no obligation to stand down while federal litigation proceeds.
The Financial Pressure Mechanism: BEAD Broadband Funding
Alongside the litigation track, the executive order creates a parallel financial pressure mechanism that has received far less attention. The BEAD program — a $42.5 billion federal broadband buildout initiative administered by the Commerce Department — will be subject to new conditions that could make states with "burdensome" AI laws ineligible for funding.
The March 11 BEAD policy notice will specify exactly what this means in practice. States that have enacted laws on the Commerce evaluation's target list may find their broadband grant applications deprioritized or denied. For states like California and Colorado, this represents billions of dollars in federal infrastructure funding — a powerful incentive to negotiate or modify their AI statutes rather than fight in court for years.
This funding conditionality mechanism is likely to face its own legal challenges, particularly given recent Supreme Court precedent on unconstitutional conditions in federal spending programs. But in the short term, it creates enormous political pressure on state legislatures that may not have anticipated their AI governance decisions affecting broadband access for rural communities.
The Three-Deadline Confluence: FTC's Role
The third instrument converging on March 11 is the FTC policy statement on AI output requirements. This document will clarify how the FTC Act's prohibition on "deceptive acts or practices" applies to AI systems — and whether it preempts state AI transparency and disclosure mandates.
The FTC's involvement is significant for several reasons. First, unlike DOJ litigation, an FTC policy statement can provide immediate regulatory clarity without waiting for court rulings. Second, if the FTC claims its authority preempts state rules, companies have a stronger argument for why they cannot simultaneously comply with conflicting federal and state mandates. Third, the FTC has historically had a nuanced relationship with state consumer protection law — any federal policy statement that aggressively preempts state authority will be controversial within the agency itself and may face internal resistance.
The Governance Gap: Companies Must Navigate Both Worlds
The essential challenge for any company operating AI systems in the United States is this: the federal government is declaring war on state AI laws, but that war will take years to win — or lose. In the meantime, state laws are in effect. State regulators are watching. And the legal landscape is moving faster than most compliance teams can track.
The practical governance response breaks down into four tracks:
Track 1: Inventory your state obligations now. Before March 11, every company operating AI systems that reach consumers in California, Colorado, Illinois, or Texas should have a complete map of which state laws apply to their specific operations. California's TFAIA applies only to large frontier model developers. Texas's RAIGA applies to any company doing business with Texas residents. Illinois's employment AI rules apply specifically to hiring-related AI tools. These are not interchangeable, and the compliance requirements differ substantially.
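One way to start that inventory is a simple structured record per statute, so applicability, effective dates, and federal challenge status can be reviewed side by side. The sketch below is a minimal illustration, not legal guidance: the field choices and the applicability phrasing are assumptions about what a compliance team might track, and the dates simply mirror those discussed above.

```python
# Illustrative per-statute obligations inventory. Field names, applicability
# notes, and challenge-status values are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class StateObligation:
    statute: str            # short name of the state law
    state: str
    effective_date: str     # ISO date the obligation takes effect
    applies_if: str         # plain-language applicability trigger
    key_duties: list[str]   # headline compliance obligations
    federal_challenge: str  # e.g. "none", "named on Commerce list", "enjoined"

inventory = [
    StateObligation(
        statute="AB 2013", state="CA", effective_date="2026-01-01",
        applies_if="Developer of public-use generative AI",
        key_duties=["Publish high-level training data documentation"],
        federal_challenge="none",
    ),
    StateObligation(
        statute="SB 24-205", state="CO", effective_date="2026-06-30",
        applies_if="Developer or deployer of a high-risk AI system",
        key_duties=["Reasonable care against algorithmic discrimination",
                    "Impact assessments", "Consumer disclosures"],
        federal_challenge="none",
    ),
]

# Quick view of what is enforceable and when, regardless of federal posture.
for item in inventory:
    print(f"{item.state} {item.statute}: effective {item.effective_date}, "
          f"federal challenge: {item.federal_challenge}")
```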
Track 2: Watch for injunctions, not just the list. If a court grants a preliminary injunction against a specific state law following a DOJ challenge, that injunction changes your compliance obligations immediately. Legal teams should be monitoring DOJ litigation filings in real time and have a protocol for rapid compliance adjustments when courts act.
Track 3: Document your governance decisions. In an environment where the same AI system may be simultaneously regulated by conflicting federal and state frameworks, contemporaneous documentation of why compliance decisions were made — and what legal authority they relied on — is invaluable. If enforcement actions follow, boards and executives who can demonstrate thoughtful governance will be substantially better positioned than those who cannot.
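A lightweight way to make that documentation contemporaneous is an append-only decision log that captures the decision, the legal authority relied on, and a timestamp at the moment the call is made. The sketch below is a minimal illustration under those assumptions; the field names, file format, and example entry are hypothetical, not a prescribed standard.

```python
# Minimal sketch of a contemporaneous governance decision log (JSON lines,
# append-only). Field names and the example entry are illustrative assumptions.
import json
from datetime import datetime, timezone

def record_decision(path: str, decision: str, legal_basis: list[str],
                    owner: str, notes: str = "") -> dict:
    """Append one timestamped governance decision to an append-only log file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "legal_basis": legal_basis,  # statutes, injunctions, or agency statements relied on
        "owner": owner,
        "notes": notes,
    }
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Example: documenting continued state-law compliance despite a federal challenge.
record_decision(
    "governance_log.jsonl",
    decision="Continue AB 2013 training-data disclosures",
    legal_basis=["CA AB 2013 in effect; no injunction entered"],
    owner="AI governance committee",
    notes="Revisit if DOJ obtains preliminary relief.",
)
```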
Track 4: Prepare for the FCC proceeding. At approximately the six-month mark from the December 2025 EO — around June 2026 — the FCC is directed to initiate a proceeding on a federal AI disclosure standard with explicit preemptive effect. If that standard is adopted, it will directly supersede state watermarking and content labeling requirements. Companies with strong views on AI disclosure architecture should begin preparing positions for the comment process now.
The Long Game: Congress Is Still the Only Permanent Solution
The Trump administration's regulatory strategy is aggressive and may succeed in neutralizing specific state laws through the courts. But executive orders can be reversed by future administrations. DOJ litigation produces court-specific rulings, not durable national policy. FTC and FCC statements can be withdrawn. Even successful preemption doctrine arguments produce a legal patchwork that resolves individual conflicts without establishing a coherent national framework.
The only mechanism capable of creating stable, durable AI governance in the United States is Congressional legislation. The absence of a federal AI Act — which has been debated but never enacted — is the root cause of the current regulatory chaos. Without it, companies will spend the next several years trying to navigate a shifting maze of state obligations, federal challenges, court injunctions, and agency policy statements that conflict with each other in unpredictable ways.
Washington insiders suggest that the March 11 list, and the litigation that follows, may paradoxically increase pressure for Congressional action. If federal courts begin enjoining major state AI laws — particularly California's, which covers the largest AI market in the country — the political pressure to establish a federal statutory framework may finally overcome the legislative gridlock that has prevented it for years.
What to Watch on March 11
When the Commerce evaluation publishes on March 11, the critical questions will be: Which specific laws are named, and which are not? How does the Commerce Department characterize the legal theories — Dormant Commerce Clause, preemption, or both? Does the document include safe harbors or compliance windows that give named states time to modify their laws before DOJ files suit? And what does the simultaneous FTC statement say about the federal government's theory of preemption?
The answers to those questions will determine whether March 11, 2026 is remembered as the day the federal government brought clarity to AI governance — or the day it declared a prolonged constitutional war on every state that tried to get ahead of Washington on AI regulation.
Either way, the AI policy landscape after March 11 will look very different from today's. Companies, compliance teams, and state regulators have ten days to prepare.