Seven days from now, the Commerce Department will publish its list of state AI laws the federal government intends to fight. But that deadline — long circled on policy calendars — is no longer the main event. In the past 48 hours, analysts and industry insiders have grown increasingly convinced that the White House is preparing to follow Commerce's March 11 report with a formal legislative proposal: America's first-ever national AI law. That framework, expected within weeks, would preempt hundreds of conflicting state-level AI statutes, establish a uniform federal standard, and reshape the entire landscape of AI governance in the United States. Here's what it will actually do — and what it won't.
The Problem That Made a National Law Inevitable
For the past three years, state legislatures have moved faster on AI regulation than Congress has. The result is a patchwork that now includes laws on the books in more than 40 states, ranging from algorithmic bias assessments in Colorado to frontier model safety filings in California to chatbot disclosure mandates across more than a dozen jurisdictions. For AI companies operating nationally, the compliance burden has become genuinely untenable: different definitions of "high-risk AI," different audit timelines, different liability triggers.
The tech industry has lobbied for a federal preemption standard since at least 2023, arguing that AI — unlike, say, restaurant inspections — does not stop at state borders. A model trained in California and deployed across all 50 states faces every state's rules simultaneously. The Trump administration agrees with that framing, and used it as the legal and political rationale for the December 2025 executive order that set this whole process in motion.
That executive order, "Ensuring a National Policy Framework for Artificial Intelligence," directed the Justice Department to sue states whose AI laws "unconstitutionally burden interstate commerce," instructed Commerce to compile a hit list of targeted statutes by March 11, and tasked the FTC with issuing a policy statement classifying state-mandated bias mitigation as a deceptive trade practice — also by March 11. But the executive order's most consequential directive was buried in its closing sections: it told David Sacks and Michael Kratsios to draft a legislative recommendation for a national AI framework.
That recommendation is now reportedly nearly complete.
What the National Framework Is Expected to Include
Based on reporting by Roll Call, analysis from the Paul Hastings law firm, and comments from industry group NetChoice and the R Street Institute, the framework is expected to contain five core pillars:
1. Explicit Federal Preemption. The proposal will ask Congress to formally preempt state AI laws that conflict with federal policy — particularly those that impose bias testing, algorithmic impact assessments, or transparency and disclosure mandates on model developers. This is the critical piece the executive order could not deliver on its own: the White House can sue states, but only Congress can actually preempt them. Amy Bos of NetChoice told Roll Call that the industry's position is unambiguous: "AI doesn't stop at state borders and AI policies shouldn't either." The expected preemption language will explicitly preserve state authority over child safety, AI compute infrastructure, and state government procurement of AI systems.
2. Standards-Based, Not Prescriptive Regulation. Rather than mandating specific technical requirements — minimum compute thresholds, mandatory red-teaming protocols, or prescribed output filtering — the framework is expected to lean on performance-based and risk-tiered standards. That means companies would need to demonstrate their AI systems don't cause specified harms, without being told exactly how. This aligns with the administration's broader deregulatory philosophy and with existing federal guidance frameworks like the NIST AI Risk Management Framework.
3. FTC Jurisdiction Over AI Deception and Bias Claims. Separately from the legislative track, the FTC is preparing — also by March 11 — a policy statement that would classify state-mandated bias mitigation as a "per se deceptive trade practice." The legal theory: if a model is trained on real-world data and states compel developers to alter its outputs to reduce demographic disparities, those developers are producing results less faithful to underlying data — and therefore deceptive under federal consumer protection law. Legal experts at Paul Hastings note that this theory is novel and courts may reject the premise entirely, but if upheld, it would effectively preempt an entire class of state AI equity laws through consumer protection doctrine rather than direct Congressional action.
4. Kids' Online Safety as the Political Bridge. The framework is expected to carve out and actively address child safety in AI — the one area with genuine bipartisan support. Sen. Marsha Blackburn's Kids' Online Safety Act (KOSA) has 75 co-sponsors across both parties, and Sen. Josh Hawley's related bill targeting AI companions for children has 13. Experts predict the White House will use kids' safety provisions as a vehicle to bring reluctant Democrats on board with an otherwise industry-friendly framework — effectively trading child protection guardrails for broad preemption of state AI regulation.
5. Federal Reporting and Disclosure Standard via the FCC. The executive order also directed the Federal Communications Commission to initiate a proceeding — within 90 days of the Commerce report — on whether to establish a federal disclosure standard for AI models that would supersede inconsistent state requirements. This is the most legally uncertain piece: the FCC has historically interpreted the Communications Act as covering physical transmission infrastructure, not software applications. Any FCC rulemaking here would almost certainly face legal challenge from states and civil liberties groups.
The Congressional Obstacle Course
The White House can propose a national AI law, but it cannot pass one. And getting a broad AI preemption framework through Congress in 2026 is a genuinely difficult political ask.
The administration already tried once. The "One Big Beautiful Bill Act" included a 10-year moratorium on new state AI regulations, which passed the House but was killed in the Senate amid bipartisan concern about eliminating traditional state consumer protection authority. Senate Commerce Chair Ted Cruz, who championed preemption, also fell short in his bid to attach it to the FY2026 defense policy bill. The lesson from those failures: Republicans want preemption, but they lack the consensus to deliver it.
The margins matter enormously here. Republican majorities in both chambers are narrow; even a handful of members who prioritize state sovereignty over federal uniformity could tank the effort. Adam Thierer of the R Street Institute put it plainly to Roll Call: "The No. 1 pushback against the moratorium was, you can't preempt something with nothing." The national framework proposal is designed to answer that objection, but whether it's specific enough to satisfy skeptics while broad enough to satisfy the industry is an open question.
There's also a timing problem: 2026 is a midterm election year, and legislative bandwidth is constrained. Complex, multi-stakeholder AI legislation competing with tax, immigration, and defense priorities faces long odds of passage before Congress recesses.
The Stakes for Colorado, California, and the Rest
The laws most directly at risk from the federal preemption push are those that impose the heaviest compliance burdens on model developers. Colorado's AI Act (SB 24-205) — explicitly named in the Trump executive order — requires developers to conduct impact assessments and implement bias mitigation for high-risk AI systems. It's slated to take effect June 30, 2026, and is already the subject of state legislative debate over whether to amend it before it goes live.
California's SB 53, the Transparency in Frontier Artificial Intelligence Act, requires large AI developers to publish safety frameworks and file risk summaries. California's AB 2013 requires training data disclosures. Both are on Commerce's expected target list.
Governors in California, Colorado, and New York have already signaled they will not quietly stand down. According to Alston's Consumer Finance Law Bulletin, all three issued statements after the December executive order indicating the order would not stop them from passing or enforcing their own AI statutes. That sets up a collision: federal litigation on one side, state enforcement on the other, with companies caught in the middle.
For businesses, the legal uncertainty itself is a cost. Every dollar spent on compliance with state laws that may be preempted within months is a dollar that might have been spent on R&D. But every dollar withheld from compliance planning on the assumption preemption will succeed is a dollar of regulatory risk if the federal effort stalls in Congress or loses in court.
What the Framework Won't Do
Just as important as what the national AI law will contain is what it will deliberately exclude. The White House has been explicit: the framework will not preempt state authority over AI compute and data center infrastructure, state government procurement of AI systems, or — crucially — child safety protections. Those carve-outs were baked into the original executive order and are expected to survive into the legislative text.
The framework is also not expected to address autonomous weapons or national security AI. The Defense Department operates under a separate set of directives — including the DoD's Responsible AI implementation pathways — that answer to different principals and rest on different legal authority. Military AI governance is likely to remain a parallel, not integrated, track.
And the framework is not a safety bill in the Anthropic or frontier-AI sense. It won't establish capability thresholds for training runs, require mandatory government evaluations of frontier models, or create a federal AI safety institute with enforcement power. The Biden-era AI Safety Institute at NIST has already been substantially defunded and restructured under the Trump administration. What emerges from this process will be a market-access and consumer-protection framework — not a technology safety framework.
Why the Next Seven Days Are the Real Inflection Point
March 11 is when the Commerce Department publishes its list of state AI laws targeted for challenge. That list — which legal analysts expect to include Colorado's AI Act, California's SB 53, and AI bias mitigation laws in Illinois and New York — will trigger the DOJ's AI Litigation Task Force to begin formal legal proceedings. It will also fire the starting gun on the FTC's bias-as-deception policy statement. And it will almost certainly accelerate the White House's timeline for releasing its legislative proposal, because the political pressure to show a positive vision — not just litigation — will intensify dramatically.
"You can't preempt something with nothing," Thierer said. The national AI law framework is the administration's answer to that challenge. Whether it's enough — and whether Congress can act on it before the midterms reshape the political calculus — will determine whether America ends up with a coherent federal AI governance regime or years of legal combat between the federal government and the states it's trying to override.
Either outcome will reshape the environment in which every AI company in the country operates. The uncertainty itself is already doing that.
This article is part of TTN's ongoing series on the federal-state AI regulation conflict. Previous coverage: March 11 Is the AI Governance Reckoning | DOJ's AI Litigation Task Force Is Now Active | Trump vs. States: The AI Regulation Showdown.