Five days. That's how long state legislatures have before the U.S. Commerce Department publishes its long-awaited list of AI laws it intends to challenge — a move that could cost noncompliant states billions in federal broadband funding and invite Department of Justice litigation. The rational response might be to slow down. Instead, states are doing the opposite. This week alone, Oregon's chatbot safety bill cleared both chambers with near-unanimous margins, Florida's Senate approved an AI Bill of Rights 35-2, and Utah and Washington sent multiple AI measures across their finish lines. The legislative sprint isn't panic — it's strategy.
The Clock Started Ticking in December
When President Trump signed "Ensuring a National Policy Framework for Artificial Intelligence" on December 11, 2025, the executive order set in motion a precise 90-day countdown. By March 11, the Secretary of Commerce must publish a comprehensive review of existing state AI laws — specifically identifying those deemed "overly burdensome or in conflict with the federal policy," according to an analysis by Paul Hastings LLP. Laws flagged in that report will be referred to the DOJ's newly formed AI Litigation Task Force, which has been active since January 10.
The financial leverage is not subtle. The executive order conditions $42 billion in previously allocated Broadband Equity, Access and Deployment (BEAD) program funding on states repealing or declining to enforce AI laws deemed inconsistent with federal policy. For states that rely on BEAD grants for rural broadband deployment — a key economic development tool — the threat is real. Colorado, whose AI Act (SB 24-205) is the only law explicitly named in the executive order, is already under the microscope.
And yet the legislative calendar this week tells a different story.
Oregon: The First Major Chatbot Safety Bill of 2026
On March 5, the Oregon legislature gave final approval to SB 1546, a chatbot safety bill sponsored by Sen. Lisa Reynolds. The final votes were not close: 26-1 in the Senate, 52-0 in the House. The measure now sits on the desk of Gov. Tina Kotek, who has five business days to sign or veto it.
The bill is among the most detailed state-level AI measures passed anywhere in the country. Under SB 1546, operators of AI chatbots must disclose upfront that users are interacting with an AI system — and repeat that disclosure at least once per hour when interacting with a minor. Chatbot operators must implement protocols to prevent outputs that could encourage suicidal ideation or self-harm. Minors cannot receive sexually explicit content or be subjected to algorithmic nudging designed to sustain addictive engagement. Operators who fail to comply face a private right of action: harmed users can sue for damages and injunctive relief.
The bill follows California's SB 243, the Companion Chatbots Act, which Gov. Gavin Newsom signed into law last October; with Kotek's signature, Oregon would become the second state to enact major chatbot safety legislation. The Transparency Coalition, which has lobbied for the bill in Salem, called the passage "a significant win for kids, parents, and all consumers."
What makes Oregon's timing significant is not just the bill itself but its near-unanimous margins. In a year when AI legislation is supposed to be freezing in place, a 52-0 House vote signals that some categories of AI law are far less politically contested than the executive order's architects may have anticipated.
Florida: Senate Muscle, House Resistance
The dynamics in Tallahassee this week capture the precise tension now playing out in Republican-led states. On Wednesday, the Florida Senate approved Gov. Ron DeSantis's AI Bill of Rights (SB 482) on a 35-2 vote — a resounding, bipartisan endorsement in a chamber Republicans control by a supermajority.
The bill is ambitious. It declares that Floridians have a right to know when they're communicating with an AI system rather than a human, establishes rules against the unauthorized use of names, images, and likenesses, bans companion chatbots from engaging minors without parental consent, and requires disclosures on AI-generated political advertisements. It would also prohibit state agencies from contracting with AI firms tied to China or Russia.
The Senate sponsor, Sen. Tom Leek (R-Ormond Beach), described the stakes clearly: "These are not the bots that you may run into to answer a routine question on a website, but instead, they are created to sustain a relationship with a user that may seem real."
But the bill has gone nowhere in the House. Speaker Daniel Perez has been explicit about why: he believes AI regulation belongs at the federal level, full stop. "The White House position on AI and the House's position on AI have both been pretty clear publicly," Perez told reporters on Wednesday. "We do believe that the federal government should take care of AI."
The House version of SB 482 (HB 1395) has not been heard in any of its four committee stops. With the session ending March 13, it is effectively dead for this year — unless leadership reconsiders in the final days. The Computer & Communications Industry Association, a national tech industry group, has lobbied against the bill, arguing it "would create a standalone state framework that increases compliance burdens without delivering clear safety benefits."
Utah and Washington: The Weekly Wave
Oregon and Florida are the high-profile stories, but they're not the only ones. According to the Transparency Coalition's March 6 legislative update, multiple states have moved AI bills to final passage this week as session deadlines approach.
In Utah — where the legislature adjourns today, March 6 — lawmakers passed SB 73 (online age verification) and HB 276 (deepfakes). Final votes are still pending on HB 438 (AI disclosure and kids' safety) and HB 289 (AI-generated and digital child sexual abuse material, or CSAM).
Washington state, with its session running until March 12, has been even more active. SB 5105 on deepfakes saw final passage this week. HB 1170 (AI disclosure) and SB 5395 (AI use in health insurance) are "nearly buttoned up," per the Transparency Coalition, while chatbot safety bill SB 5984 still awaits a final vote. On March 3, Arizona's Senate approved a bill requiring provenance data in AI-generated video, image, or audio content, and Florida's Senate separately passed SB 484, which addresses data center infrastructure impacts.
The picture that emerges is not one of states retreating. It is a picture of states making their moves before the window closes.
The Carve-Out Strategy: Child Safety as a Federal Shield
The concentrated activity in chatbot safety, deepfakes, age verification, and AI disclosure is not coincidental. These are the categories explicitly protected from federal preemption in the Trump executive order itself.
The executive order's text is clear: the federal preemption push will not apply to "child safety" protections, state government procurement of AI systems, or AI compute and data center infrastructure. The Paul Hastings analysis confirms this reading, noting that these carve-outs were "baked into the original executive order and are expected to survive into the legislative text."
Oregon's SB 1546 is essentially designed from the ground up to live inside that carve-out. So, arguably, is Florida's SB 482 — its chatbot and minor-protection provisions closely mirror California's SB 243, which the federal government has not targeted. By framing AI regulation primarily as a child safety and consumer transparency issue rather than an AI safety or bias mitigation issue, states are threading a needle that lets them claim regulatory authority without walking directly into the DOJ's crosshairs.
The laws the Commerce Department is most likely to flag are the ones targeting frontier model development, algorithmic bias assessment, and required impact assessments for high-risk AI systems — the kind of compliance-heavy rules that companies like Google, OpenAI, and Meta have lobbied hard against at the state level. Colorado's SB 24-205 is the canonical example. Oregon's SB 1546 is not that kind of law.
The $42 Billion Stick and Its Limits
The BEAD funding leverage is real, but its reach may be more limited than the executive order implies. The Orrick analysis of the executive order notes that the Commerce Department's March 11 notice will make states with "onerous AI laws" ineligible for non-deployment BEAD funds — but the specific legal mechanism for enforcing that ineligibility through existing broadband legislation is contested. Several state attorneys general have already signaled they intend to challenge the BEAD conditions in court if Commerce attempts to enforce them.
The deeper problem for the Trump administration's strategy is that child safety laws are politically untouchable. A federal government that attempts to use BEAD funding leverage against Oregon's near-unanimously passed chatbot safety bill would be picking a politically costly fight with no obvious upside. The same logic applies to deepfake laws and age verification requirements — both of which have strong bipartisan support and are designed to protect categories of people (children, victims of non-consensual intimate imagery) that are hard to argue against in public.
This may be why the executive order's carve-outs are so broad. They reflect an implicit acknowledgment that states will not stop legislating on AI — and that the federal government's authority to stop them has real limits.
What March 11 Actually Changes
When the Commerce Department publishes its report next Wednesday, the most immediate consequence will be clarity: companies and states will finally have a definitive list of laws the federal government views as targets. Legal analysts expect Colorado's SB 24-205, California's SB 53, and AI bias mitigation laws in Illinois and New York to appear on that list, based on the executive order's language about laws requiring AI systems to alter "truthful outputs."
The DOJ's AI Litigation Task Force will then have a roadmap for which state laws to challenge in federal court. The primary legal theory is the Dormant Commerce Clause — the argument that a patchwork of state regulations places an unconstitutional burden on interstate commerce. That theory has had mixed success in court historically, and legal scholars disagree on how strong a case the federal government can make.
Meanwhile, the Manatt analysis notes that March 11 also triggers the FTC deadline to issue a policy statement classifying state-mandated AI bias mitigation requirements as potentially deceptive — a creative administrative maneuver that could preempt state laws through an FTC policy interpretation rather than legislation.
For states, the strategic calculation is now clearer than it has been at any point in the past six months. Laws focused on child safety, consumer transparency, and deepfake prohibition appear to have a genuine legal and political path to survival. Laws that mandate bias audits, training data disclosures for large frontier models, or algorithmic impact assessments for high-risk systems face a more uncertain future.
The states that are still passing laws this week — and doing so with overwhelming bipartisan margins — have made their read of that calculation. They believe the carve-outs are real, the child safety lane is defensible, and the political cost of standing down is higher than the risk of federal litigation. Whether they're right will become clear in the weeks after March 11, when the Commerce report lands and the DOJ begins its work.
Five days.