Trump vs. States: The AI Regulation Showdown Threatening $42.5B in Broadband Funding

The Trump administration's threat to sue states over AI laws has sparked a constitutional battle. With 78 chatbot bills alive in 27 states and billions in federal funding at stake, the fight over who regulates AI is heating up.

A dramatic confrontation is unfolding between the Trump administration and state governments over who has the authority to regulate artificial intelligence. The battle's implications reach far beyond technology policy and could reshape the balance of federal and state power for the digital age.

In a December 11, 2025 executive order, President Donald Trump directed the Department of Justice to sue states whose AI laws the administration deems "burdensome" to the industry. The order specifically threatens to withhold $42.5 billion in federal broadband funding from states that refuse to back down—a move legal experts are calling constitutionally questionable and politically explosive.

The Federal Hammer: Lawsuits and Funding Threats

The executive order tasks the Commerce Department and David O. Sacks, White House special adviser for AI and crypto, with publishing within 90 days "an evaluation of existing state AI laws that identifies onerous laws." States identified as having such laws could face federal lawsuits and the withholding of grants under the Broadband Equity, Access, and Deployment (BEAD) program, which is meant to expand high-speed internet access in underserved communities.

"The federal government is limited in the way that it can unilaterally change the conditions on federal grants to states," said Cody Venzke, senior policy counsel for the American Civil Liberties Union. "Programs like BEAD were established by Congress."

Venzke called the administration's plan "a hodgepodge of faulty legal theories" and expressed skepticism that it would be effective in stopping states from regulating AI to protect their citizens.

The executive order also relies on the Federal Communications Commission and Federal Trade Commission, raising questions about whether either agency has the authority to regulate AI or preempt state laws. Legal scholars note that federal preemption typically requires either explicit congressional authorization or a clear conflict between state and federal law, neither of which clearly exists in the AI regulation space.

State Resistance: "We'll See You in Court"

Far from backing down, states with AI regulation on the books are digging in for a fight. California, Colorado, and Texas—states with diverse political leadership but shared concerns about AI safety—have made clear they intend to defend their laws.

Colorado's Stand: Colorado's AI Act, scheduled to go into effect this summer, requires developers and deployers of "high-risk" AI systems to exercise "reasonable care" to protect users from risks of discrimination. The law uses a "disparate impact" standard, which Trump's executive order explicitly criticized as "requiring entities to embed ideological bias within models."

Loren Furman, CEO of the Colorado Chamber of Commerce, noted that in a state with Democratic control, "the legislature is gonna make the decisions and move forward." She predicted that if the federal government sued Colorado, state Attorney General Phil Weiser—who is also running for governor—would vigorously defend the law. "Attorney General Weiser has been filing lawsuits almost daily, so I certainly expect that would be the case," Furman said.

California's Defiance: California has passed the Transparency in Frontier Artificial Intelligence Act (SB 53), which requires large frontier AI developers to publish frameworks explaining how they incorporate safety standards and file summaries of catastrophic risk assessments.

Teri Olle, vice president of Economic Security California Action, called the executive order a "harassment scheme" and predicted California would fight any lawsuits. "I have no indication that California would allow its rights to be trampled," Olle said.

Olle expressed surprise at the relative ease with which California's law passed, noting that while tech CEOs "did not like the fact that they were being curtailed in any way," public opinion strongly supports AI regulation. A Gallup poll found that 80% of Americans support "maintaining rules for AI safety and data security, even if it means developing AI capabilities at a slower rate."

Texas's Measured Approach: Even in Republican-led Texas, there was disappointment with the executive order. The Texas Responsible Artificial Intelligence Governance Act (HB 149) outlaws developing or deploying AI "with the intent to unlawfully discriminate" and requires government agencies deploying AI to disclose when interactions are AI-generated.

"There's a sense that states are being punished for stepping up and leading on, in the case of Texas, a really good piece of legislation that's thoughtful and intentional," said David Dunmoyer of the conservative Texas Public Policy Foundation.

However, Dunmoyer acknowledged the political calculus changes with BEAD funding on the line. Texas was approved for $1.27 billion in broadband deployment funding. "If it came down to, you pick, keep the AI law or connect the disconnected in vulnerable and rural communities, that's a tremendously hard political decision to make," Dunmoyer said.

The Utah Flash Point: White House Intervention

The administration has already begun wielding informal influence. Last week, the White House Office of Intergovernmental Affairs sent a memo in opposition to Utah HB 286, which would require developers of large frontier AI models to publish public safety and child protection plans.

"We are categorically opposed to Utah HB 286 and view it as an unfixable bill that goes against the Administration's AI Agenda," the memo stated, though it offered no legal justifications for the opposition.

The Utah bill, which passed out of a state House committee, would also require risk assessments for frontier models, mandate reporting of certain safety incidents to the state government, and establish whistleblower protections for employees of large frontier AI developers.

The bill's sponsor, Republican state Rep. Doug Fiefia, expressed his opposition to the executive order in a TikTok post: "This executive order goes too far. I support the idea of a national AI framework, but it should come through Congress, where there's transparency, debate, collaboration. That's how you build trust and lasting policy. Don't forget about states' rights and the 10th Amendment."

The Legislative Surge: 78 Bills in 27 States

The federal pushback comes as state legislatures are experiencing an unprecedented surge in AI-related legislation. According to the Transparency Coalition, 78 chatbot bills are currently alive in 27 states, reflecting growing nationwide concern over the dangers of powerful new AI technologies.

These bills cover a wide range of concerns:

  • Child Safety: Requirements for age verification, parental consent, and safety protocols that prevent AI chatbots from encouraging self-harm or suicidal ideation and from exposing minors to inappropriate content.
  • Mental Health: Prohibitions on AI systems posing as licensed therapists or providing mental health services without professional oversight.
  • Transparency: Requirements for disclosure when consumers are interacting with AI rather than humans, and when AI is used in consequential decisions.
  • Healthcare: Restrictions on using AI as the sole basis for insurance coverage decisions or medical utilization reviews.
  • Discrimination: Protections against AI systems that perpetuate or amplify discriminatory outcomes in housing, employment, and other consequential domains.

States including Washington, Oregon, Florida, Virginia, and Hawaii have made significant progress on comprehensive AI safety legislation this session.

Washington State's Momentum

In Washington, Governor Bob Ferguson has backed chatbot safety bills that have each been approved by their chamber of origin and are now headed for reconciliation between the two chambers. The bills focus on safety features for minors using AI chatbots and on disclosure requirements.

Oregon's Strong Vote

Oregon's Senate approved SB 1546 with a commanding 26-1 vote. The bill requires AI companies to better notify users when they're interacting with chatbots, connect users to human support during mental health crises, and add safety measures for minors. It now moves to the House.

Florida's AI Bill of Rights

Florida Governor Ron DeSantis has endorsed a sweeping "AI Bill of Rights" (SB 482), filed by Sen. Tom Leek, which prohibits governmental entities from contracting with specified entities, creates rights for Floridians relating to AI use, requires parental consent for minors to use companion chatbots, and prohibits AI technology companies from selling users' personal information unless it's deidentified.

The Constitutional Questions

The Trump administration's approach raises fundamental questions about the scope of federal power and the relationship between state and federal regulation.

Interstate Commerce vs. State Police Power: The executive order threatens lawsuits on interstate commerce grounds, arguing that state AI laws burden companies operating across state lines. However, states have traditionally exercised broad "police power" to protect the health, safety, and welfare of their residents—powers that courts have historically been reluctant to preempt absent clear congressional direction.

Conditional Spending: The threat to withhold BEAD funding raises questions about unconstitutional coercion. The Supreme Court has held, most notably in South Dakota v. Dole and NFIB v. Sebelius, that while Congress can attach conditions to federal grants, those conditions must be clearly stated, related to the federal interest in the program, and not so coercive as to turn voluntary cooperation into compulsion.

BEAD was established by Congress to expand broadband access, not to regulate AI. Using it as leverage to force states to abandon AI laws may run afoul of the "germaneness" requirement. As Venzke noted, Congress established BEAD—not the executive branch—which limits the administration's ability to unilaterally change grant conditions.

Administrative Authority: The executive order assigns regulatory roles to the FCC and FTC in AI oversight. However, neither agency has clear statutory authority to comprehensively regulate AI or preempt state laws in this domain. Under the Supreme Court's recent major questions doctrine, agencies cannot claim sweeping regulatory power over issues of vast economic and political significance without clear congressional authorization.

The Path Forward: Carve-Outs and Compromises?

The executive order does include potential off-ramps for states. It lays out a pathway for states to access federal funding by "entering into a binding agreement with the relevant agency not to enforce any such laws during the performance period" of a grant. It also preserves state laws governing child safety, data center infrastructure, and state government procurement and use of AI.

Some state laws may fit within these carve-outs. Texas's prohibition on government use of AI for "social scoring" and its provisions against child sexual abuse material may be protected. Virginia and other states with laws focused on state government AI use might also escape federal ire.

However, comprehensive state laws addressing AI discrimination, transparency, and consumer protection appear to be squarely in the administration's crosshairs.

The Industry Divide

The tech industry itself is divided on state regulation. Large frontier AI developers have generally opposed state-level regulation, preferring either no regulation or a uniform federal framework that would prevent a patchwork of state laws.

However, smaller AI companies and startups have sometimes supported reasonable state regulations, viewing clear rules as preferable to legal uncertainty. Consumer advocates, civil liberties organizations, and AI safety researchers have generally backed state efforts to establish guardrails while Congress has failed to act.

Olle noted that tech CEOs have "put hundreds of millions of dollars into PACs to try to defeat candidates" who support AI regulation—a reminder that political power, not just legal arguments, will shape the outcome of this battle.

What's at Stake

This confrontation over AI regulation is about more than artificial intelligence. It's a test case for how the United States will govern emerging technologies in an era of rapid innovation and political polarization.

For States: The ability to protect residents from AI harms without waiting for federal action hangs in the balance. States have historically served as "laboratories of democracy," experimenting with regulations that later inform federal policy. A federal preemption campaign could shut down that experimentation precisely when it's needed most.

For Industry: A patchwork of state laws creates compliance challenges, but a race to the bottom—where states are intimidated into inaction—could leave serious AI risks unaddressed until a catastrophic failure forces reactive, poorly designed federal intervention.

For Users: The outcome will determine whether basic protections—age verification for chatbots interacting with children, disclosure when AI makes consequential decisions, safety protocols to prevent chatbots from encouraging self-harm—become standard or remain patchwork and voluntary.

For Federalism: The precedent set here will reverberate beyond AI. If the executive branch can use funding threats to compel states to abandon laws in areas where Congress has not acted, the balance of power between federal and state governments shifts dramatically.

The Coming Battles

Legal challenges appear inevitable. If the administration follows through on its threat to sue states or withhold BEAD funding, litigation will test the executive order's legal theories. States like California and Colorado seem eager for that fight, viewing it as both a defense of their laws and a broader assertion of state sovereignty.

Meanwhile, state legislatures are showing no signs of slowing down. The 78 active chatbot bills represent just a fraction of AI-related legislation under consideration. As AI capabilities expand and AI harms become better documented, political pressure for regulation will only intensify.

Congress remains the wild card. A comprehensive federal AI framework could potentially resolve the conflict by establishing clear national standards while preserving state authority in defined areas. However, Congress has struggled for years to advance AI legislation, hampered by partisan divisions and industry lobbying.

The Trump administration's aggressive stance may actually accelerate congressional action by forcing lawmakers to clarify federal policy or risk seeing state innovations crushed without a federal alternative in place.

Conclusion: A Defining Moment

The showdown between the Trump administration and state governments over AI regulation is shaping up to be one of the defining technology policy battles of 2026. It pits legitimate concerns about regulatory fragmentation against equally legitimate fears of leaving powerful technologies unregulated while waiting for a federal consensus that may never come.

For states that have invested significant political capital in crafting AI laws—often after extensive stakeholder input and difficult political compromises—the threat of federal preemption without a substitute federal framework is unacceptable. For an administration that views state AI laws as impediments to American AI leadership, forcing states to stand down is a priority.

Both sides are preparing for a protracted fight. States are mobilizing legal teams and building coalitions. The administration is finalizing its list of "onerous" laws and preparing legal theories. Industry is lobbying furiously on both sides.

And caught in the middle are the millions of Americans—especially children and vulnerable users—whose interactions with increasingly powerful AI systems will be shaped by the outcome of this battle.

The next few months will reveal whether compromise is possible or whether this conflict heads to the courts—and potentially the Supreme Court—for a resolution that could reshape American federalism for the AI age.

One thing is certain: the days when AI could develop in a regulatory vacuum are over. The only question is who will write the rules.