The New Reality: AI Model Weights Are Now Export-Controlled
On January 13, 2025, the U.S. Department of Commerce's Bureau of Industry and Security (BIS) did something unprecedented: it placed AI model weights under the same export-control regime that governs military technology. Not semiconductors. Not data. The actual numerical parameters that make advanced AI models work.
The Framework for Artificial Intelligence Diffusion, buried in Federal Register Volume 90, Issue 10, restructures how the world's AI companies distribute their most valuable assets. Model weights for "advanced closed-weight dual-use AI models" now require a license for export from the United States, for reexport between foreign countries, and, in certain cases, even for transfer within a company's own international subsidiaries.
This is a seismic shift in how the AI industry operates. And as of February 24, 2026, the compliance deadlines have passed.
Breaking It Down: What Actually Got Controlled?
The rule applies specifically to:
- Advanced AI model weights: The numerical parameters that define how a generative AI model processes information
- Closed-weight models: Models not published as open-source (proprietary models like GPT-4, Claude, Gemini)
- Dual-use models: AI systems with civilian and military applications
- Advanced models: Defined by computational training requirements and capability thresholds
What wasn't controlled before 2025? Literally any AI company could publish model weights to a GitHub repository or allow direct downloads. Meta published LLaMA. Anyone could use it. Google released its smaller Gemma models. Mistral shipped open weights. No questions asked.
That world is over.
The Compliance Timeline: Where We Are Now
The rule became effective January 13, 2025, but implementation was staggered:
- May 15, 2025: General compliance deadline for most export controls
- January 25, 2026: Deadline for specific security-related provisions (supplements 14, 15, 18 to part 748)
- February 24, 2026: We are here. Both major deadlines have passed.
If your company needed a license to export advanced AI model weights and didn't secure one by January 25, you're now out of compliance with U.S. export law. For large tech companies, this means compliance officers are scrambling to document existing exports and reassess distribution strategies.
Impact on Major AI Companies: The Strategic Shift
OpenAI: Must now license any exports of GPT-4 weights or derivatives. This affects partnerships with international enterprises and cloud providers. OpenAI's strategy has likely shifted toward API-only distribution for overseas customers rather than weight exports.
Anthropic: Smaller than OpenAI but equally affected. Claude model weights for enterprise deployments require licensing. The company has focused on U.S. and allied-nation partnerships to minimize friction.
Google & Meta: These companies have existing international subsidiaries, so weight transfers within the company now require compliance review. Google's export of Gemini weights to cloud infrastructure in Europe needs documentation. Meta's internal use of its own models across international offices requires licensing consideration.
Smaller startups: Chinese, European, and Indian AI companies building local models are now cut off from direct exports of U.S. model weights. This creates incentives to build independent AI stacks, which is the entire point of the framework.
What This Really Means: A Tale of Two AI Markets
The framework is deliberately designed to create what Washington calls "secure ecosystems." Translation: allied nations get preferential access. Neutral or hostile nations get locked out.
The rule includes new license exceptions for countries that meet "national security and foreign policy" criteria, primarily U.S. allies (NATO, Japan, South Korea, Australia, etc.). These countries get faster approvals and higher export quotas.
For others? The licensing process is opaque, slow, and often results in denial.
This effectively formalizes what was previously informal: the U.S. views advanced AI as a strategic asset equivalent to nuclear technology or advanced fighter jets.
The Open-Source Question: The Framework's Loophole
Here's where it gets interesting. The rule includes an exemption for "published" open-source models. If Meta publishes LLaMA weights with an open license on GitHub, anyone in any country can download them. They're not "exported"—they're published.
But the framework defines "published" narrowly: the weights must be released under an open license, available to the public, and not gated behind a restrictive license agreement. This creates a perverse incentive structure:
- Keep models proprietary: Full export control, restricted access
- Publish under open license: Zero restrictions, anyone can use it
- Restrict with licensing: Export controlled, expensive for companies
We may see more Meta-style fully open-source models, or conversely, more API-only distribution models. The middle ground of semi-open licensing becomes legally risky.
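The three-way split above can be sketched as a simple decision function. To be clear, this is an illustration of the article's taxonomy, not a legal test from the rule itself; the field names and category labels are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class DistributionPlan:
    """Hypothetical description of how a model's weights are released."""
    open_license: bool        # released under an open license (e.g., Apache-2.0)
    publicly_available: bool  # anyone can download, no gatekeeping
    license_gated: bool       # access requires signing a bespoke agreement

def control_status(plan: DistributionPlan) -> str:
    """Map a distribution plan onto the three buckets discussed above."""
    if plan.open_license and plan.publicly_available and not plan.license_gated:
        return "published: exempt from the weight controls"
    if plan.license_gated:
        return "semi-open: export controlled, legally risky middle ground"
    return "proprietary: full export control, restricted access"

# A fully open release lands in the exempt bucket:
print(control_status(DistributionPlan(True, True, False)))
```

The asymmetry is visible in the code: only the first branch escapes controls entirely, which is why the semi-open middle ground becomes the least attractive option.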
Enforcement: Who's Actually Checking?
The BIS is still ramping up enforcement. Early 2026 has seen:
- Guidance documents: BIS released clarification memos on what constitutes "advanced" models
- Industry briefings: Multiple Fortune 500 tech companies have had compliance reviews
- No major public enforcement yet: This is Phase 1 of implementation
Companies are betting that early compliance efforts will be treated leniently. But this is also an election year, and AI policy remains politically charged. Enforcement could accelerate if there's pressure to demonstrate "toughness" on China or strategic competitors.
What Companies Should Do (And Why Most Won't Act Fast Enough)
Audit all model distributions: Which exports happened pre-January 2025? Are they grandfathered? Most companies are still figuring this out.
Classify models by advancement level: Not all AI models are "advanced." Smaller models may slip through. But classifying your own models creates liability if classification is later challenged.
Establish licensing workflows: For companies wanting to export advanced models, the BIS approval process is slow (weeks to months). Companies need legal infrastructure in place.
Consider API-only distribution: This is becoming the path of least resistance. Instead of exporting weights, export API access from U.S. servers. More profitable, more controllable, fewer regulatory headaches.
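The audit-and-classify steps above can be sketched as a minimal script. Everything here is illustrative: the record schema is invented, and while the rule keys "advanced" status to training-compute thresholds, the 1e26-operations cutoff below is an assumption used only to make the example concrete. A real compliance review would apply the actual regulatory definitions.

```python
from dataclasses import dataclass
from datetime import date

EFFECTIVE_DATE = date(2025, 1, 13)   # rule's effective date
ADVANCED_COMPUTE_OPS = 1e26          # illustrative "advanced" threshold (assumption)

@dataclass
class WeightTransfer:
    """One record of model weights leaving U.S. control (hypothetical schema)."""
    model_name: str
    destination_country: str
    transfer_date: date
    training_ops: float   # estimated total training compute, in operations
    open_weights: bool    # released under an open license?

def needs_review(t: WeightTransfer) -> bool:
    """Flag transfers that plausibly require a license review."""
    if t.transfer_date < EFFECTIVE_DATE:
        return False  # pre-rule transfer: candidate for grandfathering, document anyway
    if t.open_weights:
        return False  # "published" exemption discussed above
    return t.training_ops >= ADVANCED_COMPUTE_OPS

transfers = [
    WeightTransfer("frontier-v2", "DE", date(2025, 6, 1), 3e26, False),
    WeightTransfer("small-helper", "JP", date(2025, 7, 9), 5e23, False),
    WeightTransfer("legacy-export", "GB", date(2024, 11, 2), 2e26, False),
]
flagged = [t.model_name for t in transfers if needs_review(t)]
print(flagged)  # → ['frontier-v2']
```

Only the post-effective-date, closed-weight, above-threshold transfer gets flagged; the pre-2025 export and the small model fall outside the sketch's net, which mirrors the triage logic described above.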
The Geopolitical Endgame
This framework is fundamentally about AI dominance. The U.S. is trying to prevent advanced AI models from flowing to China, Russia, and other non-allied states while maintaining advantageous access for allies.
It's succeeding. Chinese AI companies are investing heavily in domestic model development. European companies are building their own stacks. India is launching its own sovereign AI initiatives. The framework is creating exactly the effect it intended: fragmented, regional AI ecosystems instead of global open standards.
Whether this is good policy is a different question. But it's effective strategy.
The Bottom Line
America's AI chokehold is now legally formalized. If you're an AI company with advanced models, export strategy is no longer optional. If you're a company depending on access to cutting-edge U.S. AI models, you're either paying API premiums or building your own.
The deadlines have passed. The compliance era isn't coming; it's here.