On March 20, 2026, the Trump Administration released a new National AI Legislative Framework. While the announcement focuses on national security and the global AI race, a very specific tug-of-war is playing out between federal authority and state-level regulation, and you need to be aware of it.

Here is the breakdown of what is happening, what is at risk, and what remains under your control.

The Bottom Line: Two Different Tracks

The federal government is attempting to split AI policy into two distinct lanes:

  • The Federal Lane (How AI is Built): The feds want to control the "private sector" side of the house. They are targeting state laws that try to regulate AI model development, training data disclosure, and developer liability.
  • The State/Local Lane (How AI is Used): This is your lane. The new framework—and the Executive Orders leading up to it—explicitly carves out state and local government procurement and internal AI usage.

The takeaway: You are still 100% responsible for how your employees use AI and how your agency secures its data.

Current Enforcement vs. Future Recommendations

It is important to separate a wishlist from enforceable law.

Legislative Framework
Status: Recommendation
This is not law. It’s a blueprint signaling where Congress may go, but it has no immediate legal effect.

DOJ Task Force
Status: Active now
The Department of Justice can already challenge state AI laws in federal court, particularly those considered overly restrictive.

BEAD Grant Funding
Status: Active risk
Federal broadband funding may be used as leverage. States with aggressive AI regulations could see funding withheld.

State AI Laws (e.g., TRAIGA)
Status: Enforceable
Existing laws in states like Texas, California, Colorado, and Utah remain fully in effect unless Congress acts.

Operational Reality: The Shadow AI Problem

While the lawyers fight over legislation, your biggest risk is likely already inside your building. We see that roughly 62% of AI adoption in the organizations we work with is Shadow AI—tools used by employees without IT's knowledge. If you are only focusing on a single tool like Microsoft Copilot, you are missing the dozens of other applications your staff is using behind the scenes. You cannot manage or secure what you cannot see.
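
If you want to see what is actually in use, the network logs you already collect are a decent first pass. The sketch below is a minimal, illustrative scan of a web-proxy or DNS export; the CSV column names (timestamp, client, domain) and the domain watchlist are assumptions you would swap for your own export format and an up-to-date service list.

```python
# Minimal sketch: surface potential Shadow AI usage from a DNS or web-proxy
# log export. The CSV layout ("timestamp,client,domain" headers) and the
# watchlist below are illustrative assumptions, not a definitive inventory.
import csv
from collections import Counter

# Hypothetical watchlist of AI-service domains; extend with your own.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "perplexity.ai": "Perplexity",
}

def shadow_ai_report(log_path):
    """Count requests to known AI services, keyed by (tool, client)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):  # expects timestamp,client,domain headers
            domain = row["domain"].lower().lstrip(".")
            for watched, tool in AI_DOMAINS.items():
                if domain == watched or domain.endswith("." + watched):
                    hits[(tool, row["client"])] += 1
    return hits

if __name__ == "__main__":
    for (tool, client), n in shadow_ai_report("proxy_export.csv").most_common():
        print(f"{tool:12} {client:15} {n} requests")
```

Even a rough report like this turns "we think people are using AI" into a named list of tools and users you can actually govern.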

What You Should Do Now

You don’t need to wait for federal clarity to start governing AI. Here is where to focus:

  1. Don’t over-regulate the models. Avoid procurement questions that target controversial areas like training data disclosure or underlying model drift. The feds are looking to challenge those specific types of rules.
  2. Focus on internal guardrails. Use your existing Acceptable Use Policies and Data Security Policies as a baseline. Make the rules simple so employees don't feel they have to bypass IT to get work done.
  3. Align with NIST. The NIST AI Risk Management Framework (RMF) is becoming the gold standard. Follow it to ensure your agency is moving toward responsible adoption.
  4. Prepare for the "Agentic Era." We are seeing the rise of AI agents that can automate tasks like open records requests. Your systems are about to be flooded with bot-to-government communication, and your policies need to reflect that reality (see the sketch after this list).
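
To make point 4 concrete, here is one minimal way to flag likely agent traffic hitting a public-facing records endpoint. The user-agent tokens and the rate threshold are illustrative placeholders, not a vetted detection scheme; the goal is to route automated requesters to a structured intake channel (a queue or an API), not to block traffic that may be legitimate.

```python
# Minimal sketch: flag likely automated (agent) traffic on a public records
# endpoint. The user-agent tokens and rate threshold below are illustrative
# assumptions, not a vetted bot-detection scheme.
import time
from collections import defaultdict, deque

AGENT_UA_TOKENS = ("bot", "agent", "python-requests", "headless")  # placeholder list
WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 10  # few humans file 10 records requests a minute

_recent = defaultdict(deque)  # client_ip -> timestamps of recent requests

def looks_automated(client_ip, user_agent, now=None):
    """Heuristic check: self-identified agent UA, or request rate over threshold."""
    now = time.time() if now is None else now
    if any(token in user_agent.lower() for token in AGENT_UA_TOKENS):
        return True
    window = _recent[client_ip]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()  # drop requests older than the window
    return len(window) > MAX_REQUESTS_PER_WINDOW

if __name__ == "__main__":
    print(looks_automated("10.0.0.1", "records-agent/1.0 (bot)"))  # True
```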

The Net-Net: The federal government is trying to make it easier for companies to build AI, but it isn't taking away your responsibility to use it safely.

Need help setting up a visible, defensible AI policy? Reach out to the Darwin team. We're here to help you navigate both the policy side and the governance side without the guesswork.
