Executive Order to challenge or deter state laws that would impact artificial intelligence (AI)

On December 11, 2025, President Trump signed an executive order that attempts to use federal authority to force states to drop regulations the administration characterizes as “barriers” to the technological development of AI and related systems.

The EO: 

  • Directs the Attorney General to establish an AI Litigation Task Force, tasked with challenging (suing) states over their AI laws  
  • Directs the Secretary of Commerce to publish an “evaluation” of existing state AI laws that the administration believes may conflict with its goal of freeing AI from regulatory restrictions, or with First Amendment or other rights 
  • Blocks states with any of the targeted regulations on the books from receiving Broadband Equity Access and Deployment (BEAD) funding – a massive federal grant program established by the 2021 Bipartisan Infrastructure Law to improve high-speed internet access in remote or underserved communities across the U.S.  
  • Directs all federal agencies to identify any grant programs whose eligibility could be restricted based on whether a state has AI regulations on the books or under enforcement 
  • Directs the Federal Trade Commission (FTC) to determine whether any state regulations of AI violate its rules restricting deceptive practices 
  • Directs different agencies and executive branch offices to develop recommendations for a federal policy framework on AI 

The EO also specifically excludes certain types of laws from being targeted for legal challenges, including: laws related to child safety and AI, data center infrastructure to support AI computing, and state government procurement (purchasing or using) of AI. 

In general, state laws and regulations can go above and beyond a federal standard, but not below it: in other words, they can be more protective than the federal regulation, but the federal regulation sets the floor. There are no federal laws in place governing the development or use of AI systems. So, any regulation currently affecting AI systems is a state level measure (though there may be non-AI-specific federal protections which AI systems can make it easier for corporations, employers, and other actors to violate). This EO represents a major attempt at federal preemption – using the higher authority of the federal government to prevent states from adopting higher standards.  EPI has long documented the harms of state-level preemption of stronger local-level laws and regulations.  

The EO specifically names recent laws passed in California on data transparency and reporting for generative AI systems, and in Colorado to address algorithmic discrimination. The EO implies (without additional detail) that laws like Colorado’s, that attempt to address whether AI systems are replicating or making existing inequities even worse, may require AI systems to “produce false results.” The EO also raises whether any of the state laws impacting AI may “imping[e] on interstate commerce.”

Impact: 

The EO does not automatically preempt current state laws or prevent states from passing future legislation. It is not yet clear on what legal grounds the Department of Justice’s “AI Litigation Task Force” may challenge state-level AI regulation. Some observers believe the White House or federal agencies may themselves face legal challenges. Prior to the EO’s final signing, California Attorney General Rob Bonta said that the state will be prepared to challenge the legality or “potential illegality” of the EO. Florida Governor Ron DeSantis (a Republican) similarly commented that he would consider an EO attempting to override state laws to be unlawful.

But the threat of either a legal challenge or the withholding of significant infrastructure funding may have a strong chilling effect on states considering regulation that could be perceived as “burdensome” to AI (or, more accurately, perceived as burdensome by AI-developing companies). States considering reasonable and popular laws related to data privacy protections, transparency, or ensuring that technology systems are not used to help the real people behind them escape liability for discrimination, wage theft, or other labor and consumer protection violations may be wary of drawing the administration’s legal attacks.

This EO will also likely further bolster the lobbying and public persuasion campaigns of the tech and AI industries. These companies have pushed at multiple other points this year for federal preemption of state legislation that affects AI. OpenAI and other AI industry players have increasingly called for a national framework to regulate AI. In many cases, a national regulatory framework could be a positive policy development, but only if the federal floor sets a high bar. These industry groups are likely hoping for a Trump administration-led national framework that is relatively weak in terms of putting a real check on their activities, has little to no enforcement mandate, and preempts state action or more targeted policies that may be more protective.

It is also important to note that the EO provides no concrete definition of “artificial intelligence.” Such a definition is needed for any serious conversation on AI policy. “AI” is often used to describe not only artificial intelligence systems but also automated systems, algorithmic management tools, and similar technologies. While the capabilities of these tools and public access to them have accelerated in recent years, many have been in use in workplaces for decades. The extremely broad language in the EO means that even state laws or regulations that are not explicitly about AI systems could be swept up and targeted for legal challenge by the Trump administration.