The White House proposes a new AI policy framework that would supersede state laws

The White House announced a new AI policy framework that calls on Congress to develop federal regulations that would override state AI laws. The Trump administration has repeatedly attempted to roll back more restrictive AI regulation at the state level, but has so far failed, most notably during the passage of the “One Big Beautiful Bill.”
The framework covers a variety of topics, from children’s privacy to the use of AI in the job market. “It is important to note that this framework can only be successful if it is applied uniformly across the United States,” the White House wrote. “A patchwork of conflicting state laws would harm American innovation and our ability to lead the global AI race.”
On children’s privacy, the framework calls on Congress to require companies to provide tools such as “screen time, content exposure, and account controls,” while also affirming that “existing children’s privacy protections apply to AI systems,” including limits on how data is collected and used for AI training. The framework also says that, even under federal preemption, states should be allowed to enforce “their own generally applicable laws protecting children, such as prohibiting child pornography content, even when such content is generated by AI.”
Energy consumption and the environmental impact of AI infrastructure remain an ongoing concern, but the White House’s proposals are primarily concerned with the cost of data centers. The framework suggests that federal AI regulations should ensure that higher electricity costs are not passed on to people living near data centers, while streamlining permitting for the construction of AI infrastructure so that companies can pursue “on-site and behind-the-meter electricity generation.” The framework also calls for fewer restrictions on the software side of AI development, proposing “regulatory sandboxes for AI applications” and asking Congress to “provide resources to make federal data sets available to industry and academia in AI-ready formats.”
While a recent AI bill from Sen. Marsha Blackburn (R-Tenn.) attempts to eliminate Section 230, the piece of a broader law that shields platforms from liability for speech they host, the framework appears to propose the opposite. “Congress should prevent the U.S. government from compelling technology providers, including AI providers, to ban, coerce, or edit content based on partisan or ideological agendas,” the White House wrote. The framework is also largely neutral on copyright and the use of intellectual property to train AI. “Although the Administration believes that training AI models on copyrighted material does not violate copyright laws,” the White House writes, it would rather the issue be resolved in the courts than through legislation. However, the White House believes Congress should “consider authorizing licensing frameworks” so that intellectual property rights holders can negotiate compensation with AI providers.
The centerpiece of the White House proposal is the idea that federal regulation should take precedence over state law, specifically so that states do not “regulate the development of AI,” do not “unduly burden Americans’ use of AI for activities that would be lawful if carried out without AI,” and do not punish AI companies “for illegal third-party conduct involving their models.” The idea that AI companies are not responsible for illegal or harmful uses of their products is particularly fraught because it sits at the heart of multiple intersecting issues with AI today, including its use to generate sexually explicit images of children and its alleged role in user suicides.
Ultimately, however, the framework might be too contradictory to be useful, writes Samir Jain, vice president of policy at the Center for Democracy and Technology, in a statement to Engadget:
The White House’s high-level AI framework contains strong statements of principles, but its usefulness to lawmakers is limited by its internal contradictions and failure to resolve key tensions between different approaches to important topics like children’s online safety. It rightly asserts that the government should not force AI companies to ban or modify content based on “partisan or ideological agendas,” and yet the administration’s executive order on “Woke AI” did just that this summer. On preemption, the framework asserts that states should not be allowed to regulate AI development, but at the same time correctly notes that federal law should not infringe on states’ traditional powers to enforce their own laws against AI developers. States are currently leading the fight to protect Americans from the harms that AI systems can create, and Congress has twice rightly declined to pursue broad preemption.
President Donald Trump has tried to take an active role in how AI is developed and regulated in the United States, with mixed results, mainly because, as Jain notes, Congress has been unwilling to strip states of the right to regulate the technology on their own terms. Without that, it is hard to say how much of the framework will actually make it into federal law.




