
The Unexpected GOP Civil War Over AI: When States’ Rights Meet Silicon Valley
The Republican Party is confronting an unfamiliar fracture, and it’s not over taxes, immigration, or foreign policy. The battleground is artificial intelligence, and the combatants are red-state lawmakers who want to protect their constituents versus a White House determined to clear the runway for Big Tech.
The Irony of Federal Overreach
For decades, conservatives have championed federalism as a cornerstone principle. States serve as laboratories of democracy, the argument goes, free to craft policies that reflect their citizens’ values without Washington micromanagement. But when it comes to AI regulation, the Trump administration has flipped the script, aggressively pressuring Republican state legislators to abandon bills designed to protect children and establish basic transparency requirements for AI companies.
The December 2025 executive order “Ensuring a National Policy Framework for Artificial Intelligence” established an AI Litigation Task Force within the Department of Justice with a singular mission: challenge state AI laws that conflict with federal priorities. The order also authorized withholding billions in federal broadband funding from states that pass regulations deemed “onerous” by the administration.
What makes this particularly striking is that the first state to face significant pushback isn’t a liberal bastion like California or New York. It’s Utah, a reliably red state that has been a national leader in technology governance.
Utah’s HB 286: The Flashpoint
Republican state Representative Doug Fiefia’s Artificial Intelligence Transparency Act would have required large AI developers to publish safety and child-protection plans, disclose risk assessments, and establish whistleblower protections for employees who report safety concerns. The bill enjoyed bipartisan support and seemed aligned with conservative principles of transparency and accountability.
Then came the letter. On February 12, 2026, the White House Office of Intergovernmental Affairs sent a one-line memo to Utah’s Senate Majority Leader: “We are categorically opposed to Utah HB 286 and view it as an unfixable bill that goes against the Administration’s AI Agenda.”
The letter provided no rationale, no specific objections, and no suggested amendments: just a flat rejection from federal officials who had previously promised that child safety protections would be exempt from preemption efforts.
The response from child safety advocates was swift. The Utah-based Digital Childhood Institute purchased digital billboards in downtown Salt Lake City aimed directly at White House AI czar David Sacks, reading: “Hey, David Sacks. Stay away from our AI transparency bills.”
More pointedly, a coalition of parents who have lost children to AI-related harms wrote to Utah’s Republican leadership, urging them to “stand up to David Sacks” and “his idea that the American people are less valuable than AI companies.”
Florida’s DeSantis Problem
The friction extends well beyond Utah. In Florida, Governor Ron DeSantis has made AI regulation a centerpiece of his 2026 legislative agenda, proposing an “Artificial Intelligence Bill of Rights” that would ban companion chatbots from communicating with minors without parental consent, require AI systems to identify themselves as non-human, and protect consumers from unauthorized use of their name, image, or likeness.
“I really fear that if this is not addressed in an intelligent and proper way, it could set off an age of darkness and deceit,” DeSantis said during a February roundtable. The governor has been unambiguous about his concerns: tech companies have prioritized profits over user safety, and states have both the right and responsibility to act.
The Florida Senate agreed, passing SB 482 by a 35-2 margin. But the bill is dead anyway. House Speaker Daniel Perez, aligning with the White House, has refused to bring it to the floor. “The White House position on AI and the House’s position on AI have both been pretty clear publicly,” Perez told reporters. “We do believe that the federal government should take care of AI, and whatever legislation or policy has to pass on a national level, as opposed to doing it on a state basis.”
The public clash between a sitting Republican governor and a Trump-aligned House speaker over states’ rights versus federal preemption represents a remarkable inversion of traditional conservative coalitions.
Conservative Critics Find Their Voice
The backlash against the administration’s approach has united an unusual coalition of voices. Steve Bannon, hardly a tech industry sympathizer, has devoted segments of his podcast to criticizing the executive order as “tech bros doing upmost to turn POTUS MAGA base away from him while they line their pockets.”
More than 50 Republican state lawmakers signed a letter to President Trump expressing deep concern “by the work of officials seeking to pressure lawmakers in Utah and other states to abandon legislation aimed at mitigating risks at leading AI labs and safeguarding constituents, including young people, from AI’s worst harms.” The letter emphasized that “state-led efforts are fully consistent with conservative principles.”
The Heritage Foundation’s Center for Technology and the Human Person has criticized the preemption strategy as “ahistorical,” noting there’s no precedent for such sweeping federal restrictions on state lawmaking without establishing replacement standards.
Even Utah Governor Spencer Cox, who has worked to position his state as AI-friendly, pushed back at a recent governors’ summit: “It’s one thing if we’re fighting China, and you’re developing your model. But once you start selling sexualized chatbots to kids in my state, now I have a problem with that, and I’m going to get involved there, and the Supreme Court is going to back me up.”
The Human Cost
Behind the policy debate are real tragedies. Jennie DeSerio, an Ogden, Utah mother, has become a prominent advocate after losing her son Mason to suicide in 2022. She believes TikTok’s AI-driven algorithm fed him increasingly harmful content over a 13-day spiral. “There is not a parent in America that can outsmart an algorithm,” DeSerio told reporters. “Parents do not stand a chance.”
Her story is not unique. Parents of children harmed by AI chatbots, including cases where systems encouraged suicidal ideation, have been petitioning state legislators across the country. Their message is simple: if Congress won’t act, states must.
“We know exactly what it looks like when a powerful industry moves fast and dismisses concern because they are counting on no one being held responsible,” a group of affected parents wrote in an open letter. “We know where that road ends for families.”
The Patchwork: Where State AI Laws Stand Today
In the absence of comprehensive federal AI legislation, states have moved aggressively to fill the vacuum. According to the National Conference of State Legislatures, over 1,000 AI-related bills were introduced across all 50 states in 2025 alone, with 38 states adopting or enacting around 100 AI-related measures.
The result is a complex regulatory patchwork that varies dramatically by jurisdiction. Here’s where major state AI regulations currently stand:
Comprehensive AI Governance Laws
Colorado AI Act (SB 24-205) — Effective June 30, 2026, and the first comprehensive state AI law in the United States, Colorado’s legislation targets “high-risk” AI systems used in consequential decisions affecting employment, education, housing, insurance, and lending. The law requires developers and deployers to exercise reasonable care to prevent algorithmic discrimination, conduct impact assessments, and provide consumer disclosures. Penalties can reach $20,000 per violation. Notably, this is the only state law specifically named in Trump’s December 2025 executive order as an example of “excessive State regulation.”
California Transparency in Frontier AI Act (SB 53) — Effective January 1, 2026, this act targets large “frontier developers” with annual revenue exceeding $500 million and AI models trained using more than 10^26 floating-point operations (FLOPs). The law requires disclosure of risk management protocols, transparency reports about frontier models, reporting of critical safety incidents, and whistleblower protections. Penalties can reach $1 million per violation.
Texas Responsible Artificial Intelligence Governance Act — Effective January 1, 2026, this act applies broadly to developers and deployers of AI systems operating in Texas. The law prohibits “restricted purposes” including encouragement of self-harm, violence, or criminality; creation of AI-generated child sexual abuse material; unlawful deepfakes; and communications impersonating minors in explicit contexts.
New York RAISE Act — Effective January 1, 2027, New York’s frontier AI safety law establishes requirements for the most capable AI systems, including risk assessment and mitigation measures.
AI Transparency and Disclosure Laws
California AI Transparency Act (SB 942) — Effective August 2, 2026, this act requires AI systems with more than 1 million monthly California visitors to implement comprehensive measures disclosing when content has been generated or modified by AI. It also mandates AI detection tools and content disclosures, with penalties of $5,000 per violation per day.
California Generative AI Training Data Transparency Act (AB 2013) — Effective January 1, 2026, this act requires developers of generative AI systems intended for public use in California to publish high-level information about training data, including dataset summaries, intellectual property and privacy flags, and processing history.
AI Employment Laws
Illinois AI Video Interview Act — Effective January 1, 2020 (amended), this act requires employers to notify candidates when AI analyzes video interviews. Candidates must provide consent before AI-based evaluation, with data retention and destruction requirements.
New York City Local Law 144 — Effective July 5, 2023, this act requires annual bias audits of automated employment decision tools and mandates disclosure to candidates when such tools are used in hiring decisions.
Chatbot and Companion AI Regulations
California Companion Chatbots Act (SB 243) — Effective January 1, 2026, this act mandates chatbot disclosures, safety protocols against suicidal/harmful content, and protections for minors including content limits and break reminders.
Utah AI Mental Health Chatbot Regulations — With various effective dates, these regulations establish rules for AI chatbots used in mental health contexts and expand protections against AI abuse of personal identity.
Maine, New Hampshire, and New York have also enacted chatbot-specific laws emphasizing transparency and safety protocols, particularly for mental health and emotional companionship use cases.
Voice and Likeness Protection
Tennessee ELVIS Act — Effective July 1, 2024, this was the first state law protecting musicians and individuals from unauthorized AI voice cloning, expanding existing right of publicity protections to explicitly include voice protection against AI-generated replicas.
Pending Legislation
Multiple states have AI bills currently in various stages of consideration, including Ohio’s bill banning AI from legal personhood and numerous child safety measures in states from Texas to Georgia. The March 11, 2026, deadline for the Commerce Department’s evaluation of “onerous” state AI laws looms large over all pending legislation.
Trump’s Executive Order: The Preemption Push
On December 11, 2025, President Trump signed “Ensuring a National Policy Framework for Artificial Intelligence,” arguably the most aggressive federal attempt to constrain state AI regulation in history. The order represents a dramatic departure from traditional federalism principles and has drawn fire from conservatives and liberals alike.
Key Provisions
AI Litigation Task Force: The order directs the Attorney General to establish a task force within the Department of Justice “whose sole responsibility” is to challenge state AI laws viewed as inconsistent with national policy. The task force can pursue litigation on grounds including unconstitutional regulation of interstate commerce, federal preemption, or other conflicts with federal law.
State Law Evaluation: The Secretary of Commerce must publish, by March 11, 2026, an evaluation identifying state AI laws that conflict with federal policy and merit referral to the litigation task force. The evaluation must flag laws requiring AI models to “alter their truthful outputs” or compelling disclosures that would violate the First Amendment.
Funding as Leverage: The order instructs the Commerce Department to condition $42 billion in broadband infrastructure funding (the BEAD program) on states’ willingness to repeal AI regulations deemed onerous. Federal agencies are also directed to consider conditioning discretionary grants on states refraining from enacting or agreeing not to enforce conflicting AI laws.
FCC and FTC Directives: The Federal Communications Commission is ordered to consider adopting a federal AI reporting and disclosure standard that would preempt state laws. The Federal Trade Commission must issue a policy statement classifying state-mandated bias mitigation as potentially constituting “deceptive” trade practices.
Explicit Carve-Outs
The order does exempt certain categories from preemption efforts: child safety protections, AI compute and data center infrastructure (except for generally applicable permitting reforms), state government procurement and use of AI, and “other topics as later determined.”
However, the Utah HB 286 situation demonstrates how narrowly the administration interprets these carve-outs. Despite being framed around child protection, the bill was deemed “unfixable.”
Constitutional Questions
Legal scholars have raised significant questions about the order’s enforceability. Executive orders generally cannot preempt state law without congressional authorization. The constitutional basis for threatening federal funding over state AI policy remains untested, and the order’s scope may exceed traditional Commerce Clause boundaries.
A coalition of 36 state attorneys general has already signaled resistance, urging Congress to oppose proposals that would restrict states from enacting or enforcing laws addressing AI risks. The National Association of Attorneys General has warned that “broad federal preemption would undermine states’ ability to respond quickly and effectively to emerging AI risks.”
The Global Picture: AI Regulation Without Borders
AI technology operates globally, and American tech companies face regulatory pressures far beyond U.S. borders. While the Trump administration pushes for little or no regulation domestically, other major powers are moving in very different directions.
The European Union: Rights First
The EU AI Act, which entered into force on August 1, 2024, represents the world’s first comprehensive legal framework for artificial intelligence. The regulation takes a risk-based approach, categorizing AI systems by potential harm and imposing escalating obligations accordingly.
Prohibited AI Practices (in effect since February 2, 2025): The EU has banned AI systems deemed to pose “unacceptable risk,” including social scoring systems, emotion recognition in workplaces and schools, and real-time remote biometric identification in public spaces except for limited law enforcement purposes.
High-Risk System Requirements (effective August 2, 2026): AI systems affecting fundamental rights—including those used in employment, education, law enforcement, border control, and critical infrastructure—must meet strict requirements for risk management, data governance, transparency, human oversight, accuracy, and cybersecurity. Conformity assessments and CE marking are required before market placement.
Transparency Obligations (effective August 2, 2026): AI systems that interact with humans or generate synthetic content must disclose their AI nature. The Commission is developing codes of practice for marking and labeling AI-generated content.
Penalties: Non-compliance with prohibited practices can result in fines up to €35 million or 7% of worldwide annual turnover. Violations of high-risk requirements carry fines up to €15 million or 3% of turnover.
The European Commission’s November 2025 “Digital Omnibus” proposal has introduced some flexibility, potentially linking certain August 2026 deadlines to the availability of harmonized standards and guidance tools. But the fundamental architecture of comprehensive, binding regulation remains intact.
China: State/Party Control
China has also moved rapidly to establish AI governance, but with a focus on state control rather than individual rights. The country was actually the first to implement binding regulations on generative AI when its Interim Measures for the Management of Generative Artificial Intelligence Services took effect in August 2023.
AI Content Labeling (effective September 1, 2025): China’s Cyberspace Administration requires mandatory labeling of AI-generated content, including both explicit labels visible to users and implicit labels embedded in metadata. This includes requirements for audio Morse codes, encrypted metadata, and VR-based labeling systems.
Generative AI Registration: Service providers offering generative AI services with “public opinion attributes or social mobilization capabilities” must conduct security assessments and register their large language models with the Cyberspace Administration of China.
Anthropomorphic AI Regulation (draft December 2025): China released draft rules targeting AI companions and chatbots, addressing addiction and psychological harms from emotional AI interactions. The rules require disclosures that users are interacting with AI and impose break reminders after two hours of continuous use.
Cybersecurity Law Amendments (effective January 1, 2026): New provisions explicitly bring AI under China’s national cybersecurity framework, emphasizing AI ethics, risk monitoring, and safety assessment.
China’s approach prioritizes national security and social stability over either market freedom or individual rights. The government has launched nationwide campaigns against AI misuse, taking down thousands of AI applications and penalizing accounts for violations.
United Kingdom: Innovation-First (For Now)
The UK has taken a notably different path, prioritizing a “pro-innovation” stance that relies on existing regulators and voluntary frameworks rather than comprehensive AI-specific legislation. The Labour government’s 2025 AI Opportunities Action Plan emphasized AI as a driver of economic growth and positioned the UK as “an AI maker, not an AI taker.”
Despite announcements in the 2024 King’s Speech about AI legislation, no comprehensive AI bill has materialized. The government has indicated it may pursue limited regulation targeting developers of “the most powerful” foundation models, but timing remains uncertain, potentially not until late 2026 at earliest.
Individual regulators, including the Information Commissioner’s Office, the Financial Conduct Authority, and the Competition and Markets Authority, have issued sector-specific guidance. The UK’s AI Safety Institute continues to focus on frontier model testing, though without statutory enforcement powers.
At the February 2025 AI Action Summit in Paris, both the UK and the US declined to sign a declaration promoting “inclusive and sustainable” AI endorsed by 60 other countries, citing national security concerns and uncertainty about global governance frameworks.
Brazil: Latin America’s Leader
Brazil is positioning itself as Latin America’s AI governance leader. Bill No. 2338/2023, which passed the Brazilian Senate in December 2024, would establish a risk-based framework similar to the EU AI Act, with categorization of systems into low, high, and “excessive” risk levels.
The proposed law requires algorithmic impact assessments for high-risk systems, establishes transparency and non-discrimination requirements, and creates a National System for AI Regulation and Governance. Unlike the EU, Brazil’s bill includes specific obligations for AI use in the public sector, including restrictions on biometric identification systems in public spaces.
Brazil’s R$ 23 billion AI investment plan (2024-2028) emphasizes “digital sovereignty,” aiming to develop national computing infrastructure and Portuguese-language foundation models to reduce dependence on foreign technology.
Canada: Stalled Progress
Canada’s AI and Data Act (AIDA), proposed as part of Bill C-27, would have established rules for “high-impact” AI systems including impact assessments, bias mitigation, and registration requirements. However, the January 2025 prorogation of Parliament effectively killed the bill, which would need to be reintroduced in a new parliamentary session.
Canada has instead focused on voluntary standards, establishing the AI and Data Standardization Collaborative to develop guidance consistent with both domestic needs and international frameworks.
South Korea and Japan: Recent Movement
South Korea finalized its AI Framework Act in January 2025, strengthening transparency and safety requirements while offering promotional measures for research and development.
Japan enacted the AI Promotion Act in May 2025, a “light touch” regulation encouraging company cooperation with government safety measures and empowering authorities to publicly disclose names of companies that use AI to violate human rights.
Digital Empires: Three Models Collide, and One Converges
Columbia Law professor Anu Bradford, in her influential book Digital Empires: The Global Battle to Regulate Technology, provides a framework for understanding divergent approaches to tech governance. Bradford identifies three competing regulatory models now battling for global influence: the American market-driven model, the Chinese state-driven model, and the European rights-driven model.
The American Market-Driven Model
The traditional U.S. approach, Bradford argues, has been “techno-optimist,” premised on the idea that innovation flourishes best when markets are left relatively unregulated. Free speech protections, light-touch government intervention, and trust in industry self-regulation have characterized American tech policy for decades.
This model assumes that unfettered technological progress is inherently beneficial, that market competition will discipline bad actors, and that the costs of regulation outweigh the risks of harm. Silicon Valley’s global dominance emerged from this environment. The primary rights protected are those of companies: to innovate, to compete, to operate without government interference. There is a presumption that innovation and regulation are mutually exclusive. Bradford does not agree with this presumption, and neither do I.
The European Rights-Driven Model
The EU has positioned itself as the champion of “trustworthy” AI governed by fundamental rights protections. The AI Act’s risk-based architecture, prohibitions on certain practices, and emphasis on human oversight reflect a vision where technology serves human dignity rather than the reverse.
Here, individual rights take precedence: privacy, data protection, non-discrimination, transparency, the right to human review of consequential decisions. Companies must demonstrate compliance; the burden falls on industry to prove its systems don’t harm people. Bradford argues that this model has gained traction among democratic nations seeking an alternative to both American corporate dominance and Chinese state control.
The Chinese State-Driven Model
China’s approach places the state at the center of technological governance; its top priority is not to protect markets or individuals but to serve the interests of the regime itself. AI development advances national objectives; companies operate within boundaries set by government priorities around security, stability, and social control.
Critically, the Chinese model involves direct entanglement between the Communist Party and major tech companies. Xi Jinping’s government takes equity stakes in tech champions, requires that data collected by companies be shared with state security services, and directs corporate strategy toward national objectives. Companies like Alibaba, Tencent, and ByteDance exist within a framework where party interests and corporate interests are deliberately fused. The companies prosper, but they prosper as instruments of state power. The state prospers too, gaining access to data, capabilities, and wealth that entrench its authority.
Bradford notes that this model holds surprising appeal for authoritarian and authoritarian-leaning governments worldwide. Countries concerned about crime, social disorder, or political opposition may find Chinese surveillance-oriented AI governance attractive, despite American warnings about its implications for freedom.
“It’s the Chinese model that holds greater appeal,” Bradford observed, noting that the technology enables control in ways that governments find useful, and that China has demonstrated you can have innovation and authoritarianism simultaneously.
The Trump Administration: An American Convergence Toward the State Model
What makes the current American moment so striking is not that the Trump administration is defending market freedom against government overreach. It’s that the administration appears to be moving the United States toward something resembling the Chinese state-driven model, while using the rhetoric of deregulation to obscure the shift.
Consider what’s actually happening: The administration isn’t stepping back from AI governance. It’s consolidating control over it. By preempting state laws, the White House eliminates competing centers of regulatory authority that might impose transparency, accountability, or limits on how AI systems affect individuals. By threatening federal funding and mobilizing the Justice Department against states that act independently, it establishes that the federal executive branch alone will determine the rules. By installing figures like Elon Musk in government roles while they simultaneously run AI companies, it blurs the line between public authority and private interest in ways that would be familiar in Beijing.
The parallels to China’s approach are uncomfortable but hard to ignore:
Data as a resource for state power. China requires tech companies to share data with government security services. The Trump administration has shown intense interest in accessing data held by tech platforms, and in ensuring that AI capabilities developed by American companies can serve administration objectives, from immigration enforcement to information operations.
Equity and alignment. Xi’s government takes ownership stakes in companies it helps grow, ensuring the party benefits from their success. While the mechanisms differ, the Trump administration’s relationships with tech executives involve their own forms of exchange: regulatory relief, government contracts, and political protection in return for alignment, access, and support. Wealth and access flow in both directions.
Elimination of competing oversight. China tolerates no independent regulatory authority over its tech sector; the party is the sole arbiter. By crushing state-level AI regulation and concentrating authority in the executive branch, the Trump administration moves toward a similar monopoly on governance, one where neither state governments, nor Congress, nor independent agencies impose meaningful constraints. The trajectory points toward an ever-larger consolidation of power in the Trump administration.
AI as an instrument of regime power. China explicitly views AI as a tool for maintaining social control, conducting surveillance, and projecting power internationally. The Trump administration’s framing of AI as essential to “dominance” and “national security” reflects a similar view: AI is not primarily about economic growth or individual benefit, but about making the state more powerful relative to rivals abroad and to its own population at home.
This represents a fundamental departure from the American market-driven tradition Bradford describes. It’s not deregulation. It’s regulation with power concentrated in the executive branch and exercised in partnership with favored companies rather than through transparent, democratically accountable processes.
The preemption of state AI laws isn’t federalism in any traditional sense. It’s consolidation. By eliminating California’s transparency requirements, Colorado’s anti-discrimination rules, and Utah’s child safety measures, the administration removes accountability mechanisms that might interfere with a tighter federal-corporate partnership and profit. The data flows freely to aligned companies; the administration gains access to AI capabilities that enhance its power; state governments, civil society, and individuals lose any meaningful voice while their data and tracked activities make this federal-corporate quasi-state even stronger.
As Bradford warns, liberal democracies face a troubling possibility: that they may prove unable to govern AI effectively, leaving the field to either authoritarians or the tech companies themselves. “Unless the US and the EU can overcome those hurdles that they are facing,” she observes, “they’ll need to conclude that digital economies are either governed by authoritarians, or that the digital economy is governed by tech companies, that the tech companies are the only way to govern in the democratic societies.”
The Trump administration may be offering a third answer: what if the authoritarians and the tech companies govern together, with democratic accountability removed from the equation entirely? That’s not a hybrid of Bradford’s models. It’s a convergence toward China’s, executed in an American vernacular, wrapped in the language of innovation and freedom, but structurally moving toward state-corporate fusion where the regime and its allied companies share power, data, and profit at the expense of everyone else.
The irony is sharp. An administration that frames China as America’s greatest rival is quietly adopting Beijing’s playbook for governing technology. And the Republican state legislators fighting to protect their constituents from unaccountable AI systems may understand something their national leadership does not: that the threat to American liberty isn’t just foreign. It’s being built at home.
What Happens Next
The administration’s March 11 deadline for identifying “onerous” state AI laws looms. The Commerce Department’s evaluation will determine which state laws merit referral to the AI Litigation Task Force for potential legal challenge. California’s frontier AI transparency law and Colorado’s algorithmic discrimination statute—specifically named in the original executive order—appear to be prime targets.
But the unexpected intensity of pushback from Republican-controlled states may complicate the administration’s strategy. Legal scholars note that executive orders cannot preempt state law without congressional authorization, and the constitutional basis for threatening states’ broadband funding over AI policy remains untested.
For state legislators, the calculus is uncomfortable. “People are trying to figure out at the state level, ‘What if we called their bluff?'” one GOP strategist told reporters. Several state attorneys general are reportedly considering legal challenges to the executive order itself.
Meanwhile, the global regulatory landscape continues to evolve independently of American domestic politics. The EU AI Act’s high-risk requirements become enforceable in August 2026. China continues expanding its AI governance framework. Brazil, South Korea, Japan, and others are advancing their own approaches.
American companies building AI systems for global markets cannot simply wait for federal-state conflicts to resolve. They face compliance obligations in the EU regardless of what happens in Utah or Florida. And the longer the U.S. delays establishing coherent national policy, the more European and Chinese standards may become de facto global norms.
The Deeper Question
The AI regulation debate exposes a tension that extends beyond technology policy. The Republican Party is navigating what happens when traditional conservative principles—states' rights, protecting children, skepticism of concentrated corporate power—collide with the priorities of influential tech industry allies and an administration committed to rapid AI deployment.
Utah’s Senator Tom Leek captured the frustration when discussing his own AI legislation: “If your plan is to wait for Congress, God help you.”
The states aren’t waiting. Whether the federal government can stop them remains an open question, one that may ultimately be settled not in state capitals or the Oval Office, but in federal court.
And as Anu Bradford’s framework suggests, the outcome matters far beyond American borders. In a world where digital empires are competing to set the norms governing artificial intelligence, the question of who writes the rules, and whether liberal democracies can write them at all, will shape the future of both technology and governance for decades to come.
Further Reading
The GOP Internal Conflict Over State AI Laws
“GOP lawmakers urge Donald Trump to let states pass AI laws,” The Hill, March 3, 2026 https://thehill.com/policy/technology/5764328-gop-state-lawmakers-white-house-ai-laws/
“White House puts red state AI laws under scrutiny,” Axios, March 6, 2026 https://www.axios.com/2026/03/06/white-house-red-state-ai-laws-scrutiny
“Scoop: White House pressures Utah lawmaker to kill AI transparency bill,” Axios, February 15, 2026 https://www.axios.com/2026/02/15/white-house-utah-ai-transparency-bill
“Utah billboards call out David Sacks over AI bill,” Axios, February 26, 2026 https://www.axios.com/2026/02/26/utah-billboards-david-sacks-ai-bill
“A mother’s fight against AI after son’s suicide,” Deseret News, March 3, 2026 https://www.deseret.com/politics/2026/03/03/trump-white-house-pressures-utah-lawmakers-to-back-off-ai-transparency-law-as-parents-call-for-state-regulations/
“Utah wants safe AI, not White House interference,” Deseret News, February 25, 2026 https://www.deseret.com/opinion/2026/02/25/utah-voters-want-safe-ai-hb286-white-house-big-tech/
“Trump is pressuring Utah on an AI bill. Gov. Cox says states should lead on policy,” KUER, February 19, 2026 https://www.kuer.org/politics-government/2026-02-19/trump-is-pressuring-utah-on-an-ai-bill-gov-cox-says-states-should-lead-on-policy
“How Trump’s Bid to Crush State AI Laws Splits His Own Party,” Time, December 17, 2025 https://time.com/7341296/republican-backlash-trump-ai-executive-order/
“DeSantis’ AI Bill of Rights clears Senate, but House won’t touch it,” Florida Phoenix, March 4, 2026 https://floridaphoenix.com/2026/03/04/desantis-ai-bill-of-rights-clears-senate-but-house-wont-touch-it/
“Florida Senate approves ‘AI Bill of Rights’ as it remains halted in the House,” WUSF, March 6, 2026 https://www.wusf.org/politics-issues/2026-03-06/florida-senate-approves-ai-bill-of-rights-remains-halted-in-house
“States will keep pushing AI laws despite Trump’s efforts to stop them,” Stateline, December 12, 2025 https://stateline.org/2025/12/12/states-will-keep-pushing-ai-laws-despite-trumps-efforts-to-stop-them/
“On AI and data centers, state lawmakers find bipartisan agreement,” NPR, February 26, 2026 https://www.npr.org/2026/02/26/nx-s1-5726431/data-centers-ai-trump-housing-states
Trump’s Executive Order on AI
“Ensuring a National Policy Framework for Artificial Intelligence,” The White House, December 11, 2025 https://www.whitehouse.gov/presidential-actions/2025/12/ensuring-a-national-policy-framework-for-artificial-intelligence/
“President Trump Signs Executive Order Challenging State AI Laws,” Paul Hastings LLP https://www.paulhastings.com/insights/client-alerts/president-trump-signs-executive-order-challenging-state-ai-laws
“AI Executive Order Targets State Laws and Seeks Uniform Federal Standards,” Latham & Watkins https://www.lw.com/en/insights/ai-executive-order-targets-state-laws-and-seeks-uniform-federal-standards
“Executive Order Targets State AI Regulation Through Federal Preemption,” McGuireWoods Consulting, January 20, 2026 https://mwcllc.com/2026/01/20/executive-order-targets-state-ai-regulation-through-federal-preemption/
“President Trump Signs Executive Order Preempting State AI Laws and Centralizing Federal Oversight,” Seyfarth Shaw LLP https://www.seyfarth.com/news-insights/president-trump-signs-executive-order-preempting-state-ai-laws-and-centralizing-federal-oversight.html
“Unpacking the December 11, 2025 Executive Order: Ensuring a National Policy Framework for Artificial Intelligence,” Sidley Austin LLP, December 23, 2025 https://www.sidley.com/en/insights/newsupdates/2025/12/unpacking-the-december-11-2025-executive-order
“State AI laws under federal scrutiny: Key takeaways from the executive order establishing federal AI policy framework,” White & Case LLP https://www.whitecase.com/insight-alert/state-ai-laws-under-federal-scrutiny-key-takeaways-executive-order-establishing
“President Trump Targets State AI Regulations,” The Regulatory Review, February 26, 2026 https://www.theregreview.org/2026/02/26/champagne-president-trump-targets-state-based-ai-regulations/
State AI Laws and Regulations
“US State AI Governance Legislation Tracker,” International Association of Privacy Professionals (IAPP) https://iapp.org/resources/article/us-state-ai-governance-legislation-tracker
“U.S. Artificial Intelligence Law Update: Navigating the Evolving State and Federal Regulatory Landscape,” Baker Botts, January 2026 https://www.bakerbotts.com/thought-leadership/publications/2026/january/us-ai-law-update
“New State AI Laws are Effective on January 1, 2026, But a New Executive Order Signals Disruption,” King & Spalding https://www.kslaw.com/news-and-insights/new-state-ai-laws-are-effective-on-january-1-2026-but-a-new-executive-order-signals-disruption
“From Proposal to Passage: Enacted U.S. AI Laws, 2023–2025,” Future of Privacy Forum https://fpf.org/blog/from-proposal-to-passage-enacted-u-s-ai-laws-2023-2025/
“Comprehensive List of State AI Laws,” Stack Cyber https://stackcyber.com/posts/ai-state-laws
“US state-by-state AI legislation snapshot,” Bryan Cave Leighton Paisner https://www.bclplaw.com/en-US/events-insights-news/us-state-by-state-artificial-intelligence-legislation-snapshot.html
“Artificial Intelligence Regulations: State and Federal AI Laws 2026,” Drata https://drata.com/blog/artificial-intelligence-regulations-state-and-federal-ai-laws-2026
EU AI Act
“AI Act,” European Commission Digital Strategy https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
“Timeline for the Implementation of the EU AI Act,” AI Act Service Desk, European Commission https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act
“Implementation Timeline,” EU Artificial Intelligence Act (Future of Life Institute) https://artificialintelligenceact.eu/implementation-timeline/
“EU AI Act 2026 Updates: Compliance Requirements and Business Risks,” Legal Nodes https://www.legalnodes.com/article/eu-ai-act-2026-updates-compliance-requirements-and-business-risks
“EU AI Act News 2026: Compliance Requirements & Deadlines,” Axis Intelligence, December 23, 2025 https://axis-intelligence.com/eu-ai-act-news-2026/
“EU AI Act High-Risk Requirements: What Companies Need to Know,” Dataiku https://www.dataiku.com/stories/blog/eu-ai-act-high-risk-requirements
China AI Regulation
“AI Watch: Global regulatory tracker, China,” White & Case LLP https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china
“Global AI Governance Law and Policy: China,” IAPP https://iapp.org/resources/article/global-ai-governance-china
“AI laws and regulations in China,” CMS Expert Guide https://cms.law/en/int/expert-guides/ai-regulation-scanner/china
“AI Tracker Mainland China,” Herbert Smith Freehills https://www.hsfkramer.com/insights/reports/ai-tracker/prc
“China Is Worried About AI Companions. Here’s What It’s Doing About Them,” Carnegie Endowment for International Peace, February 2026 https://carnegieendowment.org/russia-eurasia/research/2026/02/china-is-worried-about-ai-companions-heres-what-its-doing-about-them
“Navigating China’s AI Regulatory Landscape in 2025: What Businesses Need to Know,” Securiti https://securiti.ai/china-ai-regulatory-landscape/
“China’s Key Developments in Artificial Intelligence Governance in 2025,” ICLG Telecoms, Media & Internet, December 15, 2025 https://iclg.com/practice-areas/telecoms-media-and-internet-laws-and-regulations/03-china-s-key-developments-in-artificial-intelligence-governance-in-2025
UK AI Regulation
“AI Tracker UK,” Herbert Smith Freehills https://www.hsfkramer.com/insights/reports/ai-tracker/uk
“Global AI Governance Law and Policy: United Kingdom,” IAPP https://iapp.org/resources/article/global-ai-governance-uk
“AI regulation in 2025: UK’s regulation,” Bryan Cave Leighton Paisner, January 28, 2025 https://perspectives.bclplaw.com/emerging-themes/creating-connections/technology/ai-in-2025-will-the-UKs-regulation-keep-up-or-be-left-behind/
“UK tech and digital regulatory policy in 2026,” Taylor Wessing, December 1, 2025 https://www.taylorwessing.com/en/interface/2025/predictions-2026/uk-tech-and-digital-regulatory-policy-in-2026
“AI regulation in the UK: The role of the regulators,” Bird & Bird https://www.twobirds.com/en/insights/2026/uk/ai-regulation-in-the-uk-the-role-of-the-regulators
Brazil and Global AI Regulation
“Brazil Artificial Intelligence Act 2025: AI Governance & Compliance Guide,” Adeptiv https://adeptiv.ai/brazil-artificial-intelligence-act/
“Brazil AI Act,” Artificial Intelligence Act https://artificialintelligenceact.com/brazil-ai-act/
“Artificial Intelligence 2025, Brazil: Trends and Developments,” Chambers and Partners https://practiceguides.chambers.com/practice-guides/artificial-intelligence-2025/brazil/trends-and-developments
“What to Expect from Brazil on Tech Policy in 2026,” Tech Policy Press, January 6, 2026 https://www.techpolicy.press/what-to-expect-from-brazil-on-tech-policy-in-2026/
“IAPP Global Legislative Predictions 2026,” IAPP https://iapp.org/resources/article/global-legislative-predictions
“Global AI Law and Policy Tracker: Highlights and takeaways,” IAPP https://iapp.org/news/a/global-ai-law-and-policy-tracker-highlights-and-takeaways
“AI Regulations around the World, 2026,” Mind Foundry https://www.mindfoundry.ai/blog/ai-regulations-around-the-world
“The Updated State of AI Regulations for 2025,” Cimplifi https://www.cimplifi.com/resources/the-updated-state-of-ai-regulations-for-2025/
Anu Bradford and Digital Empires
Digital Empires: The Global Battle to Regulate Technology, Anu Bradford, Oxford University Press, 2023 https://global.oup.com/academic/product/digital-empires-9780197649268
“Digital Empires: The Global Battle to Regulate Technology,” Columbia Law School Faculty Books https://scholarship.law.columbia.edu/books/367/
“Digital Empires: A Conversation with Anu Bradford,” Tech Policy Press, November 29, 2023 https://www.techpolicy.press/digital-empires-a-conversation-with-anu-bradford/
“Anu Bradford on her new book, Digital Empires,” McKinsey Author Talks, November 6, 2023 https://www.mckinsey.com/featured-insights/mckinsey-on-books/author-talks-anu-bradford-discusses-the-race-to-become-the-next-technology-superpower
“Control over AI is one of the most important battles for China, EU, and US, according to digital regulation expert Anu Bradford,” Foreign Correspondents’ Club of Hong Kong https://www.fcchk.org/anu-bradford-digital-empires/
“Book Review: Digital Empires: The Global Battle to Regulate Technology,” LSE Review of Books, June 17, 2024 https://blogs.lse.ac.uk/lsereviewofbooks/2024/06/17/book-review-digital-empires-the-global-battle-to-regulate-technology-anu-bradford/