The AI Regulation Showdown: States vs. Silicon Valley in Washington’s High-Stakes Game

The Great AI Divide: Why Washington is Ground Zero for the Future of Artificial Intelligence Regulation

Imagine a world where artificial intelligence, a force rapidly reshaping our lives, operates without clear rules. That’s the precarious position the United States finds itself in right now. For the first time, the nation’s capital is on the cusp of making a monumental decision about how to govern AI. But this isn’t just a debate about algorithms and code; it’s a fierce tug-of-war over who wields the power to set those rules.

In the quiet hum of innovation, a significant challenge has emerged. As the federal government has largely been absent in establishing a comprehensive AI safety framework, individual states have taken it upon themselves to protect their citizens. We’re seeing a surge of state-level legislation aimed at mitigating the potential harms of AI. California, a known tech policy trailblazer, has introduced its AI safety bill, SB-53, while Texas has put forth the Responsible AI Governance Act, which specifically targets the intentional misuse of AI systems. These are not minor proposals; they represent a growing wave of state-led efforts to ensure AI benefits society without causing undue harm.

The Tech Titans’ Plea: A Call for a Unified Front (and Fear of the Patchwork)

Unsurprisingly, this state-by-state approach has sent ripples of concern through Silicon Valley’s tech giants and the agile startups that are the lifeblood of the innovation ecosystem. Their argument is potent: a fragmented landscape of diverse state laws creates an unworkable “patchwork” that stifles progress and innovation. Josh Vlasto, co-founder of the pro-AI advocacy group Leading the Future, articulated this sentiment to TechCrunch, warning that such regulations would “slow us in the race against China.” The underlying message is clear: a unified, national approach is essential for American competitiveness.

This perspective has found fertile ground within the halls of power, including among some influential voices in the White House. The industry, backed by significant financial resources and lobbying efforts, is pushing for a single federal standard, or none at all. This “all-or-nothing” stance is not just a talking point; it’s a strategic battleground. Emerging efforts are actively working to preempt, or block, states from enacting their own AI legislation.

The NDAA Gambit and the Leaked EO: A Federal Power Play

Evidence of this push is mounting. Reports indicate that lawmakers are exploring the possibility of attaching language to the National Defense Authorization Act (NDAA) – a critical piece of legislation – that would effectively prohibit states from enacting their own AI laws. This maneuver is a high-stakes political play, leveraging a must-pass bill to achieve a regulatory goal.

Adding fuel to the fire, a leaked draft of a White House Executive Order (EO) has also revealed a strong inclination towards preempting state-level AI regulations. This draft EO reportedly outlines strategies to challenge state laws in court, direct federal agencies to scrutinize and potentially invalidate state regulations deemed “onerous,” and encourage federal agencies like the FCC and FTC to establish national standards that would supersede state rules.

This broad preemption strategy, which would effectively strip states of their right to regulate AI, is not sitting well with everyone in Congress. The idea of a sweeping moratorium on state authority faced significant opposition, with a previous vote demonstrating overwhelming disapproval of similar measures earlier this year. Many lawmakers argue that without a robust federal standard in place, blocking states will leave consumers vulnerable and allow tech companies to operate without adequate oversight.

The Power Brokers: Who’s Driving the Preemption Agenda?

A closer look at the leaked EO reveals some intriguing details about who might be influencing the direction of AI policy. Notably, the draft EO suggests that David Sacks – identified as Trump’s AI and Crypto Czar and a co-founder of the venture capital firm Craft Ventures – would have co-lead authority in shaping a uniform legal framework. This would grant Sacks significant influence, potentially superseding the traditional role of the White House Office of Science and Technology Policy and its director, Michael Kratsios.

Sacks has been an outspoken proponent of blocking state-level AI regulation and advocating for minimal federal oversight, often favoring industry self-regulation as the path to “maximize growth.” His stance aligns closely with the broader AI industry’s push to avoid what they perceive as burdensome and innovation-hindering state-specific rules.

The ‘Patchwork’ Argument: Innovation vs. Protection

The argument that state regulations create an unworkable “patchwork” is a central tenet of the tech industry’s lobbying efforts. Several well-funded pro-AI Super PACs have emerged, injecting substantial capital into state and local elections to oppose candidates who champion AI regulation. Leading the Future, a PAC backed by prominent figures and firms like Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale, has reportedly raised over $100 million.

This week alone, Leading the Future launched a $10 million campaign aimed at persuading Congress to adopt a national AI policy that preempts state laws. “When you’re trying to drive innovation in the tech sector, you can’t have a situation where all these laws keep popping up from people who don’t necessarily have the technical expertise,” Vlasto told TechCrunch. He reiterated the concern that a fragmented regulatory environment will hinder America’s competitive edge against other nations, particularly China.

Nathan Leamer, executive director of Build American AI, the advocacy arm of Leading the Future, openly supports preemption, even in the absence of specific federal consumer protection laws for AI. Leamer argues that existing legal frameworks, such as those governing fraud and product liability, are sufficient to address AI-related harms. His preferred approach is a reactive one: allow companies to innovate rapidly, and address problems in court as they arise, rather than implementing preventative measures.

No Preemption Without Representation: The States’ Stand

This push for federal preemption is encountering significant resistance, particularly from lawmakers and officials who believe states are crucial players in safeguarding citizens. Alex Bores, a New York Assembly member running for Congress, is one of the prominent figures targeted by Leading the Future. Bores sponsored the RAISE Act, legislation requiring large AI labs to develop safety plans to prevent critical harms. He emphasizes the importance of responsible innovation: “I believe in the power of AI, and that is why it is so important to have reasonable regulations,” Bores told TechCrunch. “Ultimately, the AI that’s going to win in the marketplace is going to be trustworthy AI, and often the marketplace undervalues or puts poor short-term incentives on investing in safety.”

While Bores supports a national AI policy, he firmly believes that states can and should be able to act more swiftly to address emerging risks. The data supports this. As of November 2025, an impressive 38 states have introduced over 100 AI-related laws this year. These laws primarily focus on critical areas like deepfakes, transparency and disclosure requirements, and the ethical use of AI by government entities. However, it’s worth noting that a recent study indicated that a significant majority (69%) of these state laws do not impose any direct requirements on AI developers themselves.

Congress’s Slow Burn: The Challenge of Federal Action

The pace of legislative action in Congress further underscores the argument that states can move faster. While hundreds of AI bills have been introduced, very few have successfully navigated the legislative process and become law. Representative Ted Lieu (D-CA), a prominent voice in AI policy, has introduced 67 bills referred to the House Science Committee since 2015, only one of which has become law. This statistic highlights the inherent challenges of enacting federal legislation, especially in a field as complex and rapidly evolving as AI.

The opposition to preempting state authority is also gaining traction. A powerful open letter, signed by over 200 lawmakers, has voiced strong opposition to preempting state AI regulations in the NDAA. Their argument is compelling: states serve as “laboratories of democracy” and must “retain the flexibility to confront new digital challenges as they arise.” Furthermore, nearly 40 state attorneys general have also penned an open letter against any ban on state AI regulation, signaling a united front from state legal authorities.

Beyond the ‘Patchwork’: The Real Motive?

Concerns about the “patchwork” argument are also being challenged by experts. Cybersecurity authority Bruce Schneier and data scientist Nathan E. Sanders, authors of “Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship,” contend that the patchwork complaint is often overstated. They point out that AI companies already navigate stricter regulations in the European Union and that most industries successfully operate under varying state laws. The core of the issue, they suggest, may not be about regulatory complexity but rather a desire to avoid accountability.

Crafting a National Standard: A Glimpse into the Future

So, what could a comprehensive federal AI standard look like? Representative Ted Lieu is actively working on a substantial piece of legislation, an over 200-page “megabill,” which he hopes to introduce in December. This bill is designed to address a wide spectrum of issues, including enhanced penalties for fraud, stronger protections against deepfakes, provisions for whistleblower rights, support for academic research through compute resources, and mandatory testing and disclosure requirements for large language model (LLM) companies.

The provision for mandatory testing and disclosure for LLMs is particularly noteworthy. While many AI labs currently conduct these tests voluntarily, Lieu’s proposal would institutionalize the practice, requiring them to publish their findings. However, it’s important to note that Lieu’s bill, as currently envisioned, would not direct federal agencies to review AI models themselves. This distinguishes it from other proposals, such as a bill introduced by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would establish a government-run evaluation program for advanced AI systems before their deployment.

Lieu acknowledges that his bill might not be as stringent as some would prefer, but he believes its pragmatism increases its chances of becoming law. “My goal is to get something into law this term,” Lieu stated, recognizing the significant opposition he faces, particularly from House Majority Leader Steve Scalise, who has expressed skepticism towards AI regulation. Lieu’s strategy is clear: “I’m not writing a bill that I’d have if I were king. I’m trying to write a bill that could pass a Republican-controlled House, a Republican-controlled Senate, and a Republican-controlled White House.” This pragmatic approach highlights the intricate political landscape that any federal AI legislation must navigate.

The coming months will be critical in shaping the future of AI regulation in the United States. The battle lines are drawn between those who advocate for state-level flexibility and consumer protection, and those who champion a unified national approach to foster innovation and maintain global competitiveness. The outcome of this high-stakes debate will not only define the trajectory of AI development but also profoundly impact the safety and rights of every American navigating our increasingly AI-driven world.
