The AI Safety Storm: Voices Clash in Silicon Valley
In the fast-paced world of Artificial Intelligence, where innovation often races ahead of regulation, a significant storm has been brewing. This week, prominent figures in Silicon Valley, including David Sacks, the influential White House AI & Crypto Czar, and Jason Kwon, Chief Strategy Officer at OpenAI, have ignited online debate with their pointed remarks about groups dedicated to AI safety. Their comments suggest that some advocates for AI safety might not be as altruistic as they appear, hinting at ulterior motives or even manipulation by powerful billionaires behind the scenes.
This dramatic turn of events has not gone unnoticed by the AI safety community. Representatives from various AI safety groups, speaking with TechCrunch, have voiced concerns that these allegations are merely the latest tactic in Silicon Valley’s ongoing effort to intimidate its critics. This isn’t an isolated incident; the landscape is littered with similar attempts to stifle dissent.
A History of Intimidation Tactics
Recall the events of 2024, when several venture capital firms were accused of spreading misinformation about California’s SB 1047, an AI safety bill. Rumors circulated wildly, suggesting that founders of AI startups could face jail time if the bill passed. The Brookings Institution, a respected think tank, debunked these claims, labeling them “misrepresentations.” Despite these clarifications, Governor Gavin Newsom ultimately vetoed the bill.
While it remains to be seen whether Sacks and OpenAI’s recent actions were intentionally designed to intimidate, the impact is undeniable. Several AI safety advocates have expressed feeling genuinely intimidated. Many leaders of non-profit organizations, when approached by TechCrunch for comment, requested anonymity, fearing retaliation against their groups.
This controversy starkly highlights the growing chasm within Silicon Valley itself – the fundamental tension between those who prioritize rapid, unfettered AI development for mass consumer appeal and those who advocate for building AI responsibly and with an eye towards potential societal impacts. This complex theme is at the heart of our latest "Equity" podcast episode, where my colleagues Kirsten Korosec, Anthony Ha, and I delve deeper into these critical issues.
Anthropic Under the Microscope: Fear-Mongering or Forewarning?
On Tuesday, David Sacks took to X (formerly Twitter) to voice his strong opinions about Anthropic, a prominent AI lab known for raising concerns about AI’s potential to cause widespread unemployment, facilitate cyberattacks, and even pose catastrophic risks to society. Sacks alleged that Anthropic’s warnings are nothing more than fear-mongering aimed at pushing through legislation that would benefit the company itself and overwhelm smaller startups with bureaucratic hurdles.
His comments came in response to a widely circulated essay by Jack Clark, co-founder of Anthropic. Clark had delivered this essay as a speech at the Curve AI safety conference in Berkeley weeks earlier. For many in the audience, Clark’s words resonated as a sincere expression of a technologist’s reservations about the powerful tools he and his colleagues are creating. However, Sacks interpreted the situation quite differently.
"Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering. It is principally responsible for the state regulatory frenzy that is damaging the startup ecosystem."
— David Sacks (@DavidSacks) October 14, 2025
Sacks characterized Anthropic’s approach as a “sophisticated regulatory capture strategy.” Ironically, a truly sophisticated strategy would presumably avoid alienating an ally as powerful as the federal government; yet in a subsequent post on X, Sacks himself noted that Anthropic has “consistently positioned itself as a foe of the Trump administration.”
It’s important to contextualize Sacks’ critique. Anthropic was the only major AI lab to publicly endorse California’s Senate Bill 53 (SB 53). This bill, which was signed into law last month, mandates safety reporting requirements for large AI companies. Sacks’ assertion implies that Anthropic supported the bill not out of genuine safety concerns, but as a strategic move to shape regulations in its favor.
OpenAI’s Subpoenas: A Search for Transparency or Intimidation?
Meanwhile, OpenAI, a leading name in the AI development space, has also found itself at the center of controversy. Jason Kwon, OpenAI’s Chief Strategy Officer, posted on X this week to explain the company’s decision to issue subpoenas to AI safety non-profits, including Encode. Encode is an organization dedicated to advocating for responsible AI policies.
A subpoena is a legal order requiring the recipient to provide documents or testimony. Kwon stated that following Elon Musk’s lawsuit against OpenAI – which alleges that the company has strayed from its non-profit mission – OpenAI found it suspicious that several organizations also voiced opposition to its recent restructuring.
"There’s quite a lot more to the story than this. As everyone knows, we are actively defending against Elon in a lawsuit where he is trying to damage OpenAI for his own financial benefit. Encode, the organization for which @_NathanCalvin serves as the General Counsel, was one…"
— Jason Kwon (@jasonkwon) October 10, 2025
Kwon elaborated, saying, “This raised transparency questions about who was funding them and whether there was any coordination.”
NBC News reported that OpenAI has indeed sent broad subpoenas to Encode and six other non-profits that have been critical of the company. The requests are for communications related to OpenAI’s prominent critics, Elon Musk and Meta CEO Mark Zuckerberg. OpenAI also sought communications from Encode specifically concerning its support for SB 53.
This action has raised alarm bells within the AI safety community. One prominent AI safety leader, speaking anonymously to TechCrunch, suggested a growing disconnect within OpenAI. This individual pointed to a perceived split between OpenAI’s government affairs team and its research arm. While OpenAI’s safety researchers are known for publishing reports detailing the risks associated with AI systems, the company’s policy unit reportedly lobbied against SB 53, expressing a preference for uniform federal regulations.
Even within OpenAI, there are signs of unease. Joshua Achiam, OpenAI’s head of mission alignment, publicly expressed his discomfort with the company’s subpoena actions on X, stating, “At what is possibly a risk to my whole career, I will say: this doesn’t seem great.”
The Broader Implications: Accountability vs. Innovation
Brendan Steinhauser, CEO of the Alliance for Secure AI, a non-profit that has not been subpoenaed, shared his perspective with TechCrunch. He believes that OpenAI is operating under the assumption that its critics are part of a conspiracy orchestrated by Elon Musk. However, Steinhauser contends that this is not the case. He highlights that a significant portion of the AI safety community is deeply critical of xAI’s safety practices, or the perceived lack thereof.
“On OpenAI’s part, this is meant to silence critics, to intimidate them, and to dissuade other non-profits from doing the same,” Steinhauser asserted. “For Sacks, I think he’s concerned that [the AI safety] movement is growing and people want to hold these companies accountable.”
Sriram Krishnan, the White House’s Senior Policy Advisor for AI and a former general partner at Andreessen Horowitz (a16z), also weighed in. He characterized AI safety advocates as being out of touch, urging them to engage with “people in the real world using, selling, adopting AI in their homes and organizations.”
Navigating Public Perception and Economic Realities
Recent studies offer a glimpse into public sentiment. A Pew study revealed that approximately half of Americans feel more concerned than excited about AI, though the specific nature of these worries remains somewhat vague. A more detailed study indicated that American voters prioritize concerns like job losses and the proliferation of deepfakes over the existential or catastrophic risks that often dominate the AI safety movement’s discourse.
Addressing these public safety concerns could potentially impede the breakneck pace of AI industry growth. This presents a significant dilemma for many in Silicon Valley. Given that AI investment is a substantial pillar supporting America’s economy, the fear of over-regulation is understandable.
However, after years of largely unfettered AI advancement, the AI safety movement appears to be gaining considerable traction as we move towards 2026. Silicon Valley’s increasingly visible attempts to counter the influence of safety-focused groups might, in fact, be a testament to the growing effectiveness of these very movements.
This ongoing struggle underscores a pivotal moment for artificial intelligence. It’s a debate that pits the allure of rapid innovation and economic growth against the crucial need for responsible development and the mitigation of potential harms. The decisions made today, and the voices amplified or silenced, will undoubtedly shape the future of this transformative technology and its impact on society.