OpenAI’s Economic Research: Shifting Tides or Strategic Evolution?

In the rapidly evolving landscape of artificial intelligence, the research conducted by leading companies like OpenAI holds immense sway. It not only shapes our understanding of this transformative technology but also influences policy, investment, and public perception. Recent claims, however, suggest a shift within OpenAI's economic research team: the company may be growing hesitant to publish findings that highlight AI's potential downsides, particularly its economic impact.

Four sources, speaking on condition of anonymity to WIRED, have painted a picture of a research unit grappling with internal tensions. They suggest that the team, once known for its open exploration of AI’s multifaceted economic implications, is now encountering a reluctance to disseminate research that casts the technology in a less favorable light. This perceived shift has reportedly contributed to the departure of at least two key members of the economic research team in recent months.

A Researcher’s Departure and the ‘Advocacy Arm’ Dilemma

One such departure is that of Tom Cunningham, who left OpenAI in September. According to individuals familiar with his decision, Cunningham concluded that publishing high-quality, critical research had become an increasingly arduous task. In a candid internal message shared upon his exit, he reportedly articulated a growing conflict between the demands of rigorous academic analysis and the perceived role of the team as a de facto advocacy arm for OpenAI’s broader objectives.

While Cunningham declined to comment directly on his departure, the sentiment he reportedly expressed strikes at the heart of a critical debate within organizations developing powerful technologies: where does objective research end and corporate advocacy begin?

OpenAI’s Response: Responsibility and Solution-Building

OpenAI’s Chief Strategy Officer, Jason Kwon, addressed these concerns internally following Cunningham’s departure. In a memo obtained by WIRED, Kwon acknowledged the validity of grappling with difficult subjects. However, he framed OpenAI’s position not as a prohibition on discussing problems but as a mandate for proactive engagement. As a leading actor introducing AI into the world, Kwon argued, OpenAI has a responsibility to "build the solutions" alongside identifying the challenges.

"My POV on hard subjects is not that we shouldn’t talk about them," Kwon stated in an internal Slack message. "Rather, because we are not just a research institution, but also an actor in the world (the leading actor in fact) that puts the subject of inquiry (AI) into the world, we are expected to take agency for the outcomes."

Expanding Scope or Narrowing Focus?

OpenAI spokesperson Rob Friedlander offered a counter-perspective, emphasizing the company’s commitment to comprehensive economic research. He pointed to the hiring of Aaron Chatterji as OpenAI’s first Chief Economist last year and stated that the team’s scope has since expanded. "The economic research team conducts rigorous analysis that helps OpenAI, policymakers, and the public understand how people are using AI and how it is shaping the broader economy, including where benefits are emerging and where societal impacts or disruptions may arise as the technology evolves," Friedlander explained.

However, the sources speaking to WIRED suggest a divergence between this stated intention and the team’s actual output. They allege that over the past year, a growing reluctance has emerged to release work focusing on AI’s economic downsides, such as potential job displacement, while favoring publications highlighting positive findings.

The Shadow of Corporate Partnerships

This alleged shift in research dissemination occurs against a backdrop of OpenAI's deepening multibillion-dollar partnerships with corporations and governments. As a central player in the global economy, OpenAI wields undeniable influence. The technology it develops promises to revolutionize how we work, but the timeline and extent of that transformation remain subjects of intense speculation and debate.

Historically, OpenAI has been a consistent publisher of research on AI’s impact on labor and has collaborated with external economists. A notable example is the 2023 paper "GPTs Are GPTs," which investigated sectors most vulnerable to automation. However, the recent claims suggest a departure from this previous openness, with a perceived preference for research that casts OpenAI’s technology in a more favorable light.

An external economist who has previously worked with OpenAI, also speaking anonymously, corroborated this sentiment, stating that the company increasingly publishes work that favorably positions its technology.

A Recent Report: Efficiency Gains Highlighted

Indeed, a recent report published by OpenAI showcased findings from enterprise users who claimed their AI products had saved them an average of 40 to 60 minutes per day. The report also suggested that companies have "significant headroom" for increased AI adoption, framing a positive outlook on AI’s economic integration.

This is not the first time OpenAI researchers have voiced concerns about the company's publication practices. Miles Brundage, the former head of policy research, departed in October 2024, saying the company's high profile made it difficult to publish on topics he deemed important. While acknowledging that some constraints are inherent to a high-profile organization, Brundage reportedly felt OpenAI had become excessively restrictive.

Navigating Public Perception and Policy

The implications of this alleged research focus are far-reaching. Publishing statistics that paint a potentially gloomy picture of AI’s economic impact, such as widespread job losses, could indeed complicate OpenAI’s already complex public image. In the United States, for instance, the conversation around AI and jobs is politically charged. While some administrations champion AI’s potential, concerns about job displacement resonate deeply with the public, particularly younger generations. A November survey from the Harvard Kennedy School’s Institute of Politics indicated that roughly 44 percent of young people in the US fear AI will reduce job opportunities.

While it’s common for companies to highlight research that benefits them, the leading AI labs operate with a unique level of autonomy in self-reporting the risks and capabilities of the technologies they are rapidly deploying. The significant lobbying efforts by Silicon Valley, including a reported $100 million campaign, underscore the industry’s desire to shape the regulatory landscape and resist constraints on its development and deployment.

A Contrasting Approach: Anthropic’s Open Warnings

OpenAI’s reportedly cautious stance stands in stark contrast to that of its competitor, Anthropic. Anthropic’s CEO, Dario Amodei, has been notably vocal, warning that AI could automate up to half of entry-level white-collar jobs by 2030. Amodei frames these stark predictions not as doomsaying, but as essential catalysts for public discourse and proactive preparation for workforce transformations. Such warnings, however, have drawn sharp criticism from some political circles, with figures like David Sacks, a White House special advisor for AI and crypto, accusing Anthropic of employing a "sophisticated regulatory capture strategy based on fear-mongering."

The Leadership Behind the Research

Currently, OpenAI’s economic research efforts are spearheaded by Chief Economist Aaron Chatterji, who led the significant September report on global ChatGPT usage. Tom Cunningham is also listed as an author on this report, underscoring his involvement prior to his departure. This report was released months after Anthropic published a similar paper on the usage of its chatbot, Claude.

The organizational structure suggests a close integration of the economic research team with OpenAI’s broader political and policy strategy. Sources indicate that Chatterji reports to Chris Lehane, OpenAI’s Chief Global Affairs Officer. Lehane has a notable background, having previously helped Airbnb navigate regulatory challenges in San Francisco and served in the Clinton administration, where he gained a reputation as a skilled strategist in navigating complex public affairs.

The Broader Context: AI’s Economic Frontier

The questions raised by the alleged shift in OpenAI's research publication practices are not isolated incidents. They speak to a larger, ongoing debate about transparency, accountability, and the ethical development of artificial intelligence. As AI continues its rapid integration into the fabric of our economy and society, understanding its full spectrum of impacts, both positive and negative, is paramount. The rigor and independence of economic research play a crucial role in fostering that understanding, informing decisions for policymakers, businesses, and individuals alike. The pursuit of AI's immense potential must be balanced with a clear-eyed assessment of its challenges, ensuring that progress benefits society broadly and equitably.