Concerned About Authoritarian Trends, Researchers Are Leaving OpenAI in Droves

When technologies advance at full speed, transparency becomes as essential as innovation. In artificial intelligence, it is sometimes the researchers themselves who sound the alarm. At OpenAI, several recent departures do not call the company's technical progress into question; rather, they raise questions about how that progress is framed, presented and, at times, steered. Behind these warning signs, concern is growing about OpenAI's excesses.

Publish or steer: a dilemma now central to AI laboratories

Since its beginnings, OpenAI has cultivated the image of an open, transparent laboratory focused on fundamental research. That model now seems to be crumbling. According to a Wired investigation, at least two members of the economic research team recently left the company, tired of seeing their work repurposed or downplayed when it did not serve the company's interests. One of them, Tom Cunningham, stated internally that his team had become a “propaganda showcase” rather than a genuine research center.

Behind these departures, it is scientific freedom itself that seems to be at stake. While OpenAI's 2023 study “GPTs are GPTs” was praised for its analysis of automation risk, the company's recent publications dwell more on productivity gains than on economic threats. A September 2025 report led by Aaron Chatterji, widely covered in the media, presented ChatGPT as a performance lever for companies without addressing job insecurity or job destruction.

The excesses of OpenAI seen by those who chose to leave

For some employees, the situation was no longer tenable. In a message shared within the company, Tom Cunningham denounced mounting pressure to steer analyses toward flattering conclusions. His accusations were echoed by other former colleagues. Steven Adler, a safety researcher who left OpenAI at the end of 2024, recently expressed his “terror” at the breakneck pace of AI development. Writing in The Guardian, he lamented the lack of any concrete solution to the problem of aligning AI with human values, calling the race toward AGI (human-level AI) a “very risky bet.”

The malaise extends far beyond the economic sphere alone. William Saunders, another former safety researcher, left the company out of concern that commercial imperatives were taking precedence over user protection. Other departures followed just as quietly. Little by little, a deeper unease has set in: many researchers believe their work serves only to validate decisions already made, and that technological and commercial choices no longer take internal warnings into account.

Why this stifling of criticism worries observers well beyond OpenAI

OpenAI's recent decisions reflect a clear strategic shift. Initially founded as a non-profit, the company now operates under a hybrid model that combines profit objectives with a public-interest mission. In recent years it has raised colossal sums, strengthened its ties with Microsoft, and could soon receive major financial backing from Nvidia; according to the New York Times, that investment could reach $100 billion. Such growth is often accompanied by tighter control of public image.

In an internal memo revealed by Futurism, OpenAI's chief strategy officer, Jason Kwon, reminded staff that the company was not a simple research center but a “world actor” with responsibilities regarding the impact of AI. This seemingly legitimate position nevertheless raises a fundamental question: can a company simultaneously develop a disruptive technology, measure its effects, and communicate freely about its limits without bias?

In its official communications, OpenAI claims to maintain a high standard of rigorous analysis to inform the public and decision-makers, as stated in a press release on its own site. Yet many signals suggest that the selection of published studies now reflects economic as much as scientific considerations. This progressive stifling of internal criticism is reminiscent of excesses already observed in other strategic sectors, from energy to the digital economy.
