
OpenAI Leadership Exodus Continues: Fifth C-Suite Departure

Saturday morning, March 7. OpenAI’s robotics lead opens her laptop. The Pentagon deal was announced six days ago. Internal Slack channels are still debating surveillance and autonomous weapons. She types: “This wasn’t an easy call.” 

By Monday, the departure was made public. 

Caitlin Kalinowski resigned over the Pentagon deal, specifically because surveillance of Americans and lethal autonomy decisions were announced before any guardrails had been defined. “It’s a governance concern first and foremost,” she wrote on X. She had led hardware and robotics since November 2024.

This marks the fifth C-suite departure since September 2024, and the pattern of leadership turnover shows no sign of slowing.

The departures since 2024

September 2024 marked the acceleration. 

Mira Murati, CTO and the executive who briefly led the company during Sam Altman’s five-day ouster, announced her departure. On the same day, Chief Research Officer Bob McGrew and VP of Research Barrett Zoph resigned, both gone within 24 hours. Murati had been at OpenAI for six years. She later founded her own AI research company. McGrew and Zoph moved to competitors. 

Julia Villagra took on the Chief People Officer role in early 2025 after being promoted internally. She left by August. Hannah Wong, who had spent five years as Chief Communications Officer navigating the company through the Altman firing and multiple ChatGPT controversies, stepped down in December 2025 without announcing her next move. 

In March 2026, Kalinowski resigned over the Pentagon deal, which she said was announced before guardrails were defined. Over the summer of 2025, at least seven researchers left for Meta’s Superintelligence Lab. Shengjia Zhao, who co-created ChatGPT and GPT-4, was among them. Meta reportedly offered compensation packages of up to $300 million over four years. 

Only two of OpenAI’s original 11 founders remain active. Sam Altman is one.  

What people said on the way out

Jan Leike, who co-led OpenAI’s Superalignment team, resigned in May 2024 and stated: “Safety culture and processes have taken a backseat to shiny products.” He is now at Anthropic.


So is John Schulman, another co-founder who left, citing the inability to do hands-on alignment work. Ilya Sutskever, co-founder and chief scientist who led the board coup against Altman, left the same month and later founded Safe Superintelligence. 

Daniel Kokotajlo publicly stated that he had lost trust in OpenAI’s leadership and its ability to responsibly handle AGI. Helen Toner, a former board member, said Altman misled the board on multiple occasions regarding safety processes. The board learned about ChatGPT’s launch via Twitter. 

Each departure points to a consistent conclusion: safety-focused individuals leaving because they could not do the work they set out to do.  

The management style problem

Murati and Sutskever both concluded that Altman was acting dishonestly. Sutskever reportedly provided the board with a self-destructing PDF of Slack screenshots documenting multiple instances of that dishonesty. Murati told board members she was not comfortable with Altman leading the company toward AGI. 

At least five individuals within two levels of him reportedly gave similar feedback. 

When restrictive non-disparagement agreements became public, Altman claimed ignorance. However, Vox obtained incorporation documents from April 2023 bearing his signature authorizing the equity clawback provisions. The board that removed him said he was “not consistently candid.” He was reinstated five days later after Microsoft threatened to hire away staff. The board was subsequently replaced with business-focused members. 

Altman testified before Congress advocating for oversight. A month later, OpenAI was lobbying to weaken the EU AI Act. By 2025, he described regulation as “disastrous” — a significant shift from his earlier position. Those working closely with him tend to either remain aligned or leave, and the balance appears to be shifting.  

What the exodus means for enterprise customers

IT leaders have built infrastructure around ChatGPT, integrating it into workflows, training employees, and developing internal tools built on its API. OpenAI reported over 5 million business users as of January 2026. 

By the end of 2025, OpenAI’s enterprise market share had declined from 50% in 2023 to 27%, a shift that closely aligns with the period of high-profile exits. Google Gemini, Anthropic’s Claude, and Meta’s Llama have begun securing deals that OpenAI would likely have won two years earlier. 

This is not due to a decline in technology quality, but rather a shift in enterprise perception. Procurement teams are now asking: if the people who built these systems are leaving, how stable is the dependency?  
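One practical answer to that procurement question is to keep the model call behind a thin, provider-agnostic layer, so a vendor change does not ripple through every workflow. A minimal sketch of the idea follows; all names here (`ChatResult`, `register`, `chat`, the vendor labels) are illustrative, not real SDK calls.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ChatResult:
    provider: str  # which backend actually answered
    text: str      # the model's reply


# Registry mapping a provider name to a send function. Real
# implementations would wrap each vendor's SDK behind this one
# signature so callers never touch vendor-specific code directly.
_PROVIDERS: Dict[str, Callable[[str], str]] = {}


def register(name: str, send: Callable[[str], str]) -> None:
    """Make a provider available under a stable internal name."""
    _PROVIDERS[name] = send


def chat(prompt: str, preferred: str, fallback: str) -> ChatResult:
    """Send a prompt to the preferred provider, falling back on failure."""
    for name in (preferred, fallback):
        send = _PROVIDERS.get(name)
        if send is None:
            continue
        try:
            return ChatResult(provider=name, text=send(prompt))
        except Exception:
            continue  # provider down or erroring: try the next one
    raise RuntimeError("no chat provider available")
```

Swapping or adding a backend then means registering one function, not rewriting every internal tool that calls `chat`, which is exactly the kind of insulation procurement teams are starting to ask about.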

OpenAI leadership: Before and after 

The individual departures are visible. What is harder to see until it is laid out is how much of the organisation that existed in 2023 is simply gone. It has not been restructured, but replaced. The people, the mission alignment, and the oversight mechanisms have all shifted. 

| Factor | Pre-Exodus (2023) | Post-Exodus (2026) | Impact |
| --- | --- | --- | --- |
| Original founders active | 7 of 11 | 2 of 11 | Institutional knowledge loss |
| C-suite stability | Stable leadership | 5 departures in 18 months | Execution uncertainty |
| Enterprise market share | 50% | 27% | Competitive disadvantage |
| Safety team leadership | Dedicated Superalignment group | Dissolved, leaders at competitors | Reduced oversight |
| Researcher retention | Industry-leading | 7+ to Meta in 6 months | Technical capability erosion |
| Board composition | Technical and safety focus | Business and investor focus | Mission drift |

The for-profit pivot 

OpenAI completed its transition from a nonprofit to a for-profit public benefit corporation in October 2025. Altman received equity for the first time. Microsoft now holds a 27% stake following its $13.8 billion investment. Murati’s September departure closely aligned with these restructuring developments. 

The organisation that began as a nonprofit focused on building AGI for humanity’s benefit has transitioned into a commercial entity where profitability influences decision-making. 

OpenAI reported losses of $5 billion in 2024 against $3.7 billion in revenue, spending more than two dollars for every dollar earned. This level of financial pressure shifts priorities, as profitability does not just become important but competes directly with long-term safety work. Miles Brundage and Tom Cunningham also resigned in late 2024, reflecting a quieter but consistent trend.  

What comes after the fifth departure

ChatGPT continues to function. The API ecosystem remains stable. Enterprise adoption has not dropped significantly. The infrastructure built under the current leadership continues to operate as expected. However, the pattern behind Kalinowski’s departure remains consistent: 

  • Strategic decisions prioritise market position  
  • Ethical concerns are raised internally  
  • Those concerns are not acted upon  
  • Senior talent exits  

The board that previously attempted oversight has been replaced. The safety team that could have slowed decisions has been dissolved. For enterprise buyers, the evaluation criteria are shifting. Stability is no longer defined solely by uptime or performance metrics. It is defined by whether the organisation can retain the expertise required to manage risk.  

Distilled

Five C-suite executives have departed in 18 months, alongside a steady outflow of senior researchers and co-founders to competitors such as Meta and Anthropic.

The original board has been replaced, the safety team has been dissolved, and the organisation has moved away from its nonprofit roots. The individuals leaving are not peripheral; they are the architects of the models and the systems designed to govern them. 

The pattern is consistent: commercial priorities are taking precedence over research-led safety culture, and internal dissent is not shifting direction but leading to exits. The future of ChatGPT will depend on whether OpenAI can continue to build at scale without the people who originally built its foundations. 
