Sam Altman was pushed out of OpenAI over a “breakdown in communications” with the company’s board rather than financial impropriety or malfeasance, according to an internal memo sent to staff at the ChatGPT parent and generative AI pioneer.
Altman’s abrupt firing as co-founder and chief executive of OpenAI on Friday shocked employees and partners of the company, including its main backer Microsoft.
His departure, along with those of other senior figures, has thrown into doubt OpenAI’s efforts to sell up to $1bn in employee stock and secure an $86bn valuation, and has exposed tensions within the world’s most prominent AI start-up.
The board has so far provided little explanation of its rationale for sacking the 38-year-old beyond issuing a statement on Friday saying Altman had not been “consistently candid”.
In a memo sent to OpenAI employees on Saturday and seen by the Financial Times, OpenAI’s chief operating officer Brad Lightcap wrote: “We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety, or security/privacy practices. This was a breakdown in communication between Sam and the board.”
Lightcap added that the announcement of Altman’s firing “took us all by surprise” and that remaining executives at the company were “fully focused on handling this, pushing toward resolution and clarity, and getting back to work”. The contents of the memo were first reported by Axios. OpenAI did not immediately respond to a request for comment.
Neither OpenAI nor Altman has elaborated on how communications broke down, but tensions have been brewing within the company. According to one person with knowledge of the situation, there were concerns at board level about Altman’s efforts to raise as much as $100bn from investors in the Middle East and SoftBank founder Masayoshi Son to establish a new microchip development company to compete with Nvidia and TSMC.
There was also a widening schism, according to people familiar with the matter, between those who backed Altman’s push to rapidly roll out the powerful technology the group has developed, turning a company founded as a non-profit into a commercial juggernaut, and those who wanted to emphasise safety over speed.
Speaking to the Financial Times earlier this month, Altman said he was motivated by “a moral imperative” to develop technology which could dramatically boost “everyone’s quality of life”.
“I think for the most part [executives of AI companies] are taking the risks seriously and sort of wanting to do the right thing,” Helen Toner, an OpenAI board member and director of strategy at the Georgetown Center for Security and Emerging Technology, told the Financial Times in an interview last month.
“At the same time, they’re obviously the ones building these systems. They’re the ones who potentially stand to profit from them,” she added. “So I think it’s really important to make sure that there is outside oversight not just by the boards of the companies but also by regulators and by the broader public. So even if their hearts are in the right place we shouldn’t rely on that as our primary way of ensuring they do the right thing.”
Toner remains on OpenAI’s board, alongside Ilya Sutskever, who co-founded OpenAI with Altman and Greg Brockman and is the company’s chief scientist. Since July, Sutskever has led the team at OpenAI tasked with ensuring superintelligent AI can be deployed safely.
At the same time as Altman was sacked, Brockman, another co-founder, was stripped of his position as chair of the board. Later on Friday, he announced he was quitting the company altogether. A trio of senior researchers also left the company late on Friday, according to reporting in The Information.
Mira Murati, OpenAI’s chief technology officer, has stepped up to lead the company on an interim basis. In an interview with the Financial Times in June this year, she said: “Our mission is to get to artificial general intelligence and figure out how to deploy that safely. And so we’re always very careful not to lose sight of that.”