The Mismanagement of Sam Altman’s Dismissal by OpenAI

Sam Altman. (Getty Images)

New York — The board of OpenAI feared that the company was developing a technology that could have catastrophic consequences for humanity, and its CEO, Sam Altman, was accelerating the process.

The board decided to terminate him. That might have been a reasonable decision. However, the way Altman was dismissed – suddenly, secretly and without notifying some of OpenAI’s major stakeholders and partners – lacked logic. And it threatened to cause more harm than if the board did nothing.

The board of directors of a company has a duty, above all, to its shareholders. OpenAI’s most significant shareholder is Microsoft, the company that invested $13 billion in Altman & Co. to help Bing, Office, Windows and Azure surpass Google and keep ahead of Amazon, IBM and other AI competitors.

But Microsoft was not informed of Altman’s dismissal until “just before” the public announcement, according to CNN contributor Kara Swisher, who spoke to sources familiar with the board’s removal of its CEO. Microsoft’s stock dropped after Altman was fired.

Employees were also kept in the dark until the last minute. So was Greg Brockman, the company’s co-founder and former president, who said in a post on X that he learned about Altman’s dismissal moments before it occurred. Brockman, a key ally of Altman and his strategic vision for the company, resigned Friday. Other Altman supporters also left the company.

OpenAI was suddenly in turmoil. Reports that Altman and former OpenAI employees were planning to launch their own venture threatened to undo everything that the company had achieved over the past few years.

The next day, the board reportedly changed its mind and tried to bring Altman back. It was a stunning reversal and a humiliating blunder by a company that is widely seen as the most promising creator of the most cutting-edge new technology.

The unusual structure of OpenAI’s board added to the confusion.

The company is a nonprofit. But Altman, Brockman and Chief Scientist Ilya Sutskever created OpenAI LP, a for-profit entity that operates within the larger company’s framework, in 2019. That for-profit company increased OpenAI’s valuation from zero to $90 billion in just a few years – and Altman is widely recognized as the architect of that strategy and the key to the company’s success.

But a company with big investors like Microsoft and venture capital firm Thrive Capital has a responsibility to expand its business and generate revenue. Investors want to make sure they’re getting a good return on their investment, and they’re not known for their patience.

That likely prompted Altman to push the for-profit company to innovate faster and market its products. In the typical “move fast and break things” style of Silicon Valley, those products are not always reliable at first.

That might be acceptable, perhaps, when it’s a dating app or a social media platform. It’s a different story when it’s a technology that can imitate human speech and behavior so well that it can deceive people into thinking its fake conversations and images are real.

And that’s what reportedly alarmed the company’s board, which was still dominated by the nonprofit side of the company. Swisher reported that OpenAI’s recent developer conference was a turning point: Altman announced that OpenAI would make tools available so anyone could create their own version of ChatGPT.

For Sutskever and the board, that was crossing the line.


A caution justified by the circumstances

According to Altman himself, the company was taking a huge risk.

Four years ago, when Altman founded OpenAI LP, the new company stated in its charter that it was “concerned” about the potential of AI to “cause rapid change” for humanity. This could happen either unintentionally, with the technology performing malicious tasks due to faulty code, or intentionally, by people exploiting AI systems for nefarious purposes. Therefore, the company committed to prioritize safety, even if that meant sacrificing profit for its stakeholders.

Altman also called for regulators to impose limits on AI to prevent people like him from causing serious harm to society.

“Is [AI] going to be like the printing press that spread knowledge, power, and learning widely across the landscape, that empowered ordinary, everyday individuals, that led to greater flourishing, that led above all to greater liberty?” he asked in a May Senate subcommittee hearing, advocating for regulation. “Or is it going to be more like the atom bomb – a huge technological breakthrough, but with consequences (severe, terrible) that still haunt us to this day?”

Supporters of AI believe that the technology has the potential to transform every industry and improve humanity in the process. It could enhance education, finance, agriculture, and health care.

But it could also take away jobs from people – 14 million positions could vanish in the next five years, the World Economic Forum warned in April. AI is especially skilled at spreading harmful disinformation. And some, including former OpenAI board member Elon Musk, fear that the technology will surpass humanity in intelligence and could destroy life on the planet.

A crisis poorly handled

Given those threats – real or imagined – it is not surprising that the board was worried that Altman was moving too fast. It may have felt obliged to fire him and replace him with someone who, in its opinion, would be more cautious with the potentially dangerous technology.

But OpenAI does not operate in isolation. It has stakeholders, some of them with billions invested in the company. And the so-called adults in the room were behaving, as Swisher put it, like a “clown car that crashed into a gold mine” – borrowing Meta CEO Mark Zuckerberg’s famous line about Twitter.

Involving Microsoft in the decision, informing employees, working with Altman on a graceful exit plan – all of these would have been more conventional moves for the board of a company of OpenAI’s size, and all might have produced better outcomes.

Microsoft, despite its huge stake, does not hold a seat on OpenAI’s board because of the company’s unusual structure. That could now change, according to multiple news outlets, including the Wall Street Journal and the New York Times. Among Microsoft’s demands, along with Altman’s reinstatement, is a voice on the board.

With OpenAI’s ChatGPT-like capabilities integrated in Bing and other key products, Microsoft thought it had made a smart investment in the promising new tech of the future. So it must have been a shock to CEO Satya Nadella and his team when they found out about Altman’s dismissal along with the rest of the world on Friday evening.

The board alienated a powerful ally, and the way it handled Altman’s departure could permanently reshape it. OpenAI could end up with Altman back in charge, a for-profit company seated on its nonprofit board – and a major culture shift.

Alternatively, it could become a rival to Altman, who may eventually decide to start a new company and lure talent from OpenAI.

Either way, OpenAI is probably worse off now than it was on Friday before it fired Altman. And it was a problem it could have avoided, ironically, by slowing down.