I was going to write an in-depth analysis of what actually happened at OpenAI …
Me: I want the truth!
… but Bloomberg beat me to it. In their article Doomed Mission Behind Sam Altman’s Shock Ouster From OpenAI, by Max Chafkin and Rachel Metz, there is a fascinating insight into how a technology company booms and explodes, then argues and implodes.
Here’s a summary of the Bloomberg article (I recommend you click through and read the whole thing):
Healthy companies led by competent, commercially successful and globally beloved founders generally don’t fire them. And, as Sam Altman walked on stage in San Francisco on November 6, all of those things could have described his role at OpenAI.
The co-founder and chief executive officer was, by this point, regularly compared to Bill Gates and Steve Jobs. Eleven days later he would be fired, kicking off a chaotic weekend during which executives and investors loyal to Altman agitated for his return. The board ignored them and hired Emmett Shear, the former Twitch CEO, instead. Nearly all OpenAI employees then threatened to quit and follow Altman out of the company.
On November 6, at the company’s first developer conference … Altman invited Microsoft CEO Satya Nadella onto the stage …
… and asked him how Microsoft felt about the partnership. Nadella started to respond, then broke into laughter, as if the answer to the question was absurdly obvious … [then] Nadella announced at midnight local time on Sunday [November 19] that Altman would lead a new in-house AI lab alongside OpenAI co-founder Greg Brockman, who’d also left the company days earlier. Microsoft, meanwhile, remains committed to the OpenAI partnership, he said. Microsoft shares jumped to their highest level ever.
Even though OpenAI’s most important investor was still in favour of Altman, the company’s board of directors remained deeply sceptical. The board included Altman and Brockman, a close ally and OpenAI’s president, but was ultimately controlled by scientists who worried that the company’s expansion was out of control, maybe even dangerous.
That put the scientists at odds with Altman and Brockman, who both argued that OpenAI was growing its business out of necessity. Every time a customer asks OpenAI’s ChatGPT chatbot a question, answering it requires huge amounts of expensive computing power, so much that the company was having trouble keeping up with the explosive demand from users. The company has been forced to place limits on the number of times users can query its most powerful AI models in a day. In fact, the situation got so dire in the days after the developer conference that Altman announced the company was pausing sign-ups for its paid ChatGPT Plus service for an indeterminate amount of time.
From Altman’s point of view, raising more money and finding additional revenue sources were essential. But some members of the board, who had ties to the AI-sceptical effective altruism movement, saw this as being in tension with the risks posed by advanced AI. Many effective altruists, adherents of a pseudo-philosophical movement that seeks to donate money to head off existential risks, have imagined scenarios in which a powerful AI system could be used by a terrorist group to, say, create a bioweapon. Or, in the absolute worst case, the AI could spontaneously turn bad, take control of weapons systems and attempt to wipe out human civilization. Not everyone takes these scenarios seriously, and other AI leaders, including Altman, have argued that such concerns can be managed and that the potential benefits of making AI broadly available outweigh the risks.
Out of respect for Bloomberg, I’ll leave it there: to find out the whole story, you need to click through.