Sam Altman, former (and future?) CEO of OpenAI, just 11 days before his surprise firing. | Justin Sullivan/Getty Images
9 questions about OpenAI’s wild weekend, answered.
So, OpenAI had a weird weekend. The hottest company in tech is imploding after the shocking removal of its superstar CEO Sam Altman under still-mysterious circumstances. And now the maker of ChatGPT is on the verge of losing most — if not all — of the employees who turned it into an $80 billion company in just a few short years.
The announcement of his termination led to immediate chaos on Friday afternoon. Over the next two days, OpenAI employees, along with Microsoft, an OpenAI partner and investor, pushed to bring Altman back. As the board tried to work out a deal, Altman returned to the OpenAI offices, and it seemed like only a matter of time before he’d be reinstated as CEO.
But that didn’t happen. By Monday morning, OpenAI had a new CEO — its third in as many days — and Altman had an entirely new job … at Microsoft. OpenAI’s employees are now in open revolt, with almost all of them threatening to quit and join Altman.
The only thing faster than OpenAI’s ascension may well be its descent. Or the company may yet carry on much as it did before, with Altman back at the helm and a new board of directors in place. Apparently, that’s still a possibility despite everything that’s already happened.
OpenAI has been a Silicon Valley success story at a time when the industry was seen as largely stagnant. In the past year, thousands have been laid off at companies that had only ever known growth. Then along came generative AI and ChatGPT, new technology that is cool and exciting to everyone from the average consumer to the most valuable companies in the world. One of them, Microsoft, eagerly hitched its wagon to OpenAI and to Altman, who became the poster boy of the billion-dollar AI revolution.
Now, we may be looking at the end of OpenAI, which was shaping up to be one of the most important companies in the world. It was also the developer and owner of the technology that could shape how (or if) we live in the future. And we’ll soon see what takes its place.
Why did Sam Altman get fired?
The short answer is we don’t know. The reasons OpenAI’s board decided to remove Altman from the company are still unclear.
If nothing else, it appears there are fundamental differences between the board’s vision for AI, which centers on OpenAI’s founding mission of safety and transparency, and Altman’s vision, which, apparently, does not.
How did Sam Altman, the boy wonder of AI, become a controversial figure?
Before Altman headed up OpenAI, he was the CEO of the influential startup accelerator Y Combinator, so he was already well known in certain Silicon Valley circles. As OpenAI came to be seen as the leader of a new technological revolution, Altman put himself forward as the youthful, press-friendly ambassador for the company. As CEO, he went on an AI world tour, rubbing elbows with and winning over world leaders and telling policymakers, including Congress and the Biden administration, how best to regulate this transformative technology — in ways that were very much advantageous to OpenAI and, therefore, to Altman.
Altman often says that his company’s products could contribute to the end of humanity itself. Not many CEOs (at least, of companies that don’t make weapons) humblebrag about how potentially dangerous their business’s products are. That could be seen as a CEO being refreshingly honest, even if it makes his company look bad. It could also be seen as a CEO saying that his company is one of the most important and powerful things in the world, and you should trust him to lead it because he cares that much about all of us.
If you see generative AI as an enormously beneficial tool for humanity, you’re probably a fan of Altman. If you’re concerned about how the world will change when generative AI starts to replace human jobs and presumably becomes more and more powerful, you may not like Altman very much.
Simply put, Altman has made himself the face of AI, and people have responded accordingly.
And how did OpenAI get to be such a big deal?
OpenAI was founded in 2015, but it’s never been your average Silicon Valley startup. For one, it had the backing of many prominent tech people, including Peter Thiel, Reid Hoffman, and Elon Musk, who is credited as one of its co-founders (as is Altman). Second, OpenAI was founded as a nonprofit. Its mission was not to move as quickly as possible and make as much money as possible, but rather to research and develop a technology so transformative that it needed to be built safely, responsibly, and transparently: AI with the ability to learn and think for itself, also known as artificial general intelligence, or AGI. To get there, the company would need to develop generative AI, or AI that can learn from massive amounts of data and generate content on request.
A few years later, OpenAI needed money. Altman took over as CEO in 2019, and the company established a “capped profit” arm, allowing investors to earn a return of up to 100 times what they put in. The rest of the profit — if there was any — would go back into OpenAI’s nonprofit. The company was still governed by a board of directors charged with carrying out that nonprofit mission, but the board was pretty much the only thing left of OpenAI’s nonprofit origins.
OpenAI released some of its generative AI products into the world in 2022, giving everyone a chance to experiment with them. People were impressed, and OpenAI came to be seen as the leader in a burgeoning industry. Thanks to $13 billion in investments from Microsoft, OpenAI has been able to develop and market its services, giving Microsoft access to the new technology along the way. Microsoft pinned a large part of its future on AI, and with its investment in OpenAI, it established a partnership with the most prominent and seemingly most advanced company in the field. And OpenAI’s valuation grew by leaps and bounds.
Meanwhile, Altman emerged as the leader of the AI movement because he was the head of the leading AI company, a role he has embraced. He has extolled the virtues of AI (and OpenAI) to world leaders. He says regulation is important, lest his company become too powerful (only to balk when regulation actually happens). He is — or maybe was — one of the most powerful people in tech, if not beyond.
And then he got fired.
If Altman was otherwise so popular, what was the OpenAI board so upset about?
Removing Altman was a huge, potentially company-destroying decision, so you’d think there’d be a very good reason the OpenAI board decided to do it. But we don’t know that reason yet.
OpenAI’s board of directors has the authority to remove its CEO with a majority vote. Its members included Altman; Ilya Sutskever, OpenAI’s chief scientist and co-founder; Quora CEO Adam D’Angelo; tech entrepreneur Tasha McCauley; Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology; and Greg Brockman, OpenAI’s president, co-founder, and board chair. Altman and Brockman, presumably, weren’t involved in the vote, nor did they know about it. Brockman was also voted off the board but allowed to keep his job at OpenAI.
The board said in a statement that its decision was the result of a “deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.”
So, yeah, that’s a little vague.
“We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety or security/privacy practices,” OpenAI executive Brad Lightcap told staff in a message obtained by the New York Times. “This was a breakdown in communication between Sam and the board.”
Altman hasn’t said anything publicly about why he was removed. He’s clearly not happy about it, and he didn’t expect it. Brockman’s first public statement about the whole thing, a few hours after OpenAI’s announcement, doubled as his resignation letter. A few hours later, he followed up to say that he and Altman were “shocked and saddened,” and he gave a timeline of how everything went down, including the detail that he and Altman found out what happened via a Google Meet.
Presumably, more will come out in time about the board’s reasoning for firing Altman. Given OpenAI’s mission to develop safe and responsible AI, one possibility is that the board believed Altman was pushing development in a direction it considered unsafe or irresponsible and felt it had to put a stop to it. If that’s true, removing Altman won’t necessarily stop him from continuing that work. He just won’t be doing it at OpenAI.
What happened after Altman got fired? OpenAI got a new CEO and everyone was happy?
The board said in its Friday announcement of Altman’s departure that it had appointed OpenAI’s chief technology officer, Mira Murati, as interim CEO.
Then all hell broke loose. OpenAI’s employees were apparently in a state of open revolt, and the board was rumored to be desperately trying to get Altman back, with Microsoft very much pressuring it to do so. Altman returned to OpenAI’s offices on Sunday wearing a guest pass, but it sure seemed like he’d be back at the reins by the end of the weekend, with the board replaced.
Except that didn’t happen. Rumored deadlines came and went. Altman did, too.
In the early hours of Monday, former Twitch CEO and co-founder Emmett Shear announced that he was OpenAI’s new interim CEO.
Who, exactly, will Shear be leading? Probably not many of the people at Altman’s OpenAI, where more than 700 of its 770 employees signed a letter calling for Altman and Brockman to be reinstated and the current board to step down. They’re threatening to join the two former OpenAI execs at Microsoft, which, the letter says, has told them there are positions waiting for them. Murati was the first signatory. Several prominent OpenAI employees have tweeted that “OpenAI is nothing without its people,” which Altman has quote-tweeted with a single heart.
And, bafflingly, one member of that board — Sutskever — is also a signatory of the letter. He has since tweeted, “I deeply regret my participation in the board’s actions.” (That earned him a three-heart quote tweet from Altman — no hard feelings!)
How did the rest of Silicon Valley respond to the drama? Do people still think Altman should be running OpenAI?
Sam Altman is a very wealthy, very well-connected entrepreneur-turned-investor who was also running the most exciting tech startup in years. So it’s not surprising that once the news of his firing broke, the tech industry’s narrative quickly became one about the OpenAI board’s ineptitude, not about any of his shortcomings. The fact that OpenAI employees, starting with top executives and now including the majority of the workforce, have either quit or threatened to quit in solidarity makes Altman’s public support that much firmer.
That said: There is an argument that, because OpenAI’s board is supposed to run a nonprofit dedicated to AI safety, not a fast-growing for-profit business, it may have been justified in firing Altman. (Again, the board has yet to explain its reasoning in any detail.) You won’t hear many people defending the board out loud, since it’s much safer to support Altman. But writer Eric Newcomer, in a post he published November 19, took a stab at it. He notes, for instance, that Altman has had falling-outs with partners before, including Elon Musk, and reports that Altman was asked to leave his perch running Y Combinator.
“Altman had been given a lot of power, the cloak of a nonprofit, and a glowing public profile that exceeds his more mixed private reputation,” Newcomer wrote. “He lost the trust of his board. We should take that seriously.”
What’s Microsoft’s response to all this? And why did they hire Altman?
Microsoft has poured billions of dollars into OpenAI, and a big part of its future direction is riding on OpenAI’s success. You’d think that OpenAI’s complete implosion would be a very bad development for that future, except it looks as though Microsoft has found a way to make lemonade out of lemons and may emerge from all of this in a better place than it was before.
On Monday, Microsoft CEO Satya Nadella tweeted that the company is still very confident in OpenAI and its new leadership team, but that it’s also starting a “new advanced AI research team” headed up by — you guessed it — Sam Altman. He added that Brockman and unnamed “colleagues” were also on board.
“We look forward to moving quickly to provide them with the resources needed for their success,” Nadella concluded.
“The mission continues,” Altman said in a tweet.
Depending on how many OpenAI colleagues are willing to follow Altman and Brockman, it almost looks like Microsoft may well have acquired OpenAI in all but name. Presumably, Microsoft will keep using OpenAI’s technology to power the many Microsoft products that currently use it. But once its internal project gets up and running with Altman’s help, Microsoft may not need OpenAI at all anymore.
What does all this mean for AI safety? Are we more or less doomed than we were when Altman was in charge of OpenAI?
That kind of depends on what OpenAI had in the works and what Altman’s plans for it were, doesn’t it? Maybe Altman and OpenAI figured out the artificial general intelligence puzzle, and the board thought it was too powerful to release, so it canned him. Maybe it had nothing to do with OpenAI’s tech at all and everything to do with the unresolvable conflict between a nonprofit’s mission and an executive’s quest to build the most valuable company in the world — a conflict that got worse and worse as OpenAI and Altman got bigger and bigger.
If this was about AI safety, well, Altman now works at a company that is solely about making as much money as possible, one that seems happy to devote plenty of resources to carrying out his vision. So Altman has been delayed, but he hasn’t been stopped.
For what it’s worth, Shear, OpenAI’s brand new CEO, tweeted that “the board did *not* remove Sam over any specific disagreement on safety, their reasoning was completely different from that. I’m not crazy enough to take this job without board support for commercializing our awesome models.”
This whole debacle could serve as a reminder that the safety of products shouldn’t be left to the businesses that put them out into the world, which are generally only interested in safety when it makes them money or stops them from losing it. As OpenAI shows, housing that mission within a safety-focused nonprofit only works as long as the nonprofit doesn’t get in the way of the company making money. And remember, OpenAI isn’t the only company working on this technology. Plenty of others that are very much not nonprofits, like Google and Meta, have their own generative AI models.
Governments around the world are trying to figure out how best to regulate AI. How safe this technology ends up being will largely depend on whether and how they do it. It won’t and shouldn’t depend on one man (read: Altman) who says he has the world’s best interests at heart and that we should trust him.
What happens to OpenAI itself, assuming its employees don’t all quit?
More than 700 of OpenAI’s 770 employees have threatened to leave the company. If they follow through with that threat — either to follow Altman to Microsoft or just to go to another company — there won’t be a lot of OpenAI left. OpenAI still has a commercial deal with Microsoft, which for the time being gives it money and access to computing power. But if hundreds of employees defect to Microsoft, OpenAI’s commercial for-profit business will obviously be weakened, perhaps even eviscerated. You could conceivably keep the lights on with a skeleton crew, but the whole point of a software company like this is that engineers keep finding ways to make it better, and recruiting engineers will be a lot harder after this weekend.
Perhaps that could still be the impetus for the OpenAI board to welcome Altman back. Or perhaps its members will be satisfied running a much, much smaller nonprofit.