Silicon Valley and investors are betting that OpenAI's project is too big to fail
The business is a mess. Have Sam Altman and genAI simply become inextricable from Silicon Valley's project, at least for the foreseeable future? Or is it all going to collapse next week?
Greetings, and welcome to another edition of Blood in the Machine. Another truly wild week for OpenAI and the broader tech world here, so let’s get right into it. As always, thank you for reading, subscribing, sharing, and/or supporting this project. Paid subscribers are what make this all possible—if you are able, a paid subscription, at less than the cost of a beer a month, means I can keep writing here—I’m grateful for each of you. Onwards, and hammers up.
I think it’s fair to say that not a lot of companies could get away with what OpenAI has managed to get away with over the last year or so.
There’s the clear evidence of pervasive dysfunction at the executive level, with a steady stream of top managers and co-founders exiting the company. There are the poor report cards from financial advisory firms, the lawsuits, the controversies, the immense costs and the lack of a good, reliable business model—and yet OpenAI’s valuation seems impervious to all of it, and new backers keep right on lining up.
The Information reports that even after the latest dustup—the departure of OpenAI’s second most visible face, CTO Mira Murati, and two other core executives—investors are “hanging on for the ride” and are as we speak participating in a funding round that will probably value the company at $150 billion. So here’s a fun thought experiment: Could anything feasibly spoil the party at this point? Or has the *idea* of OpenAI and what it represents simply become too important to a tech sector that had until 2022 suffered a string of flops, and to a business world eager to cut labor costs, find efficiencies, and stay on the cutting edge?
What if Silicon Valley has convinced itself that OpenAI, not unlike Uber before it, is essentially too big to fail? Uber, which took a decade to approach profitability, had a hell of a pitch, as far as investors and the tech industry were concerned—OpenAI’s is even better. Uber told the story of a new generation of apps that could summon cars and cheap workers on demand; OpenAI tells the story of software that can do any digital work a human can do.
Is that narrative now just too central to the technology that Silicon Valley has placed all of its chips on for anyone to want to pause and kick the tires at OpenAI, the actual business? Or will Sam Altman finally push out one executive too many and bring the whole thing crashing down? Who knows! That’s about where we are right now.
Look, *some* level of dysfunction is to be expected—OpenAI is a startup in Silicon Valley, where dysfunction is a requisite part of any young company’s mythology—but this feels like another level altogether. All of the company’s co-founders, who once graced magazine covers and newspaper headlines and conference stages together, are gone, except Altman. Ilya Sutskever, the lauded AI scientist believed to have been partly behind the coup that briefly removed Altman last fall, was forced out. OpenAI president Greg Brockman is on an extended leave of absence under circumstances that remain unclear and feel very odd. Murati is gone now too, offering no discernible reason for her departure. And those are just the biggest names; many, many others have left in recent months as well. This is extremely strange for a company that is about to be worth $150 billion, with a rumored IPO coming next year.
Despite Altman’s efforts to downplay the departures, OpenAI is currently an enormous red flag cannon. There is serious turmoil at the company, that much is obvious. As to why this might be, well, we have a few narratives to choose from:
This recent New York Times piece posits that OpenAI is a startup “trying to grow up” and to figure out how to be a real company after its “chaotic past.” This is a popular narrative, and there’s truth to it, and it’s one that many people, including OpenAI’s investors, would like very much to be true: that those unwieldy early days are almost behind them, and more streamlined and profitable ones lie ahead. Certainly, those early employees who signed up to work for a nonprofit research company, not one aggressively seeking software automation contracts, have no doubt found the transition of the last couple of years jarring.
But it seems that the chaos is only escalating, not diminishing, and that these are not growing pains signaling newfound maturation, but violent spasms brought forth by a consolidation of power. Altman, who was briefly pushed out last year because the board had lost confidence in him, has been called out for manipulating his peers and for potential ethical lapses. Now, essentially, Altman alone controls OpenAI.
A competing explanation for the turmoil is that leaders like Sutskever and Murati saw OpenAI becoming so powerful, drawing so near to bringing AGI (artificial general intelligence) into being, that they became concerned Altman wasn’t pumping the brakes to make sure it was safe enough. This too probably contains at least some truth—many OpenAI employees have clearly been concerned that Altman is not behaving ethically. Past departing employees, especially those on the alignment team, have cited this as their reason for quitting.
But my guess is that the core issue animating the exits is not safety, but that consolidation of power, period. It’s been widely reported that the latest investment round will accompany a restructuring; OpenAI is at last fully ditching its nonprofit holding company and becoming a for-profit corporation. Some reports have noted that this could land Altman between $7 billion and $10 billion in equity. (He denies this.) He’s also assuming more and more direct control over the company.
From The Information:
Altman on Wednesday said in a memo to employees that he would be more involved in the “technical and product parts of the company” after primarily focusing on “nontechnical” aspects such as fundraising, government relations and business partnerships with Microsoft, Apple and others. The company’s technical leaders, who previously reported to Murati and McGrew, will now report to Altman, he said.
“I obviously won’t pretend it’s natural for this [leadership change] to be so abrupt, but we are not a normal company,” Altman said.
It’s easy to see how this drive towards total control, which might make Altman one of the richest men on the planet, might also aggravate and alienate his staff.
But whatever the cause of all the chaos, the bottom line is that it has yet to meaningfully faze investors. What this tells me is less that investors think OpenAI is a great company and more that their faith in generative AI as the next big thing is currently unshakeable—despite many concerning signs about its viability as a fixture in business contexts, like this Nature paper, published just this week, showing that LLMs get *more* unreliable the larger they become.
There’s an old adage in Silicon Valley that you don’t invest in companies, you invest in people, and Sam Altman has long been the avatar of the AI boom—the money doesn’t much care if Mira Murati or some other top researcher is gone; Altman, and the idea of AI, is all-important. Altman could probably at this point go down to the closest Irish pub and hire the first dozen people he saw to be his new executive staff, and the UAE’s wealth fund and whoever else is joining in on this latest round would probably still pony up. The odd structure of those deals, in which some investors don’t even get equity, but promises of shares in future profits, only underlines this notion.
I still think the commercial generative AI boom has probably started to peak, that consumer enthusiasm for generative AI is plateauing, and that we can expect OpenAI and other AI firms to ink as many deals and enterprise contracts as they can in coming months while they’re still considered prime movers and relatively impervious to the careful considerations of sound financial logic.1 The idea that we are on the cusp of realizing a technology that can automate any job at all is just too tantalizing, especially as it is paired with the most demonstrably dazzling software the Valley has hit upon in ages, for the tech and financial industries to let it, and its most notable advocate, slip away.
How long those shields stay up—how long, exactly, OpenAI gets to be too big to fail, if it doesn’t hit on a business model—becomes the $150 billion question.
That’s all for this week — until the next, keep those hammers up, and best of luck in putting down the machinery hurtful to commonality.
1. Eventually the chickens come home to roost, and the industry will likely be forced to admit that it doesn’t make sense to have a dozen different firms running very similar and highly intensive models to produce text and image output. OpenAI enjoys name recognition, that prime mover status. And venture capital and well-heeled backers kept Uber afloat for over a decade before it got anywhere near turning a profit. But generative AI is a much more resource-intensive and expensive technology—how long will those investors and oddly structured partnerships tolerate such high costs, in the event that gen AI’s use cases wind up much more limited in a business context than currently promised? Generative AI is not going to disappear like NFTs, but we can expect a massive ‘right-sizing’ of the industry in coming years to be sure.
Uber was only just profitable this year, and that by raising its prices to the point where taxi companies are now an equally good, if not superior, alternative. After some misadventures with Uber and Lyft recently, and a series of clean, safe rides in taxis, I'll never use rideshare apps again.
Plus they face many, MANY challenges in the courts and in legislatures. The business model of Uber is inherently flawed in a period when average citizens are pushing back hard on exploitative business models like gig work.
As to OpenAI, my experience has always been that all of a company's culture and business practices trickle down from the top. If all the managers at the top are chaotic, unpredictable weirdos who are fleeing the company and tearing each other's throats out like rats from a ship that is both sinking and on fire, then it's safe to assume that all of the staff who do most of the actual work are also searching for work elsewhere, stabbing each other in the back, and telling all their friends and family to steer clear of the place.
I firmly believe that what we're seeing is ultra-wealthy gamblers doubling down on a bad bet, because they're incapable of admitting they made a mistake. If we actually taxed those idiots to a point where they didn't have so much money lying around, maybe they'd be more careful about not piling it onto bonfires like OpenAI.
Or bankrupt themselves, which I'm also fine with.
The presence of a former NSA director on OpenAI’s safety committee, along with the general push to frame AI, and the companies in control of it, as a strategic resource for the US to “maintain global leadership,” suggests that OpenAI is definitely too big to be allowed to fail or succeed on purely commercial terms.