The incredible arrogance of OpenAI
With Sora 2, OpenAI is betting it can spit in the face of workers, creators, and the biggest media conglomerates on the planet — and win
OpenAI has been on a desperate quest to build an AI monopoly from just about the moment that ChatGPT blew up. Its CEO, Sam Altman, has been following the playbook of his mentor Peter Thiel, who has long held that the best way to succeed in tech is to create a novel market—say, for digital chatbots—and then to move to monopolize that market. This thinking has animated OpenAI’s expansionist, biggest-is-best strategy of pursuing ever larger datasets and data center complexes from the start.
That plan has been complicated by a number of factors, including the sheer costs of training and running large language models, the degree to which AI chatbots have already been commoditized, and an extremely unclear vision of how, exactly, to monopolize “AI.” OpenAI has nonetheless succeeded in becoming the clear cultural frontrunner in the AI sweepstakes—it’s *the* AI company, as far as most consumers are concerned, even as it is deeply unprofitable and as competitors eat into key market segments. Yet OpenAI has continued to leverage this standing to amp up its valuation to unheard-of heights. Just this week, a pre-IPO sale of $6.6 billion worth of employee stock shares to SoftBank put the company’s current valuation at half a trillion dollars.
But this also means that OpenAI must continually find new ways to announce its AI supremacy to investors and consumers. That’s exactly what Sora 2, the AI-generated TikTok clone app that OpenAI launched to select users last week, really is. It’s yet another bulletin to the world, and more specifically to current and future partners and stakeholders, that OpenAI is on the cutting edge of AI. That when it comes to generating new AI products and culture-making (or killing, but more on that in a second) moments, OpenAI stands alone.
All of this throws into even starker relief that now, three years into the AI boom it begot, neither OpenAI nor competitors like Anthropic is any closer to producing anything resembling a sustainable business model. OpenAI has a popular chatbot app in ChatGPT, but still loses money on nearly every query. It has announced plans to push into social media, e-commerce, search, app stores, enterprise software, and much more, with little to show for most of those plays yet.
Which brings us to Sora. If you were online at all last week, you probably found your feeds overflowing with AI-generated images of Sam Altman robbing a grocery store, or Family Guy characters having dinner with Wednesday Addams or whatever.
The potential for abuse was immediate. Vox called it “an unholy abomination” and noted that it had already led users to generate and share deepfake arrest videos and videos of real people dressed as Nazis. The privacy scholar Chris Gilliard declared that “OpenAI is essentially a social arsonist, developing and releasing tools that hyper scale the most racist, misogynistic, and toxic elements of society, lowering the barriers for all manner of abuse.”
That OpenAI would unveil a nakedly reckless product right now—at a moment when political tensions are pitched, in the wake of the most viral assassination video in digital history, and mere weeks after news broke that OpenAI’s chatbot product had encouraged a child to take his own life and an unwell military veteran to murder his mother—does not particularly surprise me. That the major media companies, whose intellectual property is providing the bulk of the fuel, would let it do so surprises me a little.
We know well by now that OpenAI is a reckless company. But this is a whole new frontier of arrogance—OpenAI’s c-suite is either so desperate to show off its sloppified bona fides to enthuse investors, or so bullheaded that it genuinely believes it’s too big to fail. After all, Sora is a major legal gambit. At the announcement of Sora’s release, OpenAI said it would require copyright holders to opt out of having their works included in OpenAI's datasets.
This is, as copyright experts have pointed out, decidedly not how it works.