ChatGPT's weird and very quiet second birthday
Why no party for the hit app's second anniversary?
Greetings all —
I’m writing this en route to Washington DC, to attend the Outrider Nuclear Reporting Summit. I’ve written a bit about nuclear issues in the past—did I ever tell you about the time I visited the crater from the first nuclear bomb blast with one of the last surviving Manhattan Project scientists?—but it’s certainly not an area of my expertise. I’m looking forward to learning about the pressing matters in the field, including but not limited to how AI is impacting and/or threatening national security and complicating the nuclear power picture. As always, all of you readers make this stuff possible, so if you can, please do chip in. Just a coffee a month to make possible journalism that takes on big tech! What a deal. Cheers, and hammers up.
Anyone else feel like ChatGPT’s big second birthday sort of came and went with weirdly little fanfare? Generative AI is, after all, the technology that OpenAI promised would change everything, and ChatGPT its marquee product; the tip of the spear that it says will beget the rise of AGI; the fastest-growing app in history (according to OpenAI’s metrics); the very idea on which Silicon Valley has bet the bank and built its vision of the future. So where’s the victory lap?
Maybe this seems like a small thing, but when a company’s got a world-beating product, it usually likes to shout it from the rooftops on a milestone like a big anniversary. You know, plant some puff pieces, offer some interviews with the CEO, publish a blog post or two, whatever. I’m not seeing much, if any, of that stuff.
I am seeing an effort by OpenAI to raise its profile during the holiday season by announcing a series of demos and new products, in what the company is calling, cringe-inducingly, ‘shipmas’—but little in the way of reflections on How Far We’ve Come. I think there’s a reason for that. And look, it could be that Sam Altman and co have their hands full with Elon Musk’s online attacks and lawsuits against the company. (The Wall Street Journal reports that Altman was “blindsided” by Musk’s latest volley this week, which included stamping him with a low-grade Trumpian nickname, ‘Swindly Sam.’) Yet, if anything, wouldn’t you want to underscore the importance of your product as you enter an uncertain political climate?
But nope, not even a press release saying ‘happy birthday ChatGPT’, as far as I can tell. To me, this reflects that, despite the gargantuan valuations and industry consensus, the actual business and cultural foundation of AI is much shakier than OpenAI and co would like us, and their recent investors, to believe. And any attempt at a big celebration would only draw attention to how relatively little ChatGPT and generative AI have progressed since launching in November 2022, and how far we still are from all that the AI companies have promised.
Remember, this is a technology built on a narrative of rapid, even exponential improvement. [Hey, this is a good time to note that the big report on this very subject that I wrote for AI Now is out today! It’s called “AI-Generated Business: The Rise of AGI and the Rush to Find a Working Revenue Model” and I hope you’ll check it out.]
When ChatGPT launched, there’s no doubt it was a hit, but in order to turn the excitement into a sustainable business, OpenAI and its competitors had to make a series of declarations: That AI wasn’t just fun to mess around with as a chatbot, it was an early step en route to superhuman intelligence, and one that could be wielded by companies to automate work, for a fee. It was compelling now, sure—but soon it could do human jobs, just wait. On the way were massive leaps in productivity, medical advisors for people who couldn’t afford healthcare, an end to dull work like writing emails, and more. So massive, they were “scary,” even.
For a year, OpenAI teased the coming arrival of GPT-5, the model that would blow the previous one out of the water—and it has yet to materialize. Recent reports show that the model the company hoped to release as GPT-5, while an improvement on its predecessor, wasn’t exactly a serious leveling up. As a recent paper from Gaël Varoquaux, Sasha Luccioni, and Meredith Whittaker showed, compute and energy demands are growing faster than model performance. Talk of a “wall” has become more common in AI circles, with current approaches—using more compute to train ever larger models on ever larger data sets—showing diminishing returns.
This is starting to be felt more acutely on the business side too, with a recent report in Business Insider revealing that many employees inside Microsoft feel they’ve overpromised clients on their AI systems. One insider, anticipating a backlash, thinks the AI can do maybe 75% of what the company has pledged to enterprise clients. It’s been a cloudy, if not outright rocky, picture for a lot of the AI companies, with decent enterprise revenues coming in but opaque-at-best report cards on the results, and a sense that business is definitely not booming. Ed Zitron has been banging the drum on OpenAI’s dubious financials all year, and his latest post is a good recap. No surprise, then, that reports of OpenAI experimenting with ads began to make the rounds this week, with the Financial Times reporting that CFO Sarah Friar says the company is looking at “thoughtfully” introducing ads into ChatGPT.
Meanwhile, the hallucinations, demonstrable errors, and erratic output that plagued early models were supposed to be fixed by now. They absolutely have not been, as Whittaker and co, and other critics like Gary Marcus, have pointed out repeatedly; they persist to this day. To that end, here’s one tiny example: ChatGPT’s suggestions for how I might celebrate its birthday, despite the fact that I’ve published reams and reams of criticism of it over the last two years.
I do appreciate the bot buttering me up as thoughtful, thanks OpenAI.
All told, while the app continues to grow in popularity and improve in specs, in the eyes of most consumers and users, ChatGPT has not massively leveled up; its biggest use cases remain as an automated homework plagiarizer for student essays, a marketing email generator for copywriters, a rote code creator for software engineers, and a PowerPoint and blog post image compiler for anyone unconcerned with IP rights or ethically sourcing artists’ work.
It’s shown up in some customer service applications, but few that weren’t already automated to some degree before 2022. The biggest improvements it needs to make to do big business—accuracy and reliability—remain out of reach, and so you’re left with a technology that is doing what so many feared: Automating the work that people love and degrading conditions for writers and artists, while failing to put a dent in the dull stuff AI promised to do away with. And all the while, Altman and co are calling for more data centers, more power plants to run them, and more sources of data, hoping that out of the sheer largesse, something meaningful will arise.
But instead of rethinking the fundamentals, and considering how to, say, build technologies ethically, with artists’ and workers’ concerns in mind, or how to reconfigure a product road map that appeals to more people, or how to keep the whole thing sustainably scaled, the push is for more, for bigger, for perpetual expansion, with ever greater tales of a dazzling automated future that is just out of reach.
There’s a reason, in other words, that OpenAI is not celebrating its biggest product’s two-year anniversary, but instead forging onward with new demos and ads, and more promises of that future yet to come. If the momentum is interrupted, if the story stops making sense, and AGI begins to look not only infeasible but ridiculous (all this power and all these billions directed to what, again?), then generative AI, if it is a house of cards, may well begin to fall apart. The generative AI business, and the AGI construct that powers it, is not particularly compatible with introspection.
As I argue in the AI Now paper, more so than even other recent hugely hyped Silicon Valley trends, generative AI depends utterly on the power of its narrative to overwhelm crude considerations like business fundamentals. And that narrative, in some key ways, may well be starting to unravel.