25 Comments
Kit Noussis

Ive describes Altman as someone with a lot of 'humility'. Then the two of them go on to bloviate at length about how incredible they are. While the rest of us struggle with impostor syndrome, the actual impostors have shameless arrogance.

Mark Mayerson

Does it strike anyone as tragic that a system that uses so much energy and water is now going to be used simply to slick up things that people are already doing online? We already have search, social networks and online shopping. Is this AI's future rather than curing diseases and solving climate change? Will AI just be the next victim of enshittification in the search for endless growth? Will the cost to investors, the environment and culture be worth it?

𝓙𝓪𝓼𝓶𝓲𝓷𝓮 𝓦𝓸𝓵𝓯𝓮

I think AI itself is enshittification.

Glen

It's the energy and water costs that are going to pop the bubble eventually. OpenAI's ChatGPT Pro is $200 a month and they're still losing money on a product that does not work, as that Tribune Summer Reading list so clearly proves. Hardly anyone is going to pay $500 a month for what ChatGPT and its competitors do.

The only question left, as Cory Doctorow points out, is whether there's anything worth salvaging after the whole thing collapses. The only reason it hasn't collapsed yet is that too much wealth is being redistributed to rich idiots who like gambling. Sam Altman and the other tech bros will keep pumping it up and taking their skim off the top for as long as it lasts.

After the dotcom crash there was cheap server equipment being sold for pennies on the dollar and a lot of (relatively) cheap business real estate in an urban area packed full of nerds. This ended up being a good thing.

For a while anyway.

The second biggest tragedy (after the pollution) is that after the LLM/GenAI bubble pops, what's left will be mega-billions worth of data centers packed full of chips that are of almost no use for anything else, plus irreparable damage to water supplies, electrical grids and the atmosphere. I pity those towns and cities that went all in on subsidizing data center projects and will now end up with massive holes in their budgets.

Ron Amosa

Another great piece, Brian, and I got myself a print copy of Karen’s book too (awaiting delivery). I did feel some type of way about this part of your article: “No writer should ever use AI, that’s my hardline stance, for any part of the writing process.” That's because I use AI to help me edit my writing after I’ve written the first couple of drafts. I’m not a writer and I don’t have any editor contacts (nor is writing my full-time job); I just want to say my piece as clearly as possible. So maybe I’m not the target audience for that statement?

Brian Merchant

I think editing is totally defensible and say so in the CJR piece, in fact, especially in non-pro contexts. Though I might recommend looking into writer forums or groups here on Substack to find fellow writers and editors to swap services with, as that might ultimately be more rewarding!

Ron Amosa

That's actually a pretty good tip, haha, never thought of that. Awesome, thank you!

Harry Nilssons Bathrobe

It was a really nice engagement shoot they did together though. I hope the best for the happy couple.

Ralph Haygood

"each of these tech giants - and Meta, too, as well as Anthropic - are driving to be the one-stop shop for consumer AI.": Because consumers just can't get enough of this fabulous new technology!

Oh wait ...

"the strength of a popular product, ChatGPT.": Indeed, OpenAI has claimed to have over 500 million "weekly active users". Terrific! Except that well under 5% of those users pay anything, and the vast majority of the users who pay something don't pay enough to cover the costs of serving them. If Altman et al. started charging enough to cover their costs - not of expanding or improving, just of operating their existing services - they'd immediately lose most of their users. And although I expect they'll eventually try it, the standard Silicon Valley strategy for "monetizing" "free" services, namely the exploitising strategy exemplified by Facebook and Google, is a poor fit for a chatbot. Imagine asking ChatGPT to write an email message or a term paper or a code fragment, and it throws in a plug for, say, Coca Cola. (Anthropic and xAI gave us darkly hilarious previews of this kind of thing with "Golden Gate Bridge" Claude and "Kill the Boer" Grok.)

It would be different if ChatGPT really were a "killer app", meaning it did something many of its users couldn't live comfortably without, but it doesn't. I strongly suspect most of its users could take it or leave it. If it's free or very cheap, they'll use it, but otherwise, no sale.

"To become the first thing consumers think of when they hear 'AI.'": I'm gonna go out on a limb and suggest that the first thing most people think of when they hear "AI" is or soon will be "the thing that cost me my job" or "the thing that made my job even more tedious than it used to be" or "the thing I get whenever I call a customer service number now, which I hate with the heat of a thousand suns".

"'AI' is a means of describing a new way consumers might want to ... socialize digitally, and thus compete with their social media networks": Well, I guess if your idea of "socializing digitally" is interacting with the "AI" slop that now infests Facebook, then maybe you'd like a "social media" product from OpenAI - straight from the horse's ass, er, mouth, as it were. Perhaps I'm still not cynical enough (cf. Lily Tomlin on that subject), but I doubt many people are interested in having "AI" girlfriends, buddies, etc. Much like "the metaverse", there undoubtedly is a market for this kind of thing, but it's more niche than mass. It's an idea that strikes most people as dorky at best.

"to attempt conquest in a ... market segment, *any* market segment.": Bingo! Never has the phrase "a solution in search of a problem" applied more cogently.

"So the question becomes: What happens if you aim for monopoly - on a scale that no tech company has attempted as fast before - and you miss?": I'd like to think the answer is, you become a laughingstock, a byword, and a cautionary tale for future entrepreneurs. But I've been around too long and seen too much folly to expect an honest reckoning.

Kevin

Sam is an LLM... regurgitating all the biz strategy he's consumed. He's a very talented MBA-consultant guy. And he'll get richer. But OpenAI won't be the big winner. There probably won't be one; it'll be less lopsided than search and social, but the existing big guys will dominate. In a couple of years Gemini will be the most-used AI product. Its impact will be less than we think. Coders will flock to niche offerings. Everything we do online will have mild AI augmentation... not that different from our algorithm-laden existence already.

Joseph Barry

"It’s not a race to have the best technology, though all the companies surely want to do that, but more a race to become “The AI Company.” To become the first thing consumers think of when they hear “AI.”

Thank you for pointing this out! Every app and website feels like it's been filled with "try our new AI feature" prompts, which seem to serve no real function other than to signal, for visibility's sake, that the company has AI on the mind. Sam Altman's pseudo-religious pursuit of absorbing and controlling all of what we think and want with AI has such dangerous undertones.

2serve4Christ

I hate this photo of these self-satisfied douchebags.

Marginal Gains

First, Sam is not Steve Jobs, nor should he try to become one. Steve Jobs had a unique ability to merge technology, design, and user experience into iconic products, but not every leader can, or should, emulate that. Success requires authenticity and leveraging one's own strengths, not copying others. Yes, a large audience follows Sam, but gaining attention in today's landscape is not hard. What matters is focus and execution, and so far OpenAI has been able to execute because of that focus.

I do not believe the application or hardware layer will come from OpenAI. Building applications and hardware requires a fundamentally different mindset, culture, and focus, and that is not their core competency. Can they hire people to do it? Yes, but they risk spreading themselves too thin, attempting to do everything for everyone, which introduces many challenges. Amazon, the "Everything Store," succeeded for a while by diversifying, but its scope and focus have since narrowed. OpenAI's challenge isn't money or talent; it's focus. When you try to tackle too much, you dilute your efforts and run into issues that can derail progress.

Currently, OpenAI's approach seems to be to throw ideas out, see what sticks, and then decide what to build. That's not a sustainable strategy. Success in one area does not guarantee success in another. AI products require a different mindset and expertise, even with an ex-Apple team on board. Consider how Apple, despite its dominance in hardware and software, is struggling with AI integration. OpenAI will face the opposite challenge: transitioning from AI expertise to hardware and application development. Few companies successfully expand beyond their core skills, and even fewer sustain such diversification without eventually narrowing their focus.

Now, about this new product: based on what I understand (and my understanding may be completely wrong), it may be innovative, and the team could succeed in creating it, but success as a mass-market product is unlikely anytime soon. Typing on keyboards and using touchscreens are deeply ingrained habits; breaking those habits will require more than new technology. People resist change, especially when it disrupts familiar workflows. A shift to new interaction methods will likely take an entire generation growing up with these devices as their default. Major shifts like this often align with the idea that "science advances one funeral at a time." Actual adoption requires generational change.

These devices also raise significant privacy and security concerns. Ambient computing continuously monitors its surroundings—capturing images, sounds, and user behavior—which presents severe risks. Users may lose control over how their data is stored or shared, and such devices could easily be misused for surveillance or unauthorized spying, creating discomfort and ethical challenges.

From a cybersecurity perspective, these devices are prime targets for hackers because they collect sensitive data. Weak security protocols, reliance on cloud services, and real-time data transmission increase the risk of breaches. This creates vulnerabilities that could be exploited for eavesdropping, stalking, or other malicious purposes. Building trust will require companies to demonstrate robust security and transparency—any slip could be catastrophic.

While ambient computing has exciting potential, its success depends on overcoming deeply entrenched user habits, addressing privacy and security issues head-on, and building trust over time. For OpenAI, the real challenge will be maintaining focus and excelling in its core strengths rather than overextending into areas that require entirely different expertise. Disrupting established behaviors and creating a generational shift is a long-term game requiring patience, precision, and a clear vision.

Steve Jobs said, "Innovation is saying no to a thousand things," emphasizing the importance of focus and avoiding overextension. OpenAI is doing precisely the opposite.

Harrison

This quote gave me goosebumps: “the most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so.”

Ounce IT

Thanks for this insightful analysis. I'm hoping that maybe a side effect of the latest OpenAI shenanigans is that more people will discover Peter Gabriel's brilliant 2023 album I/O. Beautiful art created by humans!

Senor Fix

Oh wonderful, they seem to be describing a discreet wearable that constantly surveils, records, analyzes, and, most importantly, processes all data in your environment.

pâTrīck :)

sam says ppl are saying "im 2-3x faster at finding a cure for cancer than i was before" @7:50 mark on that jony ive video lol no notes!

Birgitte Rasine

“‘Empire-building’ is a good critical articulation of what OpenAI and the other companies are trying to do with AI—given that it surfaces the exploited labor and environmental tolls necessary to make it possible—and “creating something closer to a religion” is a good description of how they are trying to do it. But “aiming for monopoly” is how all of this is being processed in the boardrooms”

For me, this summarizes much of my own thinking about this insane AI race over the past 2.5 years. Icarus would be proud. Or perhaps envious.

Birgitte Rasine

Quick correction… it’s a $6.5B deal, and the acquisition is of io rather than LoveFrom.

Ged

Thanks for being a consistent source of sanity when it comes to these topics... and that includes the lack of spite toward the poor dude who churned out that shitty article. It's kind of hard not to become cynical these days, and I appreciate being able to surround myself with writers, and people in general, who aren't. It really helps.
