Uber was only just profitable this year, and that only by raising its prices to the point where taxi companies are now an equally good, if not superior, alternative. After some misadventures with Uber and Lyft recently, and a series of clean, safe rides in taxis, I'll never use rideshare apps again.

Plus they face many, MANY challenges in the courts and in legislation. Uber's business model is inherently flawed in a period when average citizens are pushing back hard on exploitative business models like gig work.

As to OpenAI, my experience has always been that a company's culture and business practices trickle down from the top. If all the managers at the top are chaotic, unpredictable weirdos who are fleeing the company and tearing each other's throats out like rats from a ship that is both sinking and on fire, then it's safe to assume that the staff who do most of the actual work are also searching for work elsewhere, stabbing each other in the back, and telling all their friends and family to steer clear of the place.

I firmly believe that what we're seeing is ultra-wealthy gamblers doubling down on a bad bet because they're incapable of admitting they made a mistake. If we actually taxed those idiots to the point where they didn't have so much money lying around, maybe they'd be more careful about not piling it onto bonfires like OpenAI.

Or bankrupt themselves, which I'm also fine with.

I think so too, but the logic underlying OpenAI's chief exports is so alluring that I wonder if they'll be lodged into place regardless, as Uber's were. But there's also the very real chance this blows up like the dot-com bust... I agree there should be more friction and taxation to prevent casino-style bets on this stuff. Imagine a tech sector in which sturdy, proven concepts were valued over mad, cash-frenzied rushes for the next unicorn!

Investment and banking work best for society when they're boring. This is proven.

I think the self-admitted 'existential threat' of IP lawsuits is what will sink not just OpenAI but all of the LLM/generative-model companies. I assume it depends on how the lawsuit in California that just went into discovery turns out. If a bunch of big corporations like Activision, AT&T, Disney, and Penguin Random House discover that a mountain of their stuff was fed into the models, I would assume they'll act true to form and squish OpenAI for daring to steal from THEM, as opposed to stealing from other people, which of course they are fine with.

Plus the electricity, cooling and water costs are monstrous. I've heard rumbles about utilities putting extra fees and charges on data centers for disrupting the power grid.

The presence of a former NSA director on OpenAI’s safety committee, along with the general push to frame AI, and the companies in control of it, as a strategic resource for the US to “maintain global leadership,” suggests that OpenAI is definitely too big to be allowed to fail or succeed on purely commercial terms.

Yep. Defense contracting is another strong sign of this, for sure.

It's pretty hard to run a business in the US and not be in bed with the military and/or intelligence services in one way or another.

Just because OpenAI offered a retired general and former NSA director a big fat paycheck to try to get in good with the government doesn't mean the NSA will want to have anything to do with them. As horrible as it can be, the NSA is packed full of computer nerds who know exactly how LLMs work, and all of their many, MANY shortcomings.

On the first day of working at OpenAI you are given a t-shirt with "we are not a normal company" proudly emblazoned across it. At first, this seems quirky and cool. But as the sun begins to set behind the clouds, causing crimson light and bruised shadows to creep across the office floor, you look to the corner office where Sam can be seen standing in the middle of the room, staring at nothing, his eyes glazing over with tears as he mumbles over and over again "I'm sorry, as an AI language model I have failed you."

As Amy Castor and David Gerard wrote a few days ago, "OpenAI's product is the promise that your boss can just fire everyone." To the myopically selfish owners and executives of corporate capitalism, that promise is so seductive that the "AI" bubble can survive an extraordinarily large amount of internal drama and dysfunction.

"Murati is gone now too, offering no discernible reason for her departure.": Maybe she realized she can probably get more money by setting up her own scam than by continuing to be a cog in Altman's scam. After all, Sutskever's new scam just raised a billion dollars:

https://techcrunch.com/2024/09/04/ilya-sutskevers-startup-safe-super-intelligence-raises-1b/

By the way, don't lionize Sutskever, who appears to be just another grifter or idiot. Consider, for example, the following foolishness from him, quoted by Ars Technica:

"what does it mean to predict the next token well enough? ... it means that you understand the underlying reality that led to the creation of that token"

(https://arstechnica.com/information-technology/2023/04/why-ai-chatbots-are-the-ultimate-bs-machines-and-how-people-hope-to-fix-them/)

That, in a nutshell, is the fundamental folly of the LLM approach to AI. An LLM is a model of documents. It is not a model of reality except in the indirect and generally murky sense that the documents are models of reality. In general, predicting the next token "well enough" definitely does not mean understanding the underlying reality. As Sam Anthony put the matter in a recent post:

"If you change the problem just slightly, let's say by asking what's the next token to produce such that the overall statement being produced will be TRUE, rather than just pleasingly likely, then the unsupervised learning trick never works, and you're back to the complexities and poor scaling properties of supervised learning."

(https://buttondown.email/apperceptive/archive/supervision-and-truth/)
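To make that concrete, here's a toy sketch (invented corpus, greedy bigram decoding, nothing like a real LLM's scale) of what "predicting the next token well enough" actually optimizes. Nothing in the objective references truth, only frequency in the training documents:

```python
# Toy illustration: a "language model" is just a conditional
# distribution over next tokens, fit to documents. The fitting and
# decoding below never mention truth, only likelihood.
# (The corpus is invented for illustration.)

from collections import Counter

corpus = [
    "the moon is made of rock",
    "the moon is made of cheese",
    "the moon is made of cheese",  # popular fiction outweighs fact
]

# Fit a bigram model: estimate P(next | prev) from document counts.
bigrams = Counter()
for doc in corpus:
    tokens = doc.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        bigrams[(prev, nxt)] += 1

def predict_next(prev):
    # Greedy decoding: pick the most *likely* continuation,
    # i.e. the one most frequent in the documents, true or not.
    candidates = {nxt: c for (p, nxt), c in bigrams.items() if p == prev}
    return max(candidates, key=candidates.get)

print(predict_next("of"))  # -> "cheese": pleasingly likely, not true
```

Swapping in "the next token such that the statement is TRUE" would require a signal the documents alone don't carry, which is exactly Anthony's point.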

Fools and frauds like Altman and Sutskever either don't understand or, more likely, refuse to acknowledge this fact, which dooms their entire enterprise. As the Nature article cited in this post says, there's a "need for a fundamental shift in the design and development of general-purpose artificial intelligence". At present, most likely, nobody knows exactly what that shift should be, but if anybody is even close, they're probably academics, not Silicon Valley hacks.

I was laid off in June by an executive pinhead who thought they could replace the large and complex document management, web forms, and process automation tool I managed with an LLM. The mandate from on high was 'save money using the exciting new AI technology.'

I pushed back, pointing out that I was ALREADY saving the university thousands upon thousands of man-hours of labor, and that I could use OCR (image-to-text scanning) to replace human data entry clerks for rote tasks in our AR/AP processes.
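For what it's worth, that kind of rote replacement is only a few lines with an off-the-shelf OCR library. A minimal sketch, assuming pytesseract (with a local Tesseract install) and a hypothetical scanned invoice file:

```python
# Minimal OCR sketch: pull text out of a scanned invoice so it can be
# keyed into an AR/AP system without a human data entry clerk.
# Assumes the pytesseract and Pillow packages; "invoice_0001.png"
# is a hypothetical scanned document.

from PIL import Image
import pytesseract

image = Image.open("invoice_0001.png")
text = pytesseract.image_to_string(image)  # image-to-text scanning
print(text)
```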

The business office staff freaked out and killed the project.

So I guess the bosses are only interested in firing people who aren't propping up the status of well connected political allies within the organization.

I think Altman is a hype man pushing a fad that is far from what it claims to be. I've seen no evidence that generative AI is approaching anything resembling AGI, but I do still see a much bigger hallucination issue than the industry seems to admit. GenAI has serious value, but in niche use cases (such as protein synthesis and cybersecurity data analysis). It makes for a mean chatbot. But "real" AI? Not even close.

This reminds me of Arthur C. Clarke's "any sufficiently advanced technology is indistinguishable from magic" chestnut, and I think most investors fell for that, are still falling for it, or are suffering from the gambler's fallacy. I've also noticed that some of GenAI's most enthusiastic users vastly overestimate its capabilities and either ignore its errors or are in use cases where errors don't matter (such as writing cold-call emails or marketing copy). Altman's a great hype man for that crowd.

I adore GenAI. Because of Perplexity, Komo, and Gemini, I've not used a regular search engine in months. But these systems have serious limitations and no intelligence. At some point, this reality will catch up with them. GenAI is a long-term INTERFACE revolution, not a short-term AI revolution.

I have a suspicion that there's been more ugly copy-paste coding and less code reuse in just the past year or two than in the several years before: that under the hood, especially in corporate enterprise systems, there's a mess of several, or dozens, of similar-but-not-quite-identical code chunks barfed out by an LLM and lashed together by inept lowest-bid contractors.

Trying to explain to non-technical executives why their penny-pinching several years ago has resulted in a system that can only be fixed by a very expensive complete teardown and rebuild, because it's now like a badly frayed sweater that will fall apart if you pull the wrong thread, is not fun.
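The pattern I mean looks something like this (an invented example): the same logic regenerated in slightly different forms where one shared helper would do:

```python
# Invented example of the copy-paste pattern: two near-identical
# chunks, each presumably generated separately by an LLM, where a
# single reusable helper would do.

def total_invoice_usd(items):
    total = 0.0
    for item in items:
        total += item["price"] * item["qty"]
    return round(total, 2)

def total_order_usd(lines):  # same logic, subtly different names
    result = 0.0
    for line in lines:
        result += line["price"] * line["qty"]
    return round(result, 2)

# The reuse that never happens: one function, called everywhere.
def total_usd(rows):
    return round(sum(r["price"] * r["qty"] for r in rows), 2)
```

Multiply that by dozens of chunks, let them start calling each other, and you get the frayed sweater.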

I keep forgetting about the coding side of this. You are right! A lot of code is a mess, and LLMs might look to some like a way to at least make it a cheaper mess. I don't know what the consequences of that could be, but maybe that explains the continued buy-in from executives.

I find myself nodding in agreement with every single word in this article.
