The business is a mess. Have Sam Altman and genAI simply become inextricable from Silicon Valley's project, at least for the foreseeable future? Or is it all going to collapse next week?
Uber only just became profitable this year, and only by raising its prices to the point where taxi companies are now an equally good, if not superior, alternative. After some misadventures with Uber and Lyft recently, and a series of clean, safe rides in taxis, I'll never use rideshare apps again.
Plus they face many, MANY challenges in the courts and in legislatures. Uber's business model is inherently flawed in a period when average citizens are pushing back hard on exploitative business models like gig work.
As for OpenAI, my experience has always been that a company's culture and business practices trickle down from the top. If the managers at the top are chaotic, unpredictable weirdos who are fleeing the company and tearing each other's throats out, like rats on a ship that is both sinking and on fire, then it's safe to assume that the staff who do most of the actual work are also searching for jobs elsewhere, stabbing each other in the back, and telling all their friends and family to steer clear of the place.
I firmly believe that what we're seeing is ultra-wealthy gamblers doubling down on a bad bet because they're incapable of admitting they made a mistake. If we actually taxed those idiots to the point where they didn't have so much money lying around, maybe they'd be more careful about not piling it onto bonfires like OpenAI.
Or bankrupt themselves, which I'm also fine with.
I think so too, but the logic underlying OpenAI's chief exports is so alluring that I wonder if they'll be lodged into place regardless, as Uber's were. But there's also a very real chance this blows up like the dot-com bust did... I agree there should be more friction and taxation to prevent casino-style bets on this stuff. Imagine a tech sector in which sturdy, proven concepts were valued over mad, cash-frenzied rushes for the next unicorn!
Investment and banking work best for society when they're boring. This is proven.
I think the self-admitted 'existential threat' of IP lawsuits is what will sink not just OpenAI but all of the LLM/generative models. I assume it depends on how the lawsuit in California that just went into discovery goes. If big corporations like Activision, AT&T, Disney, and Penguin Random House discover that a mountain of their stuff was fed into the models, I assume they'll act true to form and squish OpenAI for daring to steal from THEM, as opposed to stealing from other people, which of course they're fine with.
Plus the electricity, cooling and water costs are monstrous. I've heard rumbles about utilities putting extra fees and charges on data centers for disrupting the power grid.
The presence of a former NSA director on OpenAI’s safety committee, along with the general push to frame AI, and the companies in control of it, as a strategic resource for the US to “maintain global leadership,” suggests that OpenAI is definitely too big to be allowed to fail or succeed on purely commercial terms.
Yep. Defense contracting is another strong sign of this, for sure.
It's pretty hard to run a business in the US and not be in bed with the military and/or intelligence services in one way or another.
Just because OpenAI offered a retired general and former NSA director a big fat paycheck to try to get in good with the government doesn't mean the NSA will want anything to do with them. As horrible as they can be, the NSA is packed full of computer nerds who know exactly how LLMs work, and all of their many, MANY shortcomings.
On the first day of working at OpenAI you are given a t-shirt with "we are not a normal company" proudly emblazoned across it. At first, this seems quirky and cool. But as the sun begins to set behind the clouds, causing crimson light and bruised shadows to creep across the office floor, you look to the corner office where Sam can be seen standing in the middle of the room, staring at nothing, his eyes glazing over with tears as he mumbles over and over again "I'm sorry, as an AI language model I have failed you."
I think Altman is a hype man pushing a fad that is far from what it claims to be. I've seen no evidence that generative AI is approaching anything resembling AGI, but I do still see a much bigger hallucination issue than the industry seems to admit. GenAI has serious value, but in niche use cases (such as protein synthesis and cybersecurity data analysis). It makes for a mean chatbot. But "real" AI? Not even close.
This reminds me of Arthur C. Clarke's old chestnut that any sufficiently advanced technology is indistinguishable from magic, and I think most investors fell for that, are still falling for it, or are suffering from the gambler's fallacy. I've also noticed that some of genAI's most enthusiastic users vastly overestimate its capabilities and either ignore its errors or work in use cases where errors don't matter (such as writing cold-call emails or marketing copy). Altman's a great hype man for that crowd.
I adore GenAI. Because of Perplexity, Komo, and Gemini, I haven't used a regular search engine in months. But these systems have serious limitations and no intelligence. At some point, that reality will catch up. GenAI is a long-term INTERFACE revolution, not a short-term AI revolution.
I have a suspicion that there's been even more ugly copy-paste coding and less code reuse in just the past year or two than in the several years before. That under the hood, especially in corporate enterprise systems, there are messes of several, or dozens of, similar-but-not-quite-identical code chunks barfed out by an LLM and lashed together with ugly tangles of calls to each other by inept lowest-bid contractors.
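To make that concrete, here's a minimal, made-up sketch of the pattern I mean (in Python; every name here is hypothetical, not from any real codebase):

```python
# Hypothetical illustration of LLM-era copy-paste: two near-identical chunks
# that should have been one reusable function. All names are made up.

import csv

def export_customers_csv(customers, path):
    # Chunk 1: pasted in by contractor A.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "email"])
        for c in customers:
            writer.writerow([c["id"], c["name"], c["email"]])

def export_orders_csv(orders, path):
    # Chunk 2: same logic, pasted again by contractor B, except this copy
    # silently drops rows with a missing field instead of failing loudly.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "sku", "total"])
        for o in orders:
            if "total" not in o:
                continue  # undocumented divergence: the frayed thread
            writer.writerow([o["id"], o["sku"], o["total"]])

# The single reusable function both chunks should have been:
def export_csv(rows, path, columns):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)
        for row in rows:
            writer.writerow([row[col] for col in columns])
```

Multiply that by dozens of chunks and several contractors, and no one can tell anymore which divergences are bugs and which are load-bearing.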
Trying to explain to non-technical executives that their penny-pinching several years ago has produced a system that can only be fixed by a very expensive, complete teardown and rebuild, because it's now like a badly frayed sweater that will fall apart if you pull the wrong thread, is not fun.
I keep forgetting about the coding side of this. You're right! A lot of code is a mess, and LLMs might look to some like a way to at least make it a cheaper mess. I don't know what the consequences of that could be, but maybe it explains the continued buy-in from executives.
I find myself nodding in agreement with every single word in this article.
I was laid off in June by an executive pinhead who thought they could replace the large and complex document management, web forms, and process automation tool I managed with an LLM. The mandate from on high was 'save money using the exciting new AI technology.'
I pushed back, pointing out that I was ALREADY saving the university thousands upon thousands of man-hours of labor, and that I could use OCR (image-to-text scanning) to replace human data entry clerks for rote tasks in our AR/AP processes.
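For the curious, the kind of thing I had in mind was roughly this: a minimal sketch assuming the open-source Tesseract engine via pytesseract, with a made-up invoice format and field patterns:

```python
# Minimal, hypothetical sketch of OCR-assisted AR/AP data entry using the
# open-source Tesseract engine. The invoice field patterns are made up.

import re
from PIL import Image
import pytesseract

def extract_invoice_fields(scan_path):
    # Run OCR on the scanned page to get plain text.
    text = pytesseract.image_to_string(Image.open(scan_path))

    # Pull out the fields a clerk would otherwise re-key by hand.
    invoice_no = re.search(r"Invoice\s*#?\s*(\w+)", text)
    total = re.search(r"Total\s*\$?([\d,]+\.\d{2})", text)

    return {
        "invoice_no": invoice_no.group(1) if invoice_no else None,
        "total": total.group(1) if total else None,
        # Anything the patterns miss gets routed to a human for review.
        "needs_review": invoice_no is None or total is None,
    }

if __name__ == "__main__":
    # Hypothetical file name, for illustration only.
    print(extract_invoice_fields("scanned_invoice.png"))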
The business office staff freaked out and killed the project.
So I guess the bosses are only interested in firing people who aren't propping up the status of well-connected political allies within the organization.