Nvidia is shedding value, OpenAI is sputtering, dubious generative AI ads are flooding the market, and signs point to a deflating tech bubble. The great degeneration begins.
One crucial danger of generative AI that rarely gets attention is its impact on the already alarming decline in critical thinking skills among young people. High school and college kids who take the easy way out of written assignments by using AI are not developing those skills. Sure, some kids have been paying others to do their work for a long time, but I worry about the better students who now make the cynical calculation that they'd be dumb not to take the shortcut, too. Coupled with the reactionary right's crippling of school curricula and libraries, what does this trend mean for the next generation of workers? Will AI become a necessary crutch because workers are unable to perform without it? I'd love to see someone turn those AI commercials on their head: start from the same premise, then take a clever twist that demonstrates the impact this reliance on AI will inevitably have.
Love this idea — and thanks for this thought, and articulating a grave fear that many educators have about the long-term impacts of AI.
You're blaming the wrong thing if you're talking about kids in the United States.
The decline in critical thinking skills is caused by the gutting of public education, and by parents treating their kids' teachers as the enemy instead of as partners in securing a good future for their children.
If we fund schools properly, class sizes can be reduced, and teachers can be more actively involved and notice when a student actually knows what they wrote in their book report.
Now, the really smart ones will make the cynical calculation NOT to take that shortcut and will eventually emerge as real professionals in a labor market overflowing with semi-trained half-wits.
I'm not convinced doing the essays was developing useful skills for anyone. At least this way people have more time for useful side things (https://xkcd.com/519/ etc).
Anyone particularly self-motivated has been able to learn most relevant things from the internet rather than a physical library for years anyway, and AI helps with that (see https://nicholas.carlini.com/writing/2024/how-i-use-ai.html).
This comment is spot on.
I teach law and business to undergraduates at a Japanese university. Even with many international students, almost all students are non-native speakers of English. For years, I gave long, highly varied take-home exams with a 4-7 day turnaround, including many sections that students could freely choose among. The idea was to put everyone on a more level playing field, regardless of English reading speed. It was also convenient during the pandemic and even after (I am mostly based off-campus, due to my health). Many students found the exams interesting and welcomed the ability to choose which modules they would answer.
Suddenly, after the 2022 academic year, the whims of the OpenAI Board forced me to find an alternative form of evaluation. At first, oral exams seemed like the best solution, and some students even said they enjoyed them. Unfortunately, the enrollment threshold at which they become unfeasible is pretty low, as I painfully discovered. I had larger enrollments in the term that just ended in July, and wound up in the hospital for a few days from complications of sitting so much -- averaging 10h per day for 4 days, without more than the most rudimentary breaks. I've also tried making in-class presentations count more towards the grade than in the past, but many students can't resist simply reading from AI-created texts (easy to tell, when they start using vocabulary they never used during the course, and discussing topics unrelated to anything we did in class).
So now it's back to the drawing board. The Chat is out of the bag, so to speak. Educators must work under the burden of an ultra-advanced, free cheating machine -- probably without any of the offsetting benefits to society promised by Sam et al. Public LLMs have simply disrupted my teaching and my health, and neither needed disrupting. At a minimum, all these LLMs should go back behind high paywalls.
Yes! I wish I could "like" your comment 10 times!
I’m 72 years old and have been a professional accountant, working my way up from the ranks to CFO. I laughed every time I heard that AI was going to replace accountants. Accountancy is an art that requires understanding people, business processes, accounting rules, the reporting requirements of management and of inside and outside regulatory agencies, and a bunch of other stuff that’s not worth going into.
AI can’t do that. I always thought the promise of AI was like the Wizard of Oz: there is always somebody behind the curtain, riding the algorithm, changing the parameters, making it work. More theft of IP by the tech industry.
I’ve just read Blood in the Machine. It’s a wonderful book, and I’ve come away thinking, as ever, that working-class heroes are all heart and generally have poor historians, while the oligarchs have no heart and can afford to have whatever history they want written. So thank you for writing one for us. The connections to Byron and the Shelleys were unknown to me and very interesting. Keep up the good work. Cheers.
Accounting was the first class in my MBA decades back. I went into it thinking "come on, a bit of correct bookkeeping, that is boring, happy to get that out of the way". How wrong I was. It was by far the most surprisingly interesting subject of the entire MBA, with ethical issues almost everywhere you went. It helped that it was taught by the best professor of the entire MBA, who tended to reply to a question with "I don't know! What do *you* think?".
Agreed. It's one of the classes I've drawn on over and over again as a computer programmer. I took an evening class taught by a bank auditor, for fun and to share his passion. The stories he told in class about the sorts of things white-collar criminals and shady business owners got up to stick with me decades later.
Marty in OZARK is one example. We see it all:)
Thank you for sharing your experience; it's a pleasant surprise to hear. Accounting is more often referred to in negative terms; maybe we should try harder to be friends with writers.
It's not an uncommon perception -- "just balancing the checkbook and a little bookkeeping" -- and that's OK.
The CPA exam has (or had) a complete section on ethics. CPAs have a fiscal responsibility to all of the end users (banks, shareholders, employees, SEC/IRS) of the financial statements they certify. There is plenty of room for arguments and push-back from clients and owners who want to see better results, and trouble can occur when the accountant relents. Remember Enron and the collapse of Arthur Andersen.
As I mentioned, an accounting degree led me to start work at a CPA firm and eventually become the CFO (accounting, finance, and IT) and then COO (manufacturing, distribution) of several multinational manufacturing companies. It's been an interesting and fun ride.
I was in band and orchestra in middle and high school, and I have played drums and guitar as ongoing life hobbies. I play poorly but happily, and I use a lot of music analogies to explain things. This is how I think about accounting.
There are a multitude of instruments and genres to learn and play. Some people play in very small local bands for fun and don't have to be very good, and they are happy. Some people play with the large symphonies or tour with famous groups, and they have to study, practice, and work very hard to be the best, and they are happy... and all of them are musicians. Can AI write and play music? Kind of, but not really.
That is the field of accounting to me. Different industries -- finance, manufacturing, service, SaaS, mining, government, non-profit, and on and on; different sizes, regulations, and personality types (tree trimmers to tech bros). Some accountants embezzle, some help launder money, and some do forensic accounting, which would make for a good series. We are alike, but we are not the same.
I project this idea of complexity and malleability onto most other professions and vocations, a view formed over my 50 years of work experience. Primarily for these reasons, I think AI is just a scam to pass information to a few for their greed and benefit, and we are seeing it fall apart as I write this.
I am currently investigating how to install Linux on my home computers to exit the Microsoft- and Apple-controlled operating systems. As much as possible, I am done with big tech; it has become a cancer.
Me breaking frames.
Long live the Luddites. Cheers
While many, such as Gary Marcus, have a long history of pointing out the fundamental issues that prevent neural-net approaches from becoming really trustworthy, I think you were the first one (at least in my unavoidably limited experience) to point out that GenAI doesn't need to be good to be disruptive, and how it can be disruptive: as the introduction of a 'cheap' category in some creative arts (like illustration). And that perspective is really not shared widely enough. Which is why I added my own post about it: https://ea.rna.nl/2024/07/27/generative-ai-doesnt-copy-art-it-clones-the-artisans-cheaply/
At the risk of seeming biased, I think this is right on. Thanks for sharing, Gerben, cheers.
I think this is a critically important point. If multiple experiences with outsourcing have taught me anything, it’s that a buggy, mediocre product that is super-cheap beats a good reliable (but relatively pricy) product 9 times out of 10. It’s simply what the customers will purchase when it comes time to pay (while complaining bitterly about the shoddiness of the product).
We’re about to see the Walmartization of a ton of “journeyman” creative work, and that could take a decade to play out. After all, I’m old enough to remember when the cultural hype around the Internet hit its peak. Didn’t mean the Internet became irrelevant.
I'm convinced that the true reason the Big Tech companies aren't quitting the race is that it all comes down to the boys having a pissing contest: if you step out, you're proclaimed a p*ssy. The contest is both about actually making some fantastic version of generative AI and about having the most money to burn in the process. Everything else is second-order reasoning 🙄
Tech Bros have even less connection with reality than Wall Street guys, and that tells you something.
Wow! Generative AI seems to follow the economic model of a Ponzi scheme.
Damn who could’ve seen this coming?
You bet!
Since Q4 '22, I’ve felt that “AI” smells like a coordinated tech scam, or at least corporate dogpiling. Major tech stocks had crashed hard from late 2021 through late 2022. Billionaires always need more money. And the corporate media can be bought to sell any BS message.
It worked, didn’t it? Most tech stocks turned around. Suddenly, everybody was calling anything that had a filter “AI”. Doom-and-gloom messages were sent to every industry publication on the planet. I used ChatGarbage, which is a fine thesaurus but absolute crap at anything else, and it has got a lot of people into trouble. To sorta paraphrase Ron Jeffries, it spews out incorrect information, but with such authority.
If I had to pick a tipping point for the rest of the world to turn negative on “AI”, it was when Mira Murati opened her evil mouth and disdainfully said what she said about creatives. Techies like Mira Murati and Scam Altman are not human, but their pronouns are money and power.
During this time, I first heard of Neil Postman’s book Technopoly, published in 1993. For a book written before most people had dialup internet, it is remarkably prescient and extremely relevant today. I would go as far as to call Neil Postman a prophet.
"It's not quite fair to say it has turned out just like crypto or the metaverse or web3, in terms of being pure vaporware and absent actual social utility": Pretty close, though. I often refer to it as Stupid Computer Tricks (with apologies to David Letterman). More soberly, I call it not AI but CCI: Cargo-Cult Intelligence, because it goes through the motions but largely lacks the substance of intelligence. Intelligence, as genuine computer scientists have long realized, involves modeling the world, not just language (note that most non-human intelligence is plainly non-verbal), and learning from experience in the world. "Generative AI" appears to have been created (or at least promoted) by people who misconstrued John Searle's "Chinese room" scenario as an instruction manual. (Searle introduced the scenario as part of an argument against the possibility of "strong AI". I find the argument unpersuasive, but the scenario is a pretty good description of an LLM.)
I believe the main cause of all this is that Silicon Valley (meaning the dominant culture of that place, which isn't confined to that place) has become ever more dominated by charlatans and money-grubbers like Sam Altman and Elon Musk and hence ever less attractive to engineers interested in making useful things. I certainly wouldn't work for clowns like Altman and Musk.
(I speak as someone who's spent many years in and around the computer industry, from doing research in the CS department at UC Berkeley and the Swedish Institute of Computer Science in the 90s to designing and building web applications for various purposes today. Some years ago, I spent six months taking in the Silicon Valley scene before concluding it wasn't for me, mainly because it wouldn't help me accomplish anything I cared to accomplish. Since then, it's gotten much worse.)
It’s been nauseating to hear “AI this” and “AI that” the past year. Tech has generated a lot of wealth, but its innovations have generally made things worse: applicant-tracking systems prevent talented people from getting an interview, and AI has made companies think they need fewer humans employed, all while a few companies get rich. Invest in quality humans. Generative AI, essentially a smarter chatbot, is just a tool, like a ruler to draw a straight line or a protractor to make a circle. You still, and will always, need a human to draw the line and make the circle.
I think public opinion has a tendency to swing from one excess to its very opposite. Let's try to be lucid here. Web3, the metaverse... that was pure vaporware. Yes, people spend a lot of time playing Fortnite, and there is (marketing) merit in thinking of it as a "place", a hip spot you can partner with, but, again, the biblical talk of a "metaverse" was pure delusion.
AI is something else. First of all, there is way more to it than "generative AI". Think of well-established applications such as supply-chain optimisation, or nascent ones such as early-stage cancer diagnosis. That's very practical stuff that can save millions... and we are not just talking money.
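For a concrete sense of the "well-established" kind being contrasted here, below is a minimal sketch (my illustration, with made-up numbers, not anything from the comment) of the classic transportation problem: choosing factory-to-warehouse shipments at minimum cost, solved as a linear program with SciPy. This is the decades-old, non-generative strain of "AI"/operations research:

```python
# Toy supply-chain optimisation: ship goods from 2 factories to 3
# warehouses at minimum cost, subject to supply and demand limits.
# (Illustrative numbers only; scipy.optimize.linprog does the solving.)
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0, 9.0],    # per-unit shipping cost,
                 [5.0, 3.0, 7.0]])   # factory i -> warehouse j
supply = [250, 300]                  # max units each factory can ship
demand = [120, 200, 180]             # units each warehouse needs

n_f, n_w = cost.shape
c = cost.ravel()                     # decision vars x[i, j], flattened row-major

# Supply constraints: sum over j of x[i, j] <= supply[i]
A_ub = np.zeros((n_f, n_f * n_w))
for i in range(n_f):
    A_ub[i, i * n_w:(i + 1) * n_w] = 1.0

# Demand constraints: sum over i of x[i, j] == demand[j]
A_eq = np.zeros((n_w, n_f * n_w))
for j in range(n_w):
    A_eq[j, j::n_w] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
              bounds=(0, None))
print(res.x.reshape(n_f, n_w))  # optimal shipment plan
print(res.fun)                  # total minimum cost
```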
Now, as for GenAI... It's still gimmicky, derivative and, yes, tacky. At best, it should be used internally, for quick visualisations and as a tool (definitely not the only one!) for brainstorming. Will it ever improve beyond that? I don't know, but what we are witnessing is a collective correction. The hype and the novelty are left behind; what we are left with are semi-interesting tools that have some future promise and some current merits when employed correctly... and with taste, the one ingredient AI most certainly isn't able to replicate. This is proving a big issue for companies that have no profitability nor, it seems, a path to profitability. It's their problem, though.
One industry that, in my opinion, is already being disrupted is Search (and its vulture-like relative, SEO). You can diss Perplexity as much as you want, and they have real issues to contend with (how to scrape content now that publishers are on their toes, how to make money once you start paying them, etc.), but, anecdotal evidence: I've reduced my use of Google by 50%, and plenty of other early adopters did the same.
A few counter-arguments for you to consider. If AI is so bad, why did Hollywood go on strike to prevent it from replacing them? Saying AI is over is like looking at the first spluttering propeller planes and declaring the aviation bubble is about to burst. What you are seeing is the "trough of disillusionment" that all new technology goes through before blasting past the old tech (in this case, the old tech is human brains). (Ask AI about the "trough of disillusionment"!)

It's not true that no one is making money from AI; I would draw your attention to Palantir, which is growing at 27% while already making healthy profits. Its AI system has been proven to provide massive cost savings to companies in every sector of the economy, and to the military.

Sure, Nvidia has lost what, 30% recently? But it's still up a lot. Stock prices don't move in straight lines. Even when they are trending upwards, they can have big pullbacks, and vice versa. Sometimes the whole market goes down and no one knows why. I wouldn't rely too heavily on stock prices to support your argument. Either this article or my comments won't age well. Let's see who's right in 5 years!
I wonder if part of this was also a convenient excuse for corporations to shed staff without hurting their stock price. I hear that return-to-office mandates are being imposed in the hope that people quit rather than have to be laid off, so AI could be a convenient excuse for that, too.
Other than any turmoil the tech collapses cause in the economy as a whole, this is great news!
Your premise here is something I've suspected to be the case, but I don't have the knowledge to articulate it or even be too confident about it. So thank you for writing this! I feel both validated and humbled. I know just enough about AI to be aware that much of what is offered to consumers is pure marketing hype. For example, features on smartphones that have been available for a while, such as circling a portion of a photo to launch a search, are suddenly branded as AI and presented as if they're new. The hype is getting more blatant and, as you point out, more obviously desperate.
It's very telling that big names like Goldman Sachs and Gartner are asking these questions. Lots of capex sunk on the back end, not much return on the front. We may be seeing a lagging effect among venture capital, as July was apparently the biggest month for AI deals yet, but even that seemingly endless money spigot might start to tighten. Interest rates will not go back to 2020-era levels anytime soon, so there was already going to be a shakeout anyway.
Is there any way to undo the damage of GenAI on Gen Z?
Ya, you're right about the bubble, but you are wrong about AI.