"I feel like I’m hiding plain sight, terrified someone will notice I’m actually doing all my own work." What a backwards world we live in.
That one hit me hard too.
Yeah, that quote will stick with me.
I would echo what the Computer Programmer in this article has said (under “Gradual addition of AI to the workplace”), except for me it's been anything but gradual. The company has suddenly forced AI into everything, including coding. I'm now looking at other opportunities. I want to code, not prompt. If I have to, I'll go into another line of work entirely.
This needs to be repeated, highlighted and shoved in every AI sycophant's, Wall Street stock monkey's or big tech executive's face until they are forced to counter it:
Forced adoption is not adoption. Just because you have faked the metric does not mean you have made it.
Bullshit metrics rigging is the name of the game now. Wall Street's pro-monopoly, pro-fascist and anti-worker machine of auto-pumping any stock that monopolises, lays off workers and "adopts AI" has empowered the cabal of McKinseyists, C-suite execs and technofascists against the people.
First it was monthly active users: cue AI companies giving away products for free, or forcing them into products that were already successful. The desperation at Google to brag about MAU after forcing it into Google Search, Android, Lens and soon Gmail. On its own, the Gemini app is a complete flop.
Then it was tokens, a bullshit metric once one realises that a) adoption has been forced (workers made to use code assistants, Google auto-generating AI Overviews in search) and b) token output has increased 100x since the release of "reasoning" models.
Now it's adoption. Don't you know, 30% of all code is now AI generated (oh, we've included auto-complete in that, wink wink). "Workers are adopting AI faster than any prior technology!", says the McKinseyist. No shit, execs are literally threatening to fire people if they don't use it. A farce if there ever was one.
The reason they are hopping desperately from one metric to another is that for three years, and still to this day, they have been unable to address the elephant in the room.
AI is not valued by the people and makes little to no money. $600B+ spent and <$20B in revenue. Take away the foundation models and it's <$5B in revenue. The best-selling AI SaaS product is a code assistant making $0.5B in revenue. MSFT's Azure AI is making as little as $2B. MSFT, Google, Salesforce and Adobe have pulled their standalone AI products and now forcibly bundle AI alongside a mandatory price increase, because standalone sales were so bad.
They run away from revenue and profit metrics because those show the world how much of a flop their project has been. Any time you want an AI exec or shill to squirm, just ask about those two things.
It's time for workers and the people to form a class, a class far more powerful, against the machine. Across so many industries, people have not only rejected AI slop but actively hate it and try to ruin its image: see the recent humiliations of Klarna, Suno, Duolingo, MSFT Gaming and Meta. It is working. Public backlash breeds more public backlash. A wonderful flywheel of AI hate.
The people continuing to reject AI will kill the AI industry. Most of the money eventually has to come from the consumer.
Hear, hear [raises hammer].
100%
Many worthwhile observations (like: "if it worked so well, they would not have to force it onto us"). The tomatoes story floored me, what a great moment. And yes: this wave of AI slop is most likely going to bite society in the ass big time.
But much of what I read is not so much AI-specific. It is the lack of engagement of (upper) management with the complexities of large IT landscapes (instead believing in simplistic nonsense). It is stupidity married to greed. This is not new, it's just very bad currently because GenAI can be so utterly convincing to us limited humans.
In many ways we are confronted these days not so much with how intelligent machines are but how limited the intelligence of the human species really is.
While I definitely understand their plight - I have to deal with some of that in my own work, though it’s nowhere near as bad thanks to the work culture in my country, which is more… relaxed, let’s say - I can’t help but notice that the stories aren’t “I lost my job because AI is able to do it better”, they are “I lost my job because upper management is hype-pilled and thinks AGI is around the corner”. Which is a bad thing, but suppose for a moment that AGI is not around the corner and AI is a bubble? Those jobs will be back with a vengeance once the technical debt catches up. You can deny and postpone reality for a long time, but when your codebase is an AI-written mess without documentation, tests, or the knowledge that would otherwise live in the heads of those who wrote it, it will collapse sooner or later.
Or it is an excuse for a headcount reduction
In some ways this is comforting. There are no stories indicating AI is actually doing work successfully, or even accelerating it. Far more stories of struggling managers and workers trying to get LLMs to do things they can’t do, and suffering.
...or not using LLMs to do things :-)
Great article, Brian. I can’t say it’s a “feel good” piece. As I’ve said before, my partner is a high-tech patent attorney who runs his own firm. I won’t reveal his client list; just use your imagination. I don’t use any AI apps, including ChatGPT. When I asked my partner if AI was a revolution in the making or a hype bubble, his answer was: “Who the fuck knows.”
Personally I think it’s a hype bubble. Anyone remember the pets.com dead sock puppet? We went through the 2008 recession here on the San Francisco Peninsula. Every empty building was a stark reminder.
I don’t claim to be a tech genius but I do study history and my partner is an engineer as well as an attorney. I watched the internet being born and I used to work in tech support.
Here’s my takeaway from my experience and study of history: We don’t learn. We repeat the same mistakes expecting different results. That’s one definition of insanity.
Remember what Agent Smith said to Neo in The Matrix about classifying our species: “You are a virus. You move into an environment and destroy it.” There’s a reason the movie is considered a modern classic.
We’re not learning now. These Star Wars-obsessed nerds are, and have always been, short-sighted, and they don’t have our best interests at heart. Palantir and Andoril (see the Star Wars obsession?). My husband is a nerd. I grew up with nerd friends. My nephew is a brilliant engineer. But they have a huge blind spot. They forget the humanity which allows them to flourish. (Not so much my husband. He has a heart of gold.) We have to rein them in for the survival of all of us.
Love this, thanks — lots of 'who the fuck knows' going on these days!
I'm pushing my glasses so far up my nose as I write this, but it's Anduril, and those are LoTR references, actually. A shame, because while LoTR is in some ways very conservative/Christian (it restores a blood-line monarchy, for Christ's sake), it also believes that a rejection of power and small acts of good are the only way to defeat evil, and, in pagan/heroic fashion, that the defeat of evil is temporary. I don't think we should let Thiel's cronies claim it for their own.
"My nephew is a brilliant engineer. But they have a huge blind spot. " I started as an engineer and can confirm. I had to go to therapy at 40 to train myself to feel, as early on in life I'd trained myself not to, and I finally hit a wall.
"I don’t use AI. I morally object to it, for reasons I hardly need to explain to you. And now I feel like I’m hiding plain sight, terrified someone will notice I’m actually doing all my own work."
God forbid.
I'm an AI lawyer and AI philosopher, and there's plenty of work on both fronts. So AI hasn't killed my job per se, but the current hype narrative around AI and the unfettered race to "AGI" have foreclosed the creation of so many much-needed jobs in responsible AI and AI ethics. That's just one of many troubling aspects of this whole situation at the moment.
Also, as a former software engineer, that story from the Google engineer was really depressing. 😕
Wow, what a great service to humanity this series is!
Cheers Richard!
Great work Brian. And I would love to read more by the pie-baker! Loved her writing and observations.
Thanks — and I hope she sees your comment!
So the PMC (professional-managerial class), who don’t really understand the current work of the people they manage, are given a good-enough reason to downsize while accepting output they don’t understand as a good-enough result.
A succinct summation, omitting only those who are cynical enough to not really care if it's not good enough and use it as a means of exercising control and/or power!
I call it the “snake eating its tail”
Fantastic reporting and kudos to your contributors. Brave, articulate, humane.
Thank you so much for this. I'm looking forward to more installments!
One thing I have always wondered about is why more tech people aren't working in the public sector or for small businesses. Every organization and business now uses tech, though not necessarily full on AI, and it has always been difficult to lure tech from the well paying jobs at the big companies. Wonder if that will change.
Also noting that a few people have mentioned that basically none of these stories are from people who've seen AI do their job; there are a lot of variations of "AI hype was used as an excuse to eliminate my job" or "forced AI use made my job suck".
My own take is that it's really just about macroeconomics. The stats on FRED show clearly that the tech jobs downturn started when interest rates spiked, months before ChatGPT came out, and if anything it has decelerated since then. In 2001 the narrative/excuse was outsourcing, in 2008 it was the "skills gap", this time it's AI.
I use GenAI every day to write code and it's ... got its uses? Is it a bigger deal than web apps, cloud computing, source control, CI/CD, containers, package managers, or any of the other innovations during my career that have added 10-20% to my productivity? Probably not, if you know what you're talking about. The others just address challenges that senior management doesn't tend to understand, while this one has a chat interface.