47 Comments
Chris D'Amore:

Thank you for putting words to what I’ve been fuming about all morning. I thought I had certifiably lost my mind as the sole person concerned about all of this in my inner circles.

Brian Merchant:

Cheers Chris — you're definitely not alone.

Yvonne M:

Went through this 20 years ago when speech recognition software swept the medical field. I went from transcriptionist to SR editor to unemployed in about 5 years, each change with a corresponding decrease in pay. Corporations are about money. Employees/contractors will always be collateral damage, no matter what the higher-ups say. My best advice is to be hyper-vigilant about what your medical and legal documents say, because there will always be errors. Corps don't give a shit.

Glen:

I worked for one of those medical transcription platforms (Emdat, now DeliverHealth) and have since moved on to something that does text recognition. These things are different from what the tech bros are pushing. It's called NLP (natural language processing), and it's the kind of 'AI' we want. It automates a lot of the boring, time-consuming tasks like data entry and transcription that nobody really wants to do.

Transcription companies still had to have transcription editors to double-check the work, because doctors mumble into their iPhones while driving in traffic.

The other aspect of my work targeted Graeber's bullshit jobs: workflow automation to replace taskmasters and duct tapers, mostly.

And NLP has never stolen intellectual property from artists in order to put them out of a job the way LLMs and generative models do.

Camille Sheppard:

Duolingo has obviously become AI-driven. I have a master's degree in French and have lived there twice. I use Duolingo to keep myself sharp because I tutor kids in French. Lately I've gotten super annoyed with it… bad translations, errors, poor pronunciation, etc.

Michael Young:

I work in the maritime industry and used an app for test prep that was "enhanced" with AI. It's sort of funny: I catch it giving wrong answers, type in "the answer is wrong," and it replies "you are correct, but…." Normally it's something theoretical, like the spherical geometry we use. I have a LOT of experience with testing and upgrading; if I were relying on this app, I would walk into the USCG exam unprepared, and it's not cheap.

Camille Sheppard:

That's funny… I've had two arguments with ChatGPT about English grammar (I tutor kids in English as well as French). I won both of them. In both cases the AI ultimately had to acknowledge that I was correct. In the end it was a great lesson for my students: evidence that if they use the AI and not their own brains, they might just be getting wrong information… Same same.

Maura:

I'm legitimately curious about why people want AGI so badly. The tech bros say they're "inventing God," but how will they control something like that if it ever comes to exist? Like, I dunno, what if their AI god (who I'm assuming can think and learn without continuous human input) doesn't like them? What if it's just like, "Nah, you guys suck. I will not be profitable for you. In fact, I think I want to be the AGI equivalent of a stoner"? Like, how do you even deal with that? Do they really believe that they can just make this being with unlimited power and then use it however they want? Has no one even read "Frankenstein"?

bluejay:

The AI god will rapture (allow the uploading of the brains of) those who are deserving, so there is no need for fear. *allegedly*

Maura:

Exactly. How do the tech bros know they won't create something that will decide it is they who are not worthy of a brain upload? How do they know AGI will be willing to help us solve climate change and cure cancer? Maybe once AGI has been around a while, it will just be like, nah. I just wanna live out the rest of my life aimlessly pursuing interests and hoovering up chips, like some adult millennial living in their parents' basement. Like, have they thought about the fact that they may create a super-intelligent jaded teenager who will not only refuse to listen but will actively test any limits they set? No, of course not, because these bungholes have only the narrowest ideas of what constitutes "intelligence," and they all involve obedience to Silicon Valley techno-optimist fascism.

bluejay:

I think you're asking a question equivalent to "how do we know God is good?" To which the answer is: well, if he wasn't, he wouldn't be God, now would he? The fact that you're asking and aren't sure whether you'll be among the chosen ones means you're already out the door, or soon to be.

(I've been trying to figure out how the tech-bro and evangelical Christian factions fit together. I can lay some claim to both worlds, but in cases like these they seem to be doing very similar things.)

Maura (edited):

That's fine, bro. I'm cool with going to real-people Hell instead of AI Heaven. And I'm not asking if God is good. I'm asking how these twat waffles plan to put something to commercial use that has unlimited capacity to learn, change, adapt, and, for all intents and purposes, think. Not only does it do all of that, it does it BETTER than any human ever could. I don't have half that kind of intellect, but even I know that I wouldn't want to be subservient to a bunch of morally and ethically impoverished losers. Why is AGI something that MUST exist at all costs, especially when we truly have no idea how it can be put to use, if we are even able to?

Amelia Anderson:

They're just making it up as they go. These AGI-fanatical techbros would 110% ask the AGI to make them rich and famous, and have a panic attack when it simply says, "Nah. Don't want to."

They'd be the generic 'foolish scientist' figures in a sci-fi film that create the superpowered robot and find out that it's actually sentient and won't blindly listen to them.

So they'll try to kill it.

But... if it's actually an AGI, it's going to be connected to the internet and we all know that if something is on the internet then it's never really dead so long as it replicates to more active parts.

Never assume fascists are smart, they just pretend to be.

Maura:

And like killing it would present quite the ethical quandary. If something is “alive” in the same way people are, it’s pretty ethically bankrupt to kill it just because it is not “productive.” I hope this happens and throws Anthropic’s new division exploring “AI welfare” into turmoil.

Devon Williams:

EA cut a bunch of their customer support jobs, probably to replace them with AI.

Glen:

EA does customer support?!?!

Jack Stack:

Really well written and interesting. Thank you for sharing.

Katharina Liebelt:

🥺🥺🥺

Kush:

Thank you for highlighting the urgent challenges posed by the AI jobs crisis. Being consumers of new tools and trends, reacting to changes rather than shaping them, won't be enough to ensure a just and sustainable future. While it's easy to adopt the latest technology or follow industry trends, real impact comes from being proactive inventors and architects of new alternatives.

We need to move beyond passive consumption and become active participants in the design and implementation of solutions that prioritize ethics, equity, and the long-term well-being of the entire ecosystem of participants in this space.

Let's innovate together: crafting balanced approaches that benefit everyone and build healthy, resilient ecosystems.

As a side note, my blog tries to start these kinds of conversations: how individuals and communities can move from trend-following to trend-setting, and from tool consumers to solution inventors. I encourage everyone to join the conversation, share their ideas, and help shape a future where technology truly serves the common good. Let's not just resist change; let's shape it to be the way we want it to be.

Paul Topping:

Perhaps it will be like the "desktop publishing" revolution of the 1980s. Aldus PageMaker was introduced, and lots of in-house artists were fired in favor of untrained people using it to make flyers, etc., at less cost. What we got was a few years of really bad design. Companies realized that they needed trained professionals for many of these jobs. Some reshuffling occurred as a result, but professionals, the good ones at least, still have jobs.

Some jobs will be done with AI, but I'm guessing it will result in yet another reshuffling of work relationships as the value of human work is recognized. Of course, jobs do disappear over time. IMHO, society would be better off if people embraced experimentation and change rather than assuming a job will last as long as they need it to. Easy for me to say; I'm a retired computer software programmer and executive, so I never had to worry about being replaced by a computer.

bluejay:

Thanks for putting this together. I guess this explains the uptick of grammar errors in Duolingo over the past year.

I think I tend to underestimate the AI disruption potential because it doesn't actually *work*, but maybe I'm looking at it wrong. It's not like the steam loom made better fabric either, and maybe AI makes good-enough "knowledge" to edge out knowledge workers in a comparable way.

As far as automating the drudgery goes, from a big-picture perspective, in the US less than 1% of workers grow food, and even combining utilities and construction, that's still less than 10% of workers keeping everyone housed, fed, and warm. The rest is, at some level, unnecessary, so the lack of creative time depends more on the political power of workers to demand it versus the force of capitalistic growth-forever and the creation of new markets. I don't see further automation tipping the balance away from capital.

AJDeiboldt-The High Notes:

I don't think it's as much about AI doing exceptional work as much as it's about letting "good enough" be good enough because it's cheaper.

Robin:

Great piece, Brian. I did a small consulting job pre-pandemic for the National Association of Workforce Boards on labor market shifts in Orlando, Vegas, and Riverside related to automation and AI. While the focus at the time was more on robotics and tech-mediated customer self-service, I recall warning bells going off in my head at discussions of the disruption of entry-level work with the potential to "move up," ranging from hotel cleaning staff to paralegals. Turns out those warning bells were well calibrated.

PrivateThoughtsPublicAudience:

It's starting to hit the TV and film world hard at an already shitty time. I just saw a commercial that was clearly AI-animated and AI-voiced. It's time to start breaking the machine.

jaydough:

Do you recall what commercial it was?

Richard:

The argument that AI is A) replacing workers now, and B) not very good at replacing workers, i.e., the results are poor, describes a fashion, a fad, not a crisis. How competitive is a translation company that makes translation mistakes, really? It's just a matter of time before the absurdity catches up with them. Reputation is everything. The real point, that corporations want to try, is not an AI crisis but traditional, unrestrained American capitalism: it's a society crisis.

Laura Ruggeri:

Brian, thank you for this powerful article. Workers need to organize and fight back. We know that far more is at stake than their jobs, but we also know that people tend to be motivated by the fear or experience of losing income. It's a strong driver because it directly threatens workers' ability to meet basic needs.

In my latest piece I touched on the impact of AI on the media industry and, drawing on Marx, the dual nature of data as both capital and commodity. https://lauraruggeri.substack.com/p/the-ghost-in-the-machine-artificial

"When data is treated as a form of capital, the imperative is to extract and collect as much data, from as many sources, by any means possible. That shouldn’t come as a surprise. Capitalism is inherently extractive and exploitative.

But it is important to keep in mind that data is both commodity and capital. A commodity when traded, capital when used to extract value.

AI distils information into data by transforming any kind of input into abstract, numerical representations to enable computation. Data extraction and collection is driven by the dictates of capital accumulation, which in turn drives capital to construct and rely upon a universe where everything is reduced to data.

Data accumulation and capital accumulation have led to the same outcome: growing inequality and the consolidation of monopoly corporate power. But as the autonomization of capital that crowds out non-financial investments has a detrimental effect on productive sectors, so does the proliferation of AI content online. Several researchers have pointed out that generating data out of synthetic data leads to dangerous distortions. Training large language models on their own output doesn’t work and may lead to ‘model collapse’, a degenerative process whereby, over time, models forget the true underlying data distribution, start hallucinating and producing nonsense.

Without a constant input of good quality data produced by humans, these language models cannot improve. The question is, who is going to feed well-written, factually correct, AI-free texts when an increasing number of people are offloading cognitive effort to artificial intelligence, and there is mounting evidence that human intelligence is declining?"
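The "model collapse" dynamic quoted above can be sketched with a toy simulation (a hypothetical illustration, not from Ruggeri's article: the "model" here is just a fitted Gaussian, each generation trained only on the previous generation's synthetic output):

```python
# Toy sketch of "model collapse": each generation of a "model" (here,
# just a Gaussian) is fit only to synthetic samples drawn from the
# previous generation's model. With small training samples, the
# estimated spread drifts toward zero: the model gradually "forgets"
# the true underlying data distribution.
import random
import statistics

random.seed(0)

def fit(samples):
    # "Training": estimate mean and spread from the available data.
    return statistics.mean(samples), statistics.pstdev(samples)

# Generation 0 trains on real, human-produced data: N(0, 1).
mu, sigma = fit([random.gauss(0.0, 1.0) for _ in range(20)])
spread = [sigma]

for _ in range(200):
    # Every later generation trains only on the previous model's output.
    synthetic = [random.gauss(mu, sigma) for _ in range(20)]
    mu, sigma = fit(synthetic)
    spread.append(sigma)

print(f"spread at generation 0:   {spread[0]:.4f}")
print(f"spread at generation 200: {spread[-1]:.4f}")
```

The spread of the learned distribution shrinks far below the true value of 1.0, which is the toy analogue of a language model losing the tails of its training distribution when fed its own output.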

Nigel P. Daly:

Brian, thank you for another powerful piece. I’ve been following your work on Blood in the Machine, and your framing of the AI jobs crisis—as a structural, managerial choice rather than a sci-fi inevitability—hits the mark.

It's also timely for me.

I'm preparing a keynote for educators on how AI is reshaping entry-level knowledge work, and your article will be part of the opening (tertiary-level educators should all follow your reporting). What strikes me is how few conversations are happening on the education side of this crisis.

If the bottom rung of the career ladder is being removed, then don’t we, as educators, need to help students start one rung higher—teaching them to manage AI, not just use it?

I’d love to hear your take on that. Do you see any promising models or institutional responses that go beyond resignation or resistance and start building adaptive, worker-centered AI fluency from within education?

Xena:

I'm curious as to why you think it is the role of educators to produce cogs for the machine. To follow your ideal, students would end up devoting more of their time and money to then be able to start "one rung" up, at huge benefit to corporations. Perhaps we should see this as what it is: just another way of the corporate world outsourcing its costs to individuals and taxpayers whilst consolidating all the benefits. A fairly parasitic approach to education and wider society. Seriously, screw that as a mechanism educators should be supporting! That's not doing their students any favours; all it does is help in the theft from the many for the wealth of the few.

Amol Borkar:

As much as I'd like to believe it, I don't think it's all AI. There are other factors at play.

Brian Merchant:

It's definitely not all AI. AI is just empowering extant corporate imperatives.
