"AI is an attack from above on wages": An interview with cognitive scientist Hagen Blix
The author of 'Why We Fear AI' on why he sees generative AI as "class warfare through enshittification."
Greetings all,
Hope everyone’s hanging in, hammers at the ready. So, I know I promised a podcast in the last newsletter, but, well, I blew it on the recording. I am not, I regret to inform you, an audio engineer, and the sound quality just wasn’t there. However! The conversation, with the New York-based cognitive scientist and author Hagen Blix, was so good and timely that I was moved to LABORIOUSLY transcribe our chat, largely BY HAND, edit it, and present the finished product here as a Q+A instead.
Blix has a book out with coauthor Ingeborg Glimmer, called Why We Fear AI, which argues, convincingly, that… well I won’t give it away quite yet because that’s basically my first question. But suffice it to say that I’d been meaning to chat with Blix for months now—things have just been so, well, you know. But he makes a number of compelling arguments about the nature of AI, why not just workers but bosses are afraid of it, and why we shouldn’t see it as a productivity tool but a wage depression tool. It’s all good stuff, and I’d heartily recommend anyone pick up the book and give it a read; it’s nice and short to boot.
As always, work like this—research and interviews and the editing and transcribing thereof—is only made possible by my glorious paid supporters, who chip in a few bucks a month, or $60 a year, so I can keep the blood in the machine pumping. If you value this stuff, please consider doing the same, so I can continue to publish great discussions like this one, with folks like Hagen, and keep the vast majority of this site paywall free. Many thanks to those kindred ludds who already do chip in, you are the greatest. Okay, thanks everyone, and onward.

BLOOD IN THE MACHINE
The book is called “Why we fear AI.” So why do we fear AI?
Hagen
The book grew partly out of Ingeborg and me talking about all these crazy narratives around, like, oh, AI is going to destroy the world. AI is going to take over. And so we said, well, there’s a lot of debunking of these stories out there. A lot of people are very clear and concise about saying “this is bullshit.” But to us, there was a secondary question in the background, which is: well, sure, the Matrix is not around the corner, but there’s something about these stories that resonates with people.
There are different ways in which these kinds of stories can resonate, right? So take the story about AI taking over the world and controlling everything. There’s an Amazon warehouse worker whose life is literally right now being controlled by an AI classification system, right? There’s a material reality in which AI really is a tool of control.
But maybe there’s also another way of thinking about it, maybe even for the Sam Altmans and Mark Zuckerbergs of the world. Maybe they feel like—the whole ruling class feels like—they can’t really do anything about things like climate change. Naomi Klein recently called this “end times fascism.” They don’t want to let go of their power, but they know that what they’re doing is literally making the planet uninhabitable. So maybe to them, that’s a totally different way of saying, oh, technology is taking over and maybe gonna kill everyone—it’s about a totally different thing.
So we set out to analyze the material facts on a class basis, and then to make sense of the stories and the material facts in order to figure out what we can do politically to maybe not have the world burn down. Because I think it’s good for people to exist, personally. Controversial opinion these days, I know, but I’d rather have humanity.
BITM
So, let’s drill into this a little bit. These are AI companies that are at root just profit-seeking firms like any others. They have stakeholders and shareholders and competitors, and they are all offering their various iterations of the mass-automating software that’s going to replace all human labor or do this and that, and they feel certain pressures to find novel ways to up the ante or chime in.
Hagen
There are all these bizarre promises that AI is going to, like, automate all human labor. I think most people who have looked critically at these technologies have been like, that’s not what’s going to happen.
But I think there’s a larger narrative that we’re all swimming in, which is to think about technology through the lens of productivity increases. And, you know, nobody has ever said that the bad thing about capitalism is that it’s bad at increasing productivity, right? Clearly that’s a thing that capitalism is good at. But there’s a second aspect to technology that kind of always gets swept under the rug in the capitalist development of technology.
And that is a lot of technologies are developed in order to increase the control of management over workplaces and in order to de-skill people. And by de-skilling, I don’t mean that people become less skilled, but that a workplace is transformed in a way that allows a company to pay people who previously were skilled workers as unskilled workers, right? And you have written a lot about this.
Like Blood in the Machine, it’s full of this stuff. Those weavers still know how to weave. They’re not less skilled, but the factory system allows certain people to out-compete the previous weavers with a shitty product that’s really cheap, and where they can hire people who, rather than needing three years of training, or however long it may have taken a person to become a skilled weaver back then, take three weeks to train as a hand in the factory, right?
And I think we need to think about the development of AI technology in that kind of context. What kind of effect is AI going to have? It’s not going to replace everyone. But if we think of language models as the industrial production of language, the same way that the factories of the 1800s were the industrial production of cloth, I think we get a much clearer picture of what is going on.
We should think about the AI as a wage depression tool rather than a productivity increasing tool
And we get a clearer picture that this isn’t primarily, this isn’t just, about productivity. Maybe some of it is, maybe. A lot of studies are saying the productivity increases aren’t coming. But I think what we still see, and that is also very clear from your work—you have all that work on how AI is coming for people’s jobs—is that people’s jobs are getting to be more shit, right?
Like translators. It’s not that we’ve gotten rid of translators. It’s that we’ve made a machine that can produce a kind of shit translation. Not so shit that it’s not useful, but not up to the standards that a translator would expect, and really cheap to produce now. The translator now has to compete with this kind of thing. And even the translators who then have to fix the AI translation are now much more like gig workers, because they have to compete with this thing. The supply of the shit version of the thing is so high that it just depresses prices overall and depresses wages.
If we think about the AI as a wage depression tool rather than a productivity increasing tool, then I think the idea that we should get our hopes up [for the AI boom to end] because all these studies say productivity isn’t actually increasing is a bit premature.
AI Killed My Job: Translators
In July 2025, Microsoft researchers published a study that aimed to quantify the “AI applicability” of various occupations. In other words, it was an attempt to calculate which jobs generative AI could do best. At the very top of the list: Translators and interpreters.
BITM
Yeah. And then this also feeds into your argument about why we’re ultimately afraid of AI in general, because it can be understood as an omnidirectional and omnipresent vessel for de-skilling.
Hagen
I think there’s a sense in which people who work with language had been working in areas where industrialization typically hadn’t happened to the same degree. If you want to put it in classical terms: proletarianization. In a sense, that’s what this push is for, right?
BITM
The proletarianization of everything, that’s the aim.
Hagen
Yeah. And we’ll see how far this goes, right? Like, my expectation is that generally we’ll see a kind of bifurcation, you know, where the higher-quality class of goods produced will be more expensive.
Think about lawyers. If you’re doing corporate mergers, you’re not going to replace your paralegals with an AI, because every mistake could cost millions of dollars. But if you’re a public defender, you’re like: actually, I can use an AI. I’m only going to win 80% of the number of cases that I used to win, but I can do twice as many cases, because the AI is apparently so much cheaper and so much faster.
That’s really bad for the people who need a public defender, precisely the people who are very often going with the cheapest option that there is, because half this country is living paycheck to paycheck. People don’t have savings. When emergencies come up, they’re scrounging. So there’s a sense in which it will put the expensive stuff increasingly out of reach.
And we’ve seen this in other domains, like furniture or shoes or fast fashion. You read a 19th century novel, and people have three pairs of shoes over the course of their life. Somehow all these people had shoes that were made by a cobbler that would last 15 years. And it just seems so wildly mind-boggling.
BITM
The Luddites made high quality cloth garments that were designed to last. You owned a couple of them, and you wore them all the time, and they lasted forever.
Hagen
The idea that you could inherit just normal clothes seems so wildly bizarre, out of all proportion to what’s imaginable for us today. Similar with furniture. We have all this beautiful turn-of-the-century furniture from 1900 that is still amazing, and that is very expensive now.
I love a lot of IKEA stuff. A lot of IKEA stuff is great. But I don’t expect my IKEA stuff to be handed down, or stuff that people might still want to use in 100 years. I barely expect it to survive one or two moves.
AI is an attack from above on wages
So there’s that sense that the middle quality is what’s getting attacked most by AI, this thing that is partly productivity increases but largely de-skilling, and that attacks quality. It makes products that are so much cheaper that they out-compete not on quality, but on price. I think that’s the expectation we should be operating under with AI.
And we should see that as an attack from above on wages. And I think that’s really important in a sense, because, you know, it’s a horrible thing.
I don’t have any control over what these companies do. What do we have control over? I think what we have control over is that as workers, we can collectively organize against this. And the first step to doing that is, of course, to realize that maybe we have shared interests, right?
That is, before you can form a union, you have to realize that actually, together, you might be able to do something about your interests in a way that you can’t individually.
And so historically, I think there’s a good case to be made about precisely the people these technologies are coming for. If you’re working in a medical facility and your job is evaluating medical scans, that’s a job that’s getting de-skilled. If you’re, like, a paralegal who might want to try to become a lawyer, that is getting de-skilled. Everybody who’s in teaching knows that these things are coming in and enshittifying those domains of work.
There’s a real sense in which those were often the relatively privileged kind of jobs that were very much also the bulwark of the system of exploitation that we live in, right? Lawyers are not the kind of guys who typically form unions, right? They’re the kinds of guys who typically form small companies.
So there’s a sense in which maybe we can get a lot of people invested in joining the labor movement. Workers that so far were essentially structurally hostile to the labor movement. Those are the kinds of questions that I think are really important to think about.
BITM
I’ve had this sense as I’ve been reporting on AI and work. I mean, when the Writers Guild of America and the Screen Actors Guild went on strike, it was a big surprise to me how much solidarity it generated. I’m old enough to remember the last time the screenwriters had a major strike, and it was very different. It was framed everywhere as: look at the coastal elites and their made-up problems. There was a large-scale effort to write off those concerns. This time people were like, “yeah—this could happen to me.”
That’s to say that this time, with this fear of AI, as you put it, in the mix, a lot of people were able to see or share in that concern. Not everybody recognizes it as this mass deskilling or this mass attack on wages, but they have a sense, even from AI’s cultural positioning, from how we’re taught to vaguely understand AI as something that can replace us to some degree. The knee-jerk reaction was to side with the writers and the creators, in a way that felt novel.
Hagen
And I think that it’s also true.
BITM
Just last week, I was speaking to a group of designers. Product designers have historically had, over the last few decades, pretty secure and well-paying jobs. Well, guess what’s happening right now. You will be shocked to learn that the same thing that you’re describing—a deskilling, a precarization, an attack on their wages—is applying to them too.
And a couple of them came up to me after this talk and said, “you know, design has never been an industry that has ever really, like, entertained the idea of forming unions, but I’m kind of thinking that there’s more talk, and more people are interested in doing so now.”
Hagen
Yeah, that’s exactly, I think, the thing we should be pushing for, right?
I think sometimes there’s a knee-jerk reaction to just—you see that there’s so much bullshit being produced, and so you kind of want to say, it’s hype. You want to focus on the anti-hype thing. But I think the most important thing is exactly making these connections: getting people to realize that there are collective interests at stake, and that those collective interests are set against the people who are using this to make money. This is your bosses. This is the big tech companies. This is venture capital investors. That there are dividing lines, right? And we want to be sure that we draw the line politically in the right way. Sometimes I worry that when the focus is a little too much on the scam… hype… whatever nature of it, you might accidentally draw the line in a way that suggests your boss is also getting scammed, right? But your boss knows what they’re doing. They know why they’re trying to make money, right?
So drawing these political lines, right? And, yeah, increasing the sense of solidarity. That’s really why we wrote the book: there’s, in a sense, something about this moment that makes our shared humanity and our shared interests as workers of all stripes clear.
I think there’s something really clarifying about this moment precisely because this particular technology is coming for so many of us, and for so many of us who have up until this point said, “yeah, maybe there’s a lot of injustice going on in the system, but me personally? I have a set of skills that sell pretty well on the market, I know how to make a career out of my thing, I’m okay.” So now a lot of us who were in that position are getting drawn in and realizing: no, actually, the impoverishment that has always been the underbelly of capitalism—maybe not for your folks, maybe it was more in the global south, and the distribution of where the most severe injustice lands is clearly changing historically—now ends up reaching us too, and our shared humanity depends on this.
And maybe even connecting that to climate change, right? I do think these apocalyptic stories always carry an echo: just as in the 50s, 60s, and 70s they always had a nuclear war echo to them, now they always have a climate change echo to them. And again, that is clearly a market failure. You can be the biggest fan of Milton Friedman that you want, but the damage to the environment is not getting priced in by the market system. Doing something about it requires a kind of global collective action. And that’s why people like Peter Thiel are basically saying, oh, Greta Thunberg is the Antichrist, you know?
BITM
So I wanted to go back and pick at the previous point, because with AI, with its enormous capitalization and cultural status, it can be hard to cut through that mystique. Which is why we need the hype debunkers. But it can be a challenge to succinctly articulate what you’re getting at. I’ve interviewed at this point probably hundreds of workers, who often have horrifying stories in which AI has been used to deskill them or has been cited as a reason for outright layoffs—and it’s hard to separate the mystique of AI from the dull truth on the ground, which is that a boss is trying to save on labor costs.
Hagen
It does. We do have some work to do. I do feel like the de-skilling argument really, really resonates with people.
BITM
I think we need a better term though.
Hagen
I agree. It’s a shit word. I’ve been playing with things like calling it “class war through enshittification.” One thing that I found useful, and that I found resonated with people who have come to my book events, is the sense that it’s simultaneously a way of attacking your wages and the quality of your work. Because it’s not just coming for your money. In a sense, people are like, yeah, that’s normal. The boss wants to pay you as little as possible, you want to earn as much as possible, and there’s some market-based negotiation happening. But it’s also coming for the quality of the thing you’re doing.
BITM
And your ability to derive satisfaction from doing that work.
Hagen
Exactly. So that adds insult to injury.
BITM
I still think we need a word. We need a term.
Hagen
I agree we need a word.
BITM
Let’s workshop it.
Hagen
We’ll workshop it.
BITM
There are not a lot of cognitive scientists I know of who spend their time probing the political economy of AI—how do you think your background there has helped sort of inform this broader work?
Hagen
That’s a really good question. To me they’re two very different worlds and I’ve just happened to have occupied both of them. So my PhD is in linguistics as a cognitive science. So I was really interested in how grammar works in the mind. And then AI kind of came crashing into that space.
But I had also been interested in political organizing for a long time. And to me, those were always two completely separate worlds. In fact, personally, I was interested in cognitive science partly because I was like, “all this linguistic stuff has no application. It’s pure science. It won’t be turned into a tool against the working class or into a weapon or something.” And then this stuff happens. And I’m like, “well, fuck, that was a miscalculation on my part.”
Certainly, I had that sense of what these things are actually doing and what they’re being sold to do are not the same thing. But again, for me, that discrepancy should be used to enlighten something, to make something clear that is otherwise unclear, right? Like the fact that technology is always, always, always about class power and not just about productivity.
BITM
Since we were talking about hype, and we’re clearly in some kind of a bubble—I mean, who knows, but more likely than not there’s going to be, um, sort of a burst or a deflation, or, god forbid, a full collapse—in that context, how are you thinking about the role of AI, decoupled from its peak hype powers? I ask because this is something that I think about a lot: AI is still going to be a tool that’s available to management, and it’s still going to have the capacity to deskill and to depress wages. How should we be thinking about this in the longer term, do you think?
Hagen
I think one really crucial thing to keep in mind about these things: it certainly looks like there’s going to be a bubble, that a lot of investors are going to lose a lot of money, and that it’s going to be bad for workers. But again, that is not really in the realm where we can do something about it, right?
If you’re a worker and AI is coming for your job and enshittifying it, the knowledge that it’s a bubble isn’t helping you, right? So that’s one way I think it’s always important to contextualize this: well, they’re gambling with our livelihoods, and a lot of them are going to lose their money. It’s going to make our lives shittier. But they’re going to keep doing it afterwards, right?
The dot-com bubble burst, and it still gave rise to the gigantic tech companies that we live with now. And one of the things that I think is crucial to understand there, and again, to push back against the kind of normal media discourse, is that markets are not a natural phenomenon, right? Markets are always artificial products. And we can see that there’s a lot of market-making going on right now, for example, in the military context. A while back, the US government asked Meta to remove the restriction in the open-source licensing of their Llama models that had previously forbidden military usage. So that restriction was cut, right? Meta also just got a $1 billion contract from the US government together with Anduril, Palmer Luckey’s company.
So there’s a lot of market-making going on there. And these companies that are investing in these AI things, they’re very, very skilled at figuring out how to produce something that is more like infrastructure. Peter Thiel is always very explicit about this: The way you build a giant company is by creating an artificial monopoly.
BITM
Yeah, and that in turn is where a lot of the enshittification stuff feels so deeply related to this, right? This is about trying to create a kind of infrastructure.
Hagen
And I think that’s probably only going to become more entrenched. And this is going to be done together with government forces. It’s going to be done with the help of companies adopting these tools, not because it makes them money, but because these tools are excellent for labor discipline, for wage depression, and so on.
So in that sense, I feel like, there’s probably a bubble, but I think we should focus on where we can do something. And I think that’s building a collective sense of how this affects us, how this is enshittifying our jobs, the quality of the things we make, our dignity at work. Our pride in what we do.
And that’s why I think the artists, whether it’s the actors or the visual artists as we see right now, they’re such a canary in the coal mine. And the fact that there’s broad solidarity with them is a really good sign.
Artists are losing work, wages, and hope as bosses and clients embrace AI
After the launch of ChatGPT sparked the generative AI boom in Silicon Valley in late 2022, it was mere months before OpenAI turned to selling the software as an automation product for businesses. (It was first called Enterprise, then Team.) And it wasn’t long after that before it became clear which jobs managers were likeliest to automate...
BITM
Right.
Hagen
One of the stories we have in the book is about that Apple Crush ad. There was that ad where they had all these instruments and drawing materials, et cetera, all these art-related things, on the hydraulic press, and then the hydraulic press crushes it all. The piano keys shudder, the trumpet gets crushed, the paint gushes out, and at the end of the hydraulic press, there’s the iPad.
BITM
And people hated it. They had to apologize for it.
Hagen
Yeah, they apologized, for an ad. So there’s this sense of, “yes, this is actually about transferring knowledge and skills into a tool so that you can pay people who use the tool less.” This is happening over and over again in capitalism. And we’re going to dismantle by force all of these things that you love and that create beautiful art.
It’s such a useful metaphor that they handed us. It was so beautifully clear that this is a way of bulldozing precisely those aspects of human creative activity, which is what labor should be. Labor should be: you’re changing something in the world in accordance with your will. And that’s what it means to be alive as a human. It should be a good thing. But unfortunately, so much labor under capitalism is done under circumstances where it’s not like that at all.
But this just made it so obvious that this is really an attack not just on wages, but even on the ability to take pride in one’s job, right? So there’s that activation of a sense of dignity, and maybe we should really lean into that, and also into thinking about: what is an alternative?
Like, why do we live in a society where technology is developed as a tool to make it so that people have less control over their labor? Technology should be developed in such a way that doing your work is more pleasant. But the interests of the people who pay for the technology, the companies, the bosses, are very often hostile to the interests of the workers, because the workers may want to do things differently. You want your work to be comfortable and interesting and maybe social, in a good way, but the company wants to exert control. There are many, many levels of hostility there.
So pointing these things out, and asking: couldn’t we develop technology in a way that serves the human interest in having labor be a good part of life? I am not one of those people who are like, “work sucks.” Work should be great. People love doing meaningful things with their time, and people like producing things.
I love technology. One of the reasons why I got interested in these language models was because I was like, “What the hell is happening? These are really fascinating, interesting tools. What can we learn about how language works?” Just like the Luddites, as you always point out—the Luddites weren’t opposed to technology. They were opposed to technology as a tool for crushing the working class.