In my information literacy class last night, we had a guest speaker who has previously written about AI and librarianship, and we spent about half of the class discussing AI literacy. Of the thirty-odd students in the class, the biggest concern raised was the environmental impact, followed by the impact on labor and privacy concerns.
The vast majority of my classmates already work in the field of librarianship, with the rest of us either switching into the field or working in jobs where a master's degree in library and information science would be beneficial. I know it's a small sample size in a hyper-specific industry, but I didn't see the environmental impact listed on the slides you shared, and it makes me wonder where it would've been ranked if it had been included in the survey.
When asked to identify as an enthusiast, dabbler, skeptic, or luddite with regard to generative AI, the vast majority of students who responded chose skeptic, with the remaining responses forming something like a bell curve around that peak. I chose dabbler, skeptic, and luddite, but clarified that I meant luddite in the labor sense of the term, and recommended this blog.
Based on the chat occurring during the discussion (it's an online course, but very participatory since it's synchronous), most of my fellow students recognize that generative AI could pose an existential threat to our occupation. I mentioned how the political science/international relations professor Paul Musgrave has stated that the "recommendations about what to read in the scholarly literature" provided by Claude were "on par with any recommendation I’ve ever gotten from an organic research librarian," which may cause some university administrators to (mistakenly) think they can replace us with an LLM.
However, I think it's worth noting that as research librarians, experienced academics are only one small part of the community we serve, and providing research recommendations is only one small part of our interactions with them. I suppose I should clarify that I'm not actively working at an academic library yet, just applying for these positions (which is kind of terrifying given the tumult in the field of higher education that trumpism is causing). Nevertheless, the value I hope to provide in the role of research librarian goes far beyond just providing scholarly literature recommendations, but that's something I'll expand on in a more appropriate space than this comment/note.
The problem, as I think many workers realize, is that employers in many industries beyond just higher education or public service are going to be emboldened by the actions of Musk, DOGE, and Trump to try to reshape employment in their sectors around LLMs and generative AI. What I don't think employers realize, but many workers might (even if only subconsciously), is that beyond the brittle nature of generative AI, market concentration, rising inequality, private equity, and enshittification have all come together into something like an economic bubble. When that bubble pops, which I think will happen when trumpism's in-flight disassembly of the modern administrative state takes a needle to it, the outcome has the potential to look far more like France in 1789 than the techlash.
Sorry for such a long comment; the tl;dr is essentially that I'd be curious to see how workers would have ranked the environmental impact had it been an option in the survey of their concerns.
To add a technical note (very much FWIW), my library recently introduced a pilot AI search engine for its resources (online and not). From my testing, results are *in general* reasonable, but there'll often be one reference that's wildly off (a query about the causes of WW1 returned an edited volume about the origins of WW2). I have to say, however, that so far all references in the results have been actual papers, as opposed to hallucinations.
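If anyone wants to run a similar spot check on their own pilot, here's a rough sketch of what I mean; `catalog_lookup` is a hypothetical stand-in for whatever query your discovery layer or ILS actually exposes:

```python
# Rough sketch of a spot check on an AI search pilot's reference lists.
# `catalog_lookup` is a hypothetical stand-in for a real catalog query.

def catalog_lookup(title: str) -> bool:
    """Pretend catalog query; swap in a real discovery-layer API call."""
    known_titles = {"The Origins of the Second World War"}  # toy data
    return title in known_titles

def check_references(titles: list[str]) -> dict[str, bool]:
    """Map each AI-returned title to whether the catalog actually holds it."""
    return {t: catalog_lookup(t) for t in titles}

results = check_references([
    "The Origins of the Second World War",  # real, but off-topic for a WW1 query
    "Entirely Fabricated Monograph",        # a hallucination would show up here
])
for title, found in results.items():
    print(("FOUND   " if found else "MISSING ") + title)
```

Of course, this only catches fabricated citations; the WW1/WW2 mix-up above would pass the check, so topical relevance still needs a human eye.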
To paraphrase Cory Doctorow, we can't just focus on what technology does, but who it does it for, and who it does it to.
A major shift is underway in the workforce, and it's one that most policymakers and business leaders either failed to anticipate or actively ignored. The transition from Software-as-a-Service (SaaS) to Employee-as-a-Service (EaaS) is already well in motion. By the end of 2025, entire sectors will find themselves in economic collapse as AI-driven agentic systems begin competing directly with human payrolls.
The issue is not just automation but economic thresholds. Every industry and region has tipping points where human labor will simply become unviable due to cost. AI isn't just a tool; it is being structured as an economic agent that will make decisions based on efficiency and cost reduction. This means that where an AI system can replace workers at a lower long-term cost, it will—especially as executive decision-making remains locked in a shareholder-primacy model.
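As a toy illustration of that tipping-point logic (every number below is invented for the example, not drawn from any real deployment), the replacement decision reduces to a simple cost comparison:

```python
# Toy model of the labor-replacement threshold described above.
# All figures are invented purely for illustration.

def annual_cost_human(salary: float, overhead_rate: float = 0.3) -> float:
    """Fully loaded yearly cost of one employee (salary plus benefits/overhead)."""
    return salary * (1 + overhead_rate)

def annual_cost_ai(per_task: float, tasks_per_year: int, platform_fee: float) -> float:
    """Yearly cost of an AI system handling the same task volume."""
    return per_task * tasks_per_year + platform_fee

human = annual_cost_human(salary=60_000)                      # $78,000 loaded
ai = annual_cost_ai(per_task=0.50, tasks_per_year=50_000,
                    platform_fee=20_000)                      # $45,000
print(f"human: ${human:,.0f}, ai: ${ai:,.0f}")
print("replace" if ai < human else "keep human")
```

Under shareholder primacy, that final comparison is the whole decision procedure, which is exactly the problem.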
The article highlights workers’ skepticism toward AI's supposed benefits, and they are right to be wary. AI is not being designed to empower workers but to serve corporate interests. The major players—OpenAI, Microsoft, Anthropic—are embedding AI into enterprise systems to streamline operations, monitor employees, and ultimately reduce dependence on human labor. The belief that AI will augment jobs rather than eliminate them is a comforting myth, but the trajectory of AI investment and deployment tells a different story.
Preventing this collapse required urgent political and economic adaptation, last year at the latest. Governments should have been preparing for structural shifts in employment, crafting policy frameworks to protect workers from displacement, and investing in alternative economic models that don't assume near-universal wage employment.
Honestly, I'm far more concerned about the economic impact of the AI crash that's coming, because AI is ruinously expensive and no one is turning a profit off it. OpenAI is doing the best, and they spent $9B to make $4B last year, and it's only getting worse. Their $200/month "membership" *loses them money*! AI is nowhere near being able to replace workers from a reliability or economic standpoint, and when Wall St. catches on to this, tech stocks are going to collapse, as every single one of the Big 7 has sunk tens of billions in capex into AI.
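To put rough numbers on that claim (taking the figures above at face value; I haven't verified them):

```python
# Back-of-envelope math on the figures cited above (unverified).
spend = 9e9      # reported annual costs
revenue = 4e9    # reported annual revenue
loss = spend - revenue
print(f"loss: ${loss / 1e9:.0f}B")                        # $5B
print(f"spent per $1 of revenue: ${spend / revenue:.2f}")  # about $2.25
```

Spending $2.25 to earn every dollar is the kind of ratio that only works while investors keep writing checks.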
The trajectory is already obvious—costs are dropping, efficiency is rising, and the economic incentives are locked in. AI isn't profitable yet in all use-cases, but its trajectory is like every other transformative technology: initial high costs that rapidly decrease as efficiencies scale. The mistake is assuming current numbers tell the full story, rather than recognizing the trendline.
The collapse people are waiting to see "in real time" will play out across business quarters, not decades. The moment AI reaches certain economic thresholds, the shift will be insurmountable, and anyone still expecting the "AI hype collapse" will be in for a brutal wake-up call. Gary Marcus and others holding onto that narrative are about to see their position become untenable—because the thing about cognitive biases is that reality eventually catches up, and the blind spots of this era will be studied for decades to come.
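For what it's worth, that "quarters, not decades" claim can be stated as a toy trendline: assuming AI cost per task falls a fixed fraction each quarter while human cost stays flat (all values below invented for illustration), the crossover comes fast:

```python
# Toy trendline: quarters until a declining AI cost undercuts a flat human cost.
# Starting values and the decline rate are assumptions for illustration only.
ai_cost = 100.0     # AI cost per task today (arbitrary units)
human_cost = 40.0   # flat human cost per task
decline = 0.15      # assumed 15% cost drop per quarter

quarter = 0
while ai_cost >= human_cost:
    ai_cost *= (1 - decline)
    quarter += 1
print(f"AI undercuts human cost after {quarter} quarters")  # 6 quarters here
```

The whole argument hinges on whether that decline rate holds, which is exactly where the two sides of this thread disagree.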
A pretty good study. Thanks for sharing those slides and talking about it in this post.
Considering union and non-union views were pretty similar, I didn't understand why they kept visualizing them separately. It just muddled their message and, for me at least, took away from the actual concerns.
I was just saying that I have been getting emails on my work account for AI software that could potentially replace me, that could take my job as a construction takeoff estimator. To prevent that, I am trying to get into the mindset of learning how to use the AI software to do my job and be more efficient as an estimator. I'm not sure what the timeframe is for this, but I'm also looking for a second job just in case, not so much out of fear of AI taking my job as out of geopolitical uncertainty.
The point of this article is that any benefits from "learning how to use the AI software" and becoming "more efficient", if there are benefits at all, will be captured solely by management to squeeze you. Efficiency is a vague term used by management and fascists (see DOGE) to justify their abuses of power. I don't believe adaptation is the answer, since that is one of the lines touted by those selling this software; you have to fight for your value. You're the one who does the work. You have value that supersedes software.
"You're the one who does the work. You have value that supersedes software"
Exactly this. Well put.
Also, looking at your YouTube channel, you seem to be one of those insidious individuals who always comment on articles skeptical of AI and purposefully miss the point in order to subtly advocate for its use. You do not care about workers' rights. You do not care about the rights of individuals to their creative labor and work, which are so fundamental to the formation of professions.
You are a shill. We see what you are.
Like share and subscribe 😜