"If AI is writing the work and AI is reading the work, do we even need to be there at all?" Education workers reveal a growing crisis on campus and off
AI Killed My Job: Educators.
Few spheres of public life have been more rapidly and thoroughly transformed by generative AI products than education, and few professions have been more dramatically upended than teachers and education workers. There’s a case to be made that the first major social transformation of the modern AI era was the mass diffusion of ChatGPT into classrooms, where students took to using it as an easy implement for cheating on homework.
This mass plagiarism crisis has only deepened and grown more complicated since, leaving educators, administrators, and students to grapple with how to construct, enact, and enforce AI policies at school. I should know: My partner is a professor at a university, and dealing with students who use AI to cheat on assignments has become a core part of her job, and an endless source of frustration.
But cheating on coursework is only the tip of the iceberg. Universities have signed huge contracts with AI companies, which have been driving hard into the space, while K-12 public schools have adopted AI tools, sometimes disastrously. (Los Angeles Unified School District superintendent Alberto Carvalho’s home was raided as part of an FBI investigation into a multimillion-dollar deal with an educational chatbot developer that failed within months.) Such deals, like the California State University system’s $17 million partnership with OpenAI, or Ohio State’s mandate that all students learn AI fluency, are top-down initiatives that have left many educators working in the classroom on the back foot. Teachers and students alike are being encouraged from all angles to adopt AI products, setting up new arenas for tension and conflict, and posing serious questions about the future of instruction.

AI has indeed flooded the profession, impacting teaching, administrative work, counseling, testing, and beyond. And it’s already had serious ramifications for labor; as in many other trades, education jobs are being deskilled, degraded, and even lost outright as clients and bosses embrace AI systems. Librarians and tutors are watching as administrators and edtech companies embrace AI tools as a means of cutting their work hours. IT and HR professionals in the education space are competing with AI products on the market and speeding up work to match their output. Educators of every stripe worry that quality instruction and critical thinking skills are taking serious hits as AI provides an easy, if frequently incorrect, route to an answer. And ominously, critical AI programs are being cut just as universities turn to embrace chatbots. One instructor at the University of California, Irvine, Ricky Crano, wrote in to share his story of being laid off from a job organizing a series of seminars that examined the tech industry—just around the time the school was promoting its proprietary chatbot, ZotGPT.
Some educators are fighting back: The American Association of University Professors, a union representing academic workers, for instance, has called for faculty control over all AI decisions as a matter of policy, and AI has become a battleground in contract negotiations and campus life. Graduate student unions, librarians, and activists are organizing against administrations that have rushed to deploy AI.
I’ve heard stories like these, and many more. Last year, 404 Media ran a great roundup of stories from teachers about how they’re struggling with AI in the classroom. So with this, the fifth installment of AI Killed My Job, we’ll hear not just from teachers, lecturers, and instructors, but from education workers across the field—private tutors, student athlete coaches, librarians, HR employees, essay graders, and edtech workers—who have all had their jobs transformed by AI. These stories, some of which may sound familiar a few years into the AI boom, and many of which will not, help paint a fuller picture of just how the technology has already impacted some of our most crucial institutions.
If your job has been impacted by AI, and you would like to share your story as part of this project, please do so at AIkilledmyjob@pm.me. The next installments will aim to cover healthcare, journalism, and retail and service jobs.
Before we proceed, I want to share word of a project that might be of interest to readers. The Fund for Guaranteed Income is a nonprofit that researches and enacts, you guessed it, guaranteed income projects, and they’re working on “a support program for workers whose jobs or income have been affected by AI, designed with direct input from impacted workers,” FGI head Nick Salazar tells me. He asked if I might extend a call for participants with readers, and I’m happy to do so:
If AI has changed your work, you can share your story anonymously at aicommonsproject.org. Submissions directly shape what the program will look like, and anyone who shares will be first to know when it launches.
And now, AI Killed My Job: Educators.
This story was edited by Joanne McNeil.
I don’t have any reason to believe my employer will not replace me totally with AI
Tutor at a community college
I work at a community college as a tutor to students in ESL, English, and—more broadly—writing for other courses, whether that’s a major midyear essay for another class or preparation for high school equivalency tests. My hours were halved after Trump DOE cuts, so like many other workers I was already in a precarious position.
It’s a fairly low-pay job that takes a lot of emotional labor, especially considering many of our students have not only been failed by the education system but are often literally hungry, might not have heat or AC, are refugees from war-torn countries, and/or are facing a constellation of other life challenges that make it very difficult to succeed. Many of my students survive on gig work and/or Amazon warehouse jobs. It didn’t surprise me when some of these first-year English students started using AI for their papers.
One particularly memorable student brought in her paper followed by the AI version written out paragraph by paragraph in her notebook. She wanted me to help her merge them. ChatGPT had made a reference to a movie anecdote that doesn’t exist. This was a personal essay. This was early on—when the AI was so shitty, I sort of believed I could convince her of the value of her own story and voice—which I attempted to do for an hour. We really bonded, it felt like a win for the day, and then she had to go work at Amazon.
A few weeks later, she brought in an AI outline that made no sense and that she could not explain, which made brainstorming for her paper impossible. Her professor had given her an A [on the outline]. Shortly after, she brought in an AI-written paper that also made no sense and pulled from real sources, but ones that were not reputable and with references to quotes that did not exist. Other students had also begun to bring in oddly perfect personal essays.
The focus was frequently how to personalize them, to try to inject a tiny bit of who they are into the bland product AI had spat out. Sometimes I just couldn’t tell whether it was their own work or not.
I cannot express enough how important the human connection is at my job: these are people who rarely get the support they need and deserve.
That’s really at the core of the issue for me: This was one of my favorite jobs because I felt like I was doing a good thing, I saw students make wildly awesome improvements as writers over the months, and we built real relationships. I cannot express enough how important the human connection is at my job: these are people who rarely get the support they need and deserve. I’ve had students break down crying, I’ve talked one off the ledge. I spent a year decoding the insanity that is a green card application for another. One of the biggest barriers to success where I work is just signing in—basic tech literacy. If/when we switch to AI tutors, the lack of accessibility is one of many issues that is invisible to people who don’t do our job every day.
In the past when my students left, I had faith that they could succeed, that they’d really learned something—namely how to think critically and value their own story. We were able to work together because there was a basic foundation of trust. That trust is gone now, replaced by suspicion and frustration. It feels like my new job is helping students cheat better.
At our latest professional development training, we were told that the college was piloting Khanmigo, an AI “learning assistant” (lol) for math. We were told to write down all of our fears about AI, or share them in the chat, and then push them aside. The fear of job loss came up the most, followed by loss of critical thinking skills, privacy issues, feeding a machine that steals our ideas and churns out mediocrity… Many tutors went out of their way to include links to sources like MIT. They also pointed out that AI had yet to improve productivity or turn a profit. Our supervisors literally did not address any of this. The message was clear: *AI is here to stay and we have to adapt.*
We were told not to question whether students are using AI but, in fact, to assume they are, and to tutor them on how to better use it. The “use cases” my supervisor included had students choosing between different AI rewrites of passages, picking whichever one is better and explaining why. We’re supposed to encourage them to think critically about what AI spits out. We’re also supposed to pretend that this type of tutoring makes any sense when students can just ask for suggestions, click “apply all,” and get on with it—or, as we know many do—just drop the assignment prompt into AI, mix it up in a few different models, and then ask it to dumb itself down a bit to sound more like them.
I don’t have any reason to believe my employer will not replace me totally with AI even though my supervisor insists they won’t. I know a machine can technically do my job and that AI is already making my job obsolete considering students don’t have to write anymore. My college has chosen to hire numerous adjuncts part-time, limiting how many full-timers they have, and I am one of them. I ultimately ended up taking fewer hours than they offered and contacting some old freelance writing clients to spend more time away from there. It feels like rejecting them before they reject me. My supervisor is giving us the “option” to lead AI workshops and said to think about it. I know the right answer is to say yes. I won’t.
—Lauren Krouse
[We followed up with Krouse before publication. She told us she had quit.]
We’re expected to accept work that is clearly not the student’s as if it were
Professor
I’m a professor in the California State University system, which was recently profiled due to its stated desire to be the “largest AI-driven university in the world.” I want to talk about academic misconduct.
Academic misconduct is when students pass off work that they haven’t done as their own. It used to be “when students pass off other people’s work as their own,” but now, students simply plug exam questions into a chatbot and copy and paste what it spits back to them. While academic misconduct has always been a problem on campuses, before AI, marshaling your evidence and presenting it to the student would trigger a confession, which could then become an opportunity to teach.
Now, one of two things happens: either the student confesses but doesn’t change their behavior, or they double down and insist that it’s their work despite the evidence you present. Students are convinced that the AI cannot be detected, and they refuse to listen when you show them the tells, insisting that it was their own work. I had a student make reference to three issues that were extremely relevant to the exam question but so far outside the realm of what we studied that the only way they would know to talk about them is through extensive self-study, which would have been reflected in their post-exam recollection. When the student had no idea what their own exam was talking about, but insisted that they had written the exam answer, I was left nonplussed.
I’m new to the CSU, so I haven’t had occasion yet to send a student over to the formal investigatory process. But in general, my experience at other institutions suggests that without a confession, administrators are loath to impose any penalties for academic misconduct - and the mere fact of referral means that the student will be on guard against less formal sanctions (and in fairness, it would not be inappropriate to call those “retaliation,” so arguably the student is correct). But this means that when a student refuses to take responsibility for their work, there are usually simply no consequences whatsoever - and we’re expected to accept work that is clearly not the student’s as if it were.
There’s also the problem of mixed messages. While we as faculty are free to ban the use of AI from our classes, the university system is sending multiple messages that these are good and useful tools for students. Students are given a subscription to a bespoke ChatGPT bot for the university, and there are constantly workshops and continuing education sessions about “how to use AI for [this thing we have to do].” Combining the administration’s aggressively pro-AI stance with the easy availability of tools means that faculty protests usually fall on deaf ears - even after we show students the complete and utter uselessness of the tools for the purposes they want.
And our students are, frankly, primed to be the targets of AI flimflam. The CSU isn’t an open-admissions university but it is an access institution. This means that we admit students who are at-risk of failing out of higher ed, either because of lack of preparation, lack of resources, or lack of bandwidth due to work or care obligations. This means that a lot of our students struggle with the basic kinds of tasks that we assign them. The promise of an automatic task completer is deeply attractive, and the background and training of our students doesn’t really equip most of them to adequately assess the claims of the AI pitch.
A lot of our students struggle with the basic kinds of tasks that we assign them. The promise of an automatic task completer is deeply attractive, and the background and training of our students doesn’t really equip most of them to adequately assess the claims of the AI pitch.
I haven’t been fired and replaced by AI; we have a strong and militant union that is aggressively pushing back against the use of AI to replace faculty. But bargaining is about concessions, and it’s not clear whether the administration will be willing to give ground on this issue given how strongly they’ve staked the institution’s future on it. The job has changed due to AI, and combined with all of the other assaults on American higher education, I don’t know if my career will remain on its current track long enough for me to get tenure.
—Anonymous
My university used WorkDay’s AI to streamline me out of a job
Adjunct professor and HR worker at a university
I worked in HR for a university, handling the paperwork for our adjunct professors. Contract hires. And I was an “HR Partner” in addition to my role as an office coordinator.
Anyway, I’m an adjunct myself. Or I was. I taught writing, one quarter per year. A one-credit class. Amounts to about $700. Before taxes.
I took a job at my alma mater, and when it faced financial turbulence, the university responded by laying off 40% of the full-time faculty. They told me I’d be extra busy now, what with all the new adjuncts coming in (cheap labor).
And I was.
The day that HR called me into the Dean’s office to lay me off, I asked them a simple question:
“Who is going to work with all of the adjuncts—the people I handle paperwork and onboarding for every quarter? The people I talk to every day, helping them sort out their classes, keys, syllabi, schedules, and miscellaneous concerns?”
“Oh-oh-oh, WorkDay will do that!” the HR rep told me with a smile. WorkDay is an AI platform for streamlining work. It certainly streamlined mine.
—Jason M. Thornberry
Refusing to use Copilot cost me my job
IT professional at a university
For the past 2 years I’d been working as an IT Professional II at TAMU AgriLife, an organization under Texas A&M University that conducts research and programs related to agriculture and life science. In November 2024, I was moved from the department where I’d worked as an IT professional since March 2023 to a new department under a new manager. This new manager made it clear from the start that he was obsessed with generative AI, telling me that his department was the place to be if I wanted to learn how to use gen AI. I, however, have never been a fan of generative AI, and had been doing my job well for 2 years without it. As soon as I arrived in this new department, this new manager tried to subtly push me into using ChatGPT, Copilot, and Grok to do my work of helping others with technology problems, but every time he asked, I politely declined.
Then, in late March 2025, I had an employee evaluation with this new manager. As soon as the evaluation begins he starts heavily trying to push me into using generative AI, saying it’s “the way of the future” and that “everyone who doesn’t use it will be left behind.” When I tell him I have no interest in using AI, he says that I “better start or I’ll have a hard time finding or keeping a job”. He then tells me he’s going to get me a Copilot license, and that I must take a training course to use Copilot. I tell him that even if I take a Copilot course, I won’t use it in my work and thus buying a Copilot license for me would be a waste of company money. The employee evaluation continues, and he keeps trying to pressure me into using AI, but I continue to decline.
Then, on the following Monday in early April, he sends me a message that he’s gotten me the Copilot license (despite me telling him not to) and tells me to pick a day to take Copilot training. I reiterate to him that I have no interest in using Copilot and try to continue doing my job, but he sends more messages to me in an attempt to try and make me take the Copilot lessons. That same day as I am working, one of my coworkers suddenly gets up from his desk and starts yelling excitedly to our manager about how he had just used generative AI to make a musical about “an impregnation ninja who fertilizes every woman on the planet.” My manager responds by laughing it off like it was nothing. As soon as I get the chance I have a private conversation with my manager to tell him that I have problems with my coworker using generative AI to make explicit musicals in the workplace, but this manager says that this coworker “has always been kind of a degenerate” and that “I’ve told him to stop, but the AI is a tool, so it’s up to him how to use it”...meaning this has happened before, and he’s done nothing about it.
The week continues until Friday, when, after asking me again if I’ve signed up for the Copilot training and hearing that I haven’t, this manager sends me an email saying that if I don’t take the Copilot training I will either be disciplined or potentially terminated. At this point I’ve had enough, so I call Human Resources and the boss of our IT company and report everything I just told you: that my manager is trying to force me to use generative AI but isn’t doing anything about another employee making explicit works during work hours. Unfortunately, both the boss and HR try to split this into 2 separate issues; the boss says that he will talk to my manager about allowing a worker to get away with making explicit works at the office, but that this manager can make me use AI if he wants, and if I don’t like it I should “choose between my morals or my job”; and HR says they will wait to take action until after the boss talks to my manager.
When I tell him I have no interest in using AI, he says that I “better start or I’ll have a hard time finding or keeping a job.”
The following Monday, during the second week of April, the boss of our company does have a private meeting with my manager, but I was never informed what was said at that meeting. I assume that the boss told my manager to back off from trying to force me to use AI, because this manager doesn’t mention the Copilot training courses at all for the next several weeks. Thus, I return to working my job and try to move past this whole fiasco.
Things settle down for the next few weeks, until April 23, 2025. On that day, this manager waits until everyone except for me and him are out of the office, and then suddenly asks if I took the Copilot training (after having not mentioned it in the past few weeks). When I say no, he tells me that the time for the Copilot training in April has already passed, and that because I missed the training I will have to either resign in 60 days or be terminated. I ask about taking the Copilot courses in May that were also being offered, but he doesn’t accept and reiterates that I have to choose resignation or termination. I decided to resign.
—Caleb Polansky
My students genuinely do not understand why they shouldn’t use AI
University Lecturer
I’m a lecturer in Psychology at a large private college in Dublin. At a recent meeting (Zoom—of course!) our Data Analytics and Reporting manager asked what we all thought about getting AI to mark our students’ work—the things that couldn’t already be turned into auto-marked online MCQs, like long-form essays. I pointed out that we are paid to mark the work we assign (it’s actually my least favourite part of the job, but that’s not the point) and asked if we could expect a reduction in pay as a result. I was told “We aren’t going to talk about that.” I should train the AIs to replace me as a marker but should not even be so bold as to wonder what effect that will have on my pay.
Obviously, students are using AI to write the assignments anyway. The idea that we can catch this kind of plagiarism effectively is pure fantasy. Increasingly my students genuinely do not understand why they should not use AI anyway... what is the point of ‘wasting’ days researching and writing an essay when the AI version will be as good or even better?
My question now is if AI is writing the work and AI is reading the work, do we even need to be there at all?
This whole profession never really recovered from PowerPoint; this is just the nail in the coffin.
—Anonymous
The majority of the students learn nothing
Private computer science tutor
When the pandemic hit in the Spring of 2020, it was a catastrophe for students suddenly forced into remote learning, as professors were blindsided and desperately improvising… Begrudging acceptance of online education was indisputably bad for learning, but it did create demand for online tutors, and that has been my job since Covid.
The majority of students are behind on an assignment, and just want answers that will get them a decent grade. As a good tutor, my job was to try to redirect that desire toward actual learning, not just do their work for them. Then came the large language models. I got to watch some lower-performing students use them, and it was deeply concerning. They would type (or copy-paste) a description of the code they wanted into a coding environment, then accept whatever completion emerged so long as it compiled. If it did not, they would try again, with no ability to understand or correct the generated code. No learning took place.
I have seen far fewer students, as the casual, lazy ones who just wanted a B-level answer can now often get one from a coding LLM.
This past year I have seen far fewer students, as the casual, lazy ones who just wanted a B-level answer can now often get one from a coding LLM, since homework problems are well-represented in the LLMs’ training sets. They don’t understand (or care) that the point of their assignments is not to create more solutions to homework problems, but to teach them fundamentals of programming and computer science, and there is no one there to gently correct them. The less mercenary students still look for tutoring, and they are more fun to work with, but they are a minority, and unfortunately there are not enough of them.
—Sean
My colleagues have abandoned their values to board the AI hype-train
Librarian at an R1 research university
It is pretty well understood in my department that we are opposed to AI in the research process, but as soon as we leave our office suite, we are met by many AI advocates. I actually signed up for this Substack after I had a disagreement with a professor while I was in the middle of teaching a session for one of their classes, and felt like I needed to learn more detailed information about the way AI works. I told students that ChatGPT is not a search engine, and while I was technically wrong in that it has web browsing capability, I was correct in that it is terrible. Unfortunately for me in that moment, I have integrity and can say things like “I don’t know that for sure, so I will cede that to you and look into it more.”
Pettiness aside:
I think the change is coming fast. Every day there are new conference presentations and papers on why librarians should be using AI and how they can do it. Academic librarians are guilty of always trying to “prove our worth” and get on board with every new trend regardless of whether or not we should. And in the case of AI we absolutely should not. It goes against the core values of the profession as stated by the American Library Association: “Access, Equity, Intellectual Freedom and Privacy, Public Good, and Sustainability.” It violates all of the tenets of the ACRL Frameworks for Information Literacy. It is shocking to me how fast some of my colleagues have abandoned their stated values to get on board the AI hype-train. I get a bitter taste in my mouth every time I think about the ones that were giving land acknowledgements (maybe still are) and now champion AI.
Academic librarians are guilty of always trying to “prove our worth” and get on board with every new trend regardless of whether or not we should. And in the case of AI we absolutely should not.
At my university AI is being pushed from the top down. Leadership has openly stated that workers who do not use AI will get left behind. If there is organized resistance on campus, I haven’t found it outside my department. I know that there are many in the profession who are opposed, however.
I am not sure if I can say specifically that it is changing the way that patrons are experiencing the library. I would not be surprised if it were, though. I do believe it is changing the way that patrons perform research and increasing the likelihood that they will be satisfied with “good enough” or even just “well, it’s something.” I have heard students defend the position that chatbots have access to 80% of the internet and are gaining more every day. I don’t know where this belief comes from, other than well-crafted propaganda.
But I do want my last point to be this: I don’t blame people, especially students, for using these tools when they don’t know better. We live in a hell world with increasingly limited time for ourselves. ChatGPT and LLMs like it claim to offer them some of that time back. The bots “talk” to them with a sense of sureness, as if the bot were their friend. It doesn’t offer critique of what the user does; it doesn’t challenge them. And while those things might feel comforting, they are cheating them of real learning.
Teaching is a relational process. Student and teacher should both learn from one another and with that comes friction. LLMs will do anything possible to eliminate that relational friction to maintain the comfort of users. So, what’s more appealing? The librarian telling you no, or the chatbot giving you all the “answers?”
—Anonymous
AI training programs are failing student athletes
Assistant strength and conditioning coach at an NCAA Division III university
I currently work as a part-time, hourly-wage, no-benefits assistant strength and conditioning (S&C) coach at an NCAA Division III university. My hiring as a part-time assistant already represents a reduction in staffing, at least partially due to AI use. I struggle to get even adequate part-time hours, which may result in the future elimination of the position or my inability to keep it. This is largely due to direct and indirect effects of AI use.
We use a virtual training platform to deliver strength training programs to the university’s hundreds of student-athletes. This offers several ostensible positives. Ideally, we write training programs in the app that students can access on their smartphones, which includes short gif exercise demonstrations and allows them to enter training data. Streamlining training program delivery can allow us to spend more time coaching and interacting more meaningfully with athletes versus writing the program in Excel, emailing/printing it, and spending most of our contact time reminding athletes how to read it and what the exercises are. The app guidance can help students complete the training on their own when they are away from our coaching for various unavoidable reasons.
The app also facilitates a lazier, labor-cutting approach with its range of semi-responsive AI training programs. Provide some demographic details, such as sport, primary competitive season, gender of athlete, beginner/advanced status, and any specific equipment exclusions, and the AI will generate as mediocre a training program as one would expect from such broad inputs and limited knowledge of the actual humans and environment. In the typical AI use case, this is better than literally nothing, and it can receive human-intelligence tweaks to go from mediocre to adequate. But there is no source information available as to how the AI designs the program: namely, what purported differences between sports, genders, beginner/advanced levels, etc. it is using to program. A capable human coach would be responsible for answering these questions. I’ve made some comparisons between close options, say male/female or baseball/softball or women’s/men’s lacrosse, and found either no differences or arbitrary changes that I would be unable to explain.
In reality, we do not use the time saved by AI programming to spend more time coaching and interacting with athletes and colleagues. We do not actually see all athletes or teams. Several teams do not participate in S&C at all. These sports often have team accounts set up on the AI program, but the teams don’t know about them or don’t use them. Even if they did, the programs are so clearly inadequate that I can’t in good faith recommend that they use them without modification. Some athletes do S&C on their own individually, while some sport coaches handle it themselves without us. We’ve ceded this ground as a staff rather than use the time saved from programming to develop relationships with coaches and athletes who don’t inherently engage with us. This especially includes athletes and teams who aren’t traditionally enthusiastic about S&C: more women’s teams, endurance or more “niche” sports, and sports with chronically poor win-loss records.
Even if following an app was a direct substitute for in-person participation with qualified staff, this reduces our role to “programmers” instead of teachers.
I build my own training programs in the app and use the app only as the delivery platform. This improves the quality of my training programs, because I’m writing them for the actual humans in front of me, in our actual shared environment, considering their actual sport and academic schedules, instead of the AI’s estimations of those key factors. I feel that my relationship with the athletes is better: we talk about the training, I take their feedback, we work with it together, and we see each other’s invested efforts. I try to communicate with sports coaches on our shared teams, to mixed results. Some appreciate the collaboration, and it has improved my work and deepened my relationship with the team. Others seem to wonder why I’m bothering them. A human-intelligence approach also increases my working hours so that I can actually get close to a full 20-hour week. If I followed the head coach’s example and only used prompt-and-tweak AI programming, I would have more like 5-10 hours of “floor time” (i.e., in the gym with athletes).
I have seen numerous instances of poor quality training due to our use of the AI programs. Here are a few significant examples:
The AI programs are automatically set up to change exercises every 4 weeks. One team changed exercises during the week of their conference championship semi-final game. Sore legs were had by all, as changing exercises is known to increase muscle soreness and the new exercises were more intense. They played the semi-final that weekend to a highly fatiguing overtime win and then lost in the final on the following weekend.
The AI calendar follows pre-established program pathways from one physical focus quality to the next (e.g., muscle size, strength, power). This resulted in one team doing a maximum-strength phase (heavy weights, slow speed, high fatigue) during the final weeks of their competitive season, unadjusted for their game schedule. Many athletes simply did not follow the program.
The AI only sets a single competitive season, so it’s immediately inappropriate to use for athletes who have two competitive seasons over a year. Some sports have a split season of both fall and spring competition, either equally weighted or with one slightly prioritized over the other. Athletes on at least one team did high-fatigue hypertrophy (muscle-gaining) training during their spring in-season phase, a period of faster pace, lighter bodyweight, and more readiness-dependent performance demands.
Athletes often no-show to sessions with the provided reason that they can do it on their own with the app. Coaches often cancel sessions for the same reason. Even if following an app was a direct substitute for in-person participation with qualified staff, this reduces our role to “programmers” instead of teachers. Of course, as a staff we aren’t even “programming” because the AI program is.
Cutting in-person time eliminates our ability to provide actual instruction of physical movement, develop relationships, and create a quality team training environment. We know that these factors are what actually improve training outcomes, create a rewarding athlete experience, and benefit life beyond immediate sport performance, not toiling away in isolation guided by an app. Session cancellations, reduced attendance, and low communication also reduce my enjoyment of the job, my working hours, and my income.
—Anonymous
AI is causing the most damage in student learning and skill development
Academic
I’m an academic. I work in the UK, but I spent a decade in higher education in the US and was tenured before moving across the Atlantic. I didn’t have a lot of AI to deal with before my current role; obviously, ChatGPT kind of started this trend.
Firstly, Meta AI has stolen every last thing I’ve ever published. Academic publishing doesn’t bring in a lot of money: my book royalties are 2-5%, and my articles bring in nothing other than prestige (whatever that is). Academic journals, I should add, are very expensive to access, as you may already know. I see nothing from that, of course; it’s essentially free labor that my institution sort of expects me to complete, but only in vague terms. Certainly if I don’t publish, I may “perish,” to borrow an academic turn of phrase.
In the classroom, student attendance is sporadic at best. In many cases this is because students no longer “have to” show up to class to pick up the material. It’s required to be posted online (again, this could be made available for AI training; the posted material technically belongs to the institution), and they don’t really need to learn anything to do their final papers. Here is where I think AI is causing the most damage: student learning and skill development.
I begin every new class with a whole spiel about learning how to do research and how to communicate research findings. I try to reason with them that they need these skills; simply using AI means they fail themselves even if they manage to pass the class. Their future boss is not going to pay them a salary to input prompts into an AI and email or print off the output. It never fully gets through. Now we see university leaders, clueless as to how to fight back against the deskilling that has further undermined the concept of higher learning, setting up degree programs in “such and such with AI.” Buzzword-loaded plans are shared institutionally without anyone ever asking why. Ironically, one area where AI might do a sufficiently mediocre job is university management, perhaps turning those meetings that could have been an email into actual emails.
It’s aggravating. I can’t even imagine how bad this is going to get before it gets better. If it does.
—Andrew
Bosses are rushing to use AI to implement “unstaffed opening hours” at public libraries and to deskill school librarians
I’m a library worker and union organizer, working at a public library service in Melbourne Australia. My job at the library is running tech help workshops, but on the side I organize my workplace and organize with other union activists across public libraries in my state.
I thought you might be interested in one of the specific applications of AI in libraries. While this hasn’t led to any significant job losses yet, I think it points to the future of the sector. There are twin technological threats currently facing public libraries in Australia and around the world, and both seek to replace (unionised) library workers. This is on top of a culture war on libraries and library workers, with fascist transphobes attacking public libraries for running drag storytime events or even just having queer books in the collection.
In Melbourne there is a rush by bosses to implement unstaffed opening hours at public libraries. While this hasn’t led to a reduction in staffed opening hours yet, once the technology is introduced it can and will be used to replace staff hours as funding gets cut. In addition to the threat of unstaffed libraries, the introduction of an ‘AI’ chatbot to school libraries is directly threatening the jobs of skilled librarians. Called “Book Bot” by Huey, this chatbot, housed in an iPad with cutesy trimmings, replaces the job of a librarian in helping kids find appropriate books to read. The company is advertising it as a solution to underfunding and understaffing in schools.
Ironically this private company has received government funding to do this. While Huey’s Book Bot hasn’t been introduced to public libraries yet to my knowledge, taken with the technology of unstaffed library access there is a clear threat to public libraries and all of us who work in the sector.
—Taichen
[Editor’s note: Taichen also shared two briefing documents they’d put together with coworkers; one on AI chatbots in libraries, and another on unstaffed libraries, aimed at educating other library workers. I’m sharing them here.]
Teaching has become a bullshit job
Programming instructor at a community college
I now teach programming online at a community college where I spend most of my time trying to detect cheaters and fake accounts. There’s a whole racket around state and federal scholarships paying nonexistent students. AI facilitates the process of creating profiles and pretending to take classes.
Almost every faculty meeting is about training us teachers how to teach students to use AI rather than helping us teach students how to think and learn and write. Even teaching has become a bullshit job.
I may be able to turn it around for some of my students by showing them how they can build text adventure games and then automate the playing of those games with AI. But only the most disciplined students (and those without the time constraints of working bullshit jobs to pay off their education debt) will get much out of my course. An English teacher colleague was able to buck the trend and shame his students into writing thoughtful essays. But it takes a lot of effort and skill that isn’t being taught to overworked teachers.
Universities are worse. Much of the funding for social media and AI impact research in psychology, sociology, computer science, etc., comes from big tech, and most papers do not even critically question whether AI is reasoning at all, or the ethics and safety of teaching and using it. Lost in the noise are the authentic voices of Timnit Gebru, Melanie Mitchell, even Gary Marcus, and the few impactful researchers questioning the inevitability of AI as a tool for hyper-capitalism, fascism, and genocide. It’s like Hitler discovered the nuclear bomb before the US did. And now fascism is mainstream, almost unquestioned, inevitable. It’s not the power of AI so much as the power of technology to shape minds: the capture of all sources of media and information and art. It’s just 1984, exactly as Reagan dreamed and Orwell feared. Fiction and art and news are no longer consumed as warnings coming from authentic, smart human voices; they are just entertainment, brainwashing tools. And artists and teachers and workers have no alternative but to participate in the Ponzi scheme or starve.
—Anonymous
AI Killed my job grading student essays
Grader
One of the first jobs I got out of college (2008, recession era) was grading student essays for standardized tests. Cool job, lots of retired teachers did it. We sat in a huge room for a few months, maybe 45-60 people, and scored every essay written by fifth graders in a state a few states away from where we were, based on pretty specific criteria. Just a temp job, to be clear.
Years later, this was a job I did during COVID, something that could be done remotely. But now the training, done online, only had 8-12 people in it, with some people flunking out of that training, and the work itself was scheduled only to last a few weeks. I learned that most of this essay grading was done by AI, and we were only getting the papers AI couldn’t quite handle.
—Brian Nicholson
The AI evangelists are tough to fight with
“Tech guy” working in a school system
I’m not a teacher, but I am a “tech guy” in a school system. We’ve taken a very slow and steady approach with AI, banning its use in all but very specific cases and requiring teachers to define how it will be used. But there are students who use it without any regard for the right or wrong of it.
But what has frustrated me most is the teachers trying to push for more AI access. They want to use it 1) as an AI detector, and no amount of “that doesn’t work” has convinced them, or 2) to grade and summarize papers.
And it drives me crazy that when I ask “does the student’s work differ significantly from their previous work?” they look at me like I have six heads. Like thinking about whether the student’s most recent paper reads like a “written-by-HR” ass document is a huge ask.
And as for point 2, there comes a time when I have to look at them and say “if the students are writing papers with AI, and you’re summarizing the papers with AI, then why are any of us in this building.”
To be clear, most of these folks are decent and only want the very minimal AI intrusion in their classrooms. But the few that are loudly in favor are driving me up a wall.
Like I said, most of the teachers (and students!) either want little to do with AI stuff, or just the barest minimum of streamlining a process they have difficulty with. And, for instance, our Special Ed folks are looking for uses of the tech that will help their students fill the needs they have. It’s noble for the most part, and I appreciate that they’re willing to listen to feedback and really talk through these things.
But the evangelists are tough to fight with. No amount of “these companies are hoovering up data” and “we have both legal and moral obligations to protect our students’ privacy” convinces them, even when we point out that they are not FERPA or COPPA compliant.
—B-rad
My team of education workers has been cut in half
Team manager at an edtech company
My current job is as a team manager at a small, private, education-adjacent tech company, where we’ve seen traffic steadily decline because LLMs can do what we do (even if they’re more expensive, less reliable, and wrong more often than not).
To try to find our place in this new world, over the last year, we’ve seen a transition away from our normal creative and fulfilling development work towards generating massive datasets that we’ve sold off as “training data” for LLMs to many of the big tech companies, under the guise of getting them better at math and reasoning.
This work is often mind-numbing and demoralizing, and it is especially demeaning to have creative programmers turn out rote, repetitive work like this. Worse than the generation is the QA sweep (which devs got looped in on): manually looking over these massive datasets to do a quality pass.
Moreover, this project was sold to me as a deal so lucrative that it would set us up for the next few years and let us hire new developers, but once we finally got paid, it only just brought the budget back to zero. So it increasingly seems like this is the future of my work and this company.
We had one round of layoffs last year when these projects started that my team luckily avoided. However, I was just told that more layoffs are coming. I know several good developers who are looking for work elsewhere, but the tech industry as a whole seems to be on a bit of a hiring freeze, and many of us have health insurance needs or families to support that makes simply walking away a terrifying choice.
[We followed up with the contributor to see what’s changed since they wrote to us six months ago. They had this to say:]
My team has literally been cut in half since my last message. Some were let go, some were reassigned to different departments, and some quit, seeing the writing on the wall. One person was let go via email because they were having trouble with their PC; the company decided to end their contract on the spot rather than help them debug the technical issue.
My company, or at least my corner of it, creates K-12 and collegiate math and science education material. My team helped create dynamic visual aids and homework-helping walkthroughs for solving problems in math and science. These are used some in classrooms, but mostly by students doing homework. A lot of my team are former teachers and educators who find this work engaging and satisfying: they feel they are still contributing to the education of the next generation, just more indirectly than teaching. My favorite feedback I hear from people in the wild when they find out where I work is some variation of “Thanks for getting me through high school math!”
Gen AI has not replaced this work, but because sites like ChatGPT can do most of what we can do without manual development work, our internal priorities have shifted away from creating these programs towards other efforts. The few of us who remain are instead tasked with figuring out how to best integrate LLM technology into our already existing tools and functions. This involves building the runway as the plane is taking off: creating the tools we need to use, as we are using them.
Multiple people have had their future contracts tied to specific LLM-related projects, and I’ve been told, in not so many words, that if the project fails, these people are gone. But without any clear roadmap or direction, let alone documentation for how to do such integration, I feel they are being set up to fail. The deadlines are fast approaching, and hopefully what we’ve cobbled together will be acceptable; but even if it is, these will still be LLM-powered features, with the same inaccuracy and inconsistency problems that plague all LLM projects. It would be embarrassing to release something that can be so incorrect at times when this company is known for mathematical accuracy. It’d be like your pocket calculator occasionally returning 2+2=5.
—Anonymous
Gen AI Edtech platforms “lovebombed” my client and then my contract was up
Edtech contractor
The last two years have been hell, because working in tech education you are fighting to make [clients] understand the risks and harms, and they all think they can just use a gen AI LMS to make the trainings that you make. We are increasingly seen as disposable, especially as women. I notice men in tech aren’t losing money making training, but we are. But that’s another story.
I had a contract, up until this month, to make IT trainings for an educational setting. I lost the role because they got love-bombed, basically, by two large gen AI edtech platforms: all these promises of productivity, ease of content creation, etc. It won’t work out for them long term, but they don’t see that. Anyway, part of the sales pitch was how the gen AI can make training on policy and other tech areas, which was my job. They are so sold on this that they ended my six-month probation with “we don’t see the trial as working.” I made incredible material for them, and worked in small groups and individually with staff on how to use the IT. Some of them had never used Word or Outlook before. They will now have terrible mandatory training too, full of errors and stolen work, but it will be generated in minutes. No one will proofread it or think like I do about the language, the accessibility. They just want quick, easy work. And the saddest thing is they won’t even save money, as they are paying the Twitter and TikTok influencers who work with these platforms TWICE what they paid me in one month to come and “train staff.”
—Michelle
The hiring committee required applicants who would incorporate Copilot into their workflows. I didn’t get the job.
Teaching Fellow
I applied for a role as an “Ethics and Regulatory Coordinator” at the University of Auckland a few weeks back. The role seemed to require the applicant to act as a go-between for researchers making applications, committee members deciding what kind of research they’ll allow, and the university bureaucracy itself. The detailed job description includes a point about applicants being familiar with Microsoft Office, including Copilot. As a final piece of background, ethics applications at UoA have been taking a while, and some researchers have been frustrated with long wait times and inconsistent feedback from committees, while the committees are apparently sick of dealing with poor-quality research applications that require a lot of remedial work.
At the job interview, I was asked about my familiarity with using Copilot to create efficiency solutions in Office. I gave a measured answer: I noted the usefulness of AI tools for summarizing spreadsheets, creating templates, etc., but said I didn’t trust Microsoft’s claims about data being kept separate, and stated that I didn’t think we should use LLMs in decision-making or communications (email summaries and responses, etc.) for research ethics.
It took an unusually long time to hear back about their hiring decision; I had to email the relevant HR person to ask if something had happened. When I did get a call, I was told that they appreciated my experience with research design and ethics, but they needed to find someone who was comfortable incorporating Copilot into the ethics process and experienced in doing so.
—Benjamin Richardson