Workers know exactly who AI will serve
A new study shows workers aren't buying Silicon Valley's hype, and know just who will benefit from AI in the workforce if current trends hold.
It’s been two and a half years since generative AI became Silicon Valley’s consensus product, and the industry’s standard line in promoting that product goes something like this: AI is powerful and even scary, but it’s going to make our lives easier, do work we don’t want to do, and pave the way for a world of abundance. OpenAI even began that period, way back in the heady days of 2022, as a nonprofit that claimed it was dedicated to ensuring all of humanity would share in that abundance.
Now, in 2025, OpenAI is vying to complete its transition into a fully for-profit corporation, Sam Altman has become one of the richest men in the world, and one of the biggest markets for generative AI is enterprise software, where OpenAI and its peers and competitors, like Anthropic and Microsoft, are selling AI as an automation service. Meanwhile, those companies are touting the transformational power of AI in the workplace as loudly as ever. The question is, where do workers stand on all this?
Blood in the Machine is 100% funded by readers. If you find this kind of writing, reporting, and analysis valuable, consider becoming a paid supporter so I can continue doing it. Many thanks.
According to a new study by FGS Global, they see a technology that will primarily benefit large corporations, that will be used to surveil them and invade their privacy, and over which they will have little power. FGS interviewed 800 union workers and 800 nonunion workers, as well as industry and political leaders. (Disclosure: The study was commissioned by the Omidyar Network, where I was previously a reporter in residence.)
Workers are excited about the potential productivity benefits the technology enables, but are also keenly aware that as it stands, those benefits will be captured by management, and that they will have little control over how AI is ultimately used in the workplace.
In other words, I would say that, by and large, workers are seeing right through Silicon Valley’s hype and seeing AI for what it is. They get it.

Now personally, I’d rank government lower — unless your rationale is using AI DOGE-style to justify eliminating government — but otherwise, seeing as how, in a workplace context, AI is primarily an automation tool that management will use to cut labor costs, that’s about how I’d rank who stands to benefit from AI, too. Corporations, execs, and startup founders selling the stuff at the top, workers at the bottom.
This list of fears about how AI will be used in the workplace is pretty spot on, too; workers are aware of and educated about the litany of risks posed by management foisting AI tools on them.
However, when asked about the most concerning impacts of AI in general, job loss falls a couple of slots—workers are more broadly concerned about job loss as a general phenomenon than about the threat AI poses to their own roles. Once again, they’re well aware of the complexities of their work, and of the limitations current AI systems possess.
The employment threat varies depending on which field you’re in—if you’re an artist or a copywriter, the threat posed by managerial AI is more existential—but privacy concerns and surveillance are perhaps a more uniform risk to workers. It’s interesting, and surprising to me, that so many workers ranked AI’s threat to children’s ability to learn so high; I do think that threat is real, and perhaps the ranking reflects having witnessed the impacts of social media, the last generational tech trend to adversely impact youth.
Just wanted to pull out this slide, too, which shows just how overwhelmingly concerned workers are about AI’s impact on privacy—these are workers who have past experience with exactly how mandatory digital systems affect them at work:
Finally, here are the benefits workers see in AI…
… *if* they could use it in a way that wasn’t dictated solely by management.
Only a third of non-union workers and 39% of union workers think they’d be in control of the technology at work:
Most workers want regulations and safeguards, and measures to prevent AI systems from being used against their interests—and, of course, from being used to erode working conditions.
“Workers don’t think they will benefit the most from the growth of AI tools in the workplace—with productivity gains accruing to those in charge,” Graeme Trayner, a partner at FGS and a co-author of the survey, tells me. “This has the potential to further exacerbate frustrations around income inequality.”
“Workers do bring a healthy skepticism to AI but see the positive impact it might have on their roles,” Trayner added, “particularly the potential to give all workers access to the same expert knowledge. However, this optimism could easily erode if the technology isn’t properly guided.”
Trayner expects that workers will increasingly demand more transparency and guardrails around how and when AI is used in the workplace, especially around automation and privacy matters. “We were also struck by how fears of AI are driven by concerns over data privacy and security, more so than fears over job loss or displacement,” Trayner told me. “Workers express deep misgivings over how AI may infringe upon their own digital rights.”
Indeed—as historical precedent suggests, workers tend to know best how managers and bosses will use technology designed to enhance productivity in the workplace. Typically, it will be used to speed up their work, surveil their behavior, and automate their tasks (or jobs) to save on labor costs. The hype around AI may be louder than usual, the stories about its implications more dramatic, but workers know the score. If they want to capture the benefits of a technology—or merely to ensure it isn’t used to squeeze them—as usual, they’re going to have to fight.
In my information literacy class last night, we had a guest speaker who has previously written about AI and librarianship, and spent about half of the class discussing AI literacy. Of the thirty-odd students in the class, the biggest concern raised was the environmental impact, followed by the impact on labor and privacy concerns.
The vast majority of my classmates already work in the field of librarianship, with the rest of us either switching to the field or in jobs where a master's degree in library and information science would be beneficial. I know it's a small sample size in a hyper-specific industry, but I didn't see the environmental impact listed on the slides you shared, and it makes me wonder where it would've been ranked if it had been included in the survey.
When asked to identify as an enthusiast, dabbler, skeptic, or luddite with regard to generative AI, the vast majority of students who responded chose skeptic, with something like a bell curve forming across the other responses. I chose dabbler, skeptic, and luddite, but clarified that I was more of a luddite in the labor sense of the term, and recommended this blog.
Based on the chat that was occurring during the discussion (it's an online course, but very participatory given that it's synchronous), most of my fellow students recognize that generative AI could potentially be an existential threat to our occupation. I mentioned how the political science/international relations professor Paul Musgrave has stated that the "recommendations about what to read in the scholarly literature" provided by Claude were "on par with any recommendation I’ve ever gotten from an organic research librarian," which may cause some university administrators to (mistakenly) think they can replace us with an LLM.
However, I think it's worth noting that, for research librarians, experienced academics are only one small part of the community we serve, and providing research recommendations is only one small part of our interactions with them. I suppose I should clarify that I'm not actively working at an academic library yet, just applying for these positions (which is kind of terrifying given the tumult that trumpism is causing in the field of higher education). Nevertheless, the value I hope to provide in the role of research librarian goes far beyond just providing scholarly literature recommendations, but that's something I'll expand on in a more appropriate space than this comment/note.
The problem, as I think many workers realize, is that employers in many industries beyond just higher education or public service are going to be emboldened by the actions of Musk, DOGE, and Trump to try to reshape employment in their sectors around LLMs and generative AI. What I don't think those employers realize, but that many workers might (even if only subconsciously), is that beyond the brittle nature of generative AI, market concentration, rising inequality, private equity, and enshittification have all come together into something like an economic bubble. When that bubble pops, which I think will occur due to trumpism's in-flight disassembly of the modern administrative state taking a needle to it, the outcome has the potential to look far more like France in 1789 than the techlash.
Sorry for such a long comment; the tl;dr is essentially that I'd be curious to see how workers would have ranked the environmental impact had it been included as an option in the survey of their concerns.
To paraphrase Cory Doctorow, we can't just focus on what technology does, but also on who it does it for, and who it does it to.