This Labor Day, let's consider how we want technology to work for *us*
On AI, workers' resistance, and reimagining a truly democratic technological society
Greetings, fellow machine breakers. This dispatch is arriving a bit later than I would have liked, as I spent last week at a remote gathering talking about the far-reaching implications of AI with some very smart folks in some unusual contexts—more on that before too long, I hope. I also have a half dozen or so pieces I need to finish up for the newsletter here, including ones on the tech politics of a prospective Kamala Harris administration, Mark Zuckerberg’s ‘bad boy’ open source play, and how the Alien films refract our dystopian present through the void of space. Fun stuff, promise. But for today, seeing as how it’s Labor Day, I thought I’d take a quick stroll down memory lane, consider the status of workers’ confrontations with AI companies—and ask what we want technology to do for *us*, for a change.
As always, this newsletter is free to read—so subscribe and forward away—but it is made possible by paying supporters, to whom I am always thankful. General Ludd salutes you.
Ten years ago this weekend, I stumbled onto a journal article about the Luddites that my partner had left out on a table in our Philadelphia apartment. She was a grad student at the time, and I found the Donald MacKenzie article among her reading assignments in a pile of printouts. The piece, on technological determinism and Capital, noted in passing that the Luddites—long cast in history books as reactionaries and/or idiots for smashing technologies they did not understand—were anything but.
No, their revolt was strategic and organized; “the most dramatic expression” of a “working-class critique of machinery,” as MacKenzie wrote. That machinery—wide frames and power looms that allowed lower-quality clothes to be produced more rapidly, at lower cost—was despised not for its technological qualities, but for the uses to which it was put: the machinery was deployed by bosses to drive down workers’ wages, degrade their craft, and facilitate the hire of child laborers to run it.
This was news to me. I’ve probably mythologized that day a bit since then, but I recall all of this feeling like a revelation. I’d been covering the tech industry as a reporter and editor for years; the so-called ‘techlash’ was still years away, but at Motherboard, we’d been watching the impact of Uber on taxi and delivery drivers, the worsening conditions at Amazon warehouses, and, more generally, the power big tech held over workers and users.
I went down a rabbit hole that weekend—the article led to another, and another—learning the actual history of the Luddites and the true nature of their rebellion, and a lot of things started clicking into place. I wrote a short piece called ‘You’ve Got the Luddites All Wrong’1 and I’ve been thinking about who technology is made to serve, and who is made to suffer, ever since. Now, almost exactly one decade later, we’re at the height of the AI boom, in a moment that draws some uncanny parallels to those early days of the Industrial Revolution. It’s a moment in which skilled creative workers are threatened by machinery that stands to depress wages, wipe out livelihoods, and degrade the quality of output, so that a relatively few technology owners might benefit.
But this Labor Day, I wanted to reflect on a different question, and perhaps open up some room for dialogue: Now that a growing number of people have got the Luddites right, well, what next? We know that AI, automation, and tech platforms are being used to squeeze workers and to profit Silicon Valley. The resistance to AI—we’re seeing everyone from artists to nurses to truck drivers to voice actors pushing back—is lively, full-throated, and popular. So where do we go from here? We’ve seen plenty of posts and pundits exhorting us to get with it or technology will leave us behind; I say this Labor Day is an occasion to consider what *we* want technology to do for *us*.
Yes, times are still dark; jobs in creative industries from video games to animation are under threat from executives aggressively wielding AI, right now. Illustration and copywriting gigs are drying up as managers turn to Midjourney and ChatGPT for cheap alternatives to creative human labor. Management remains as eager as ever to automate as many tasks as possible with AI, whether or not they really can. And yes, workers will and should continue to resist exploitative deployments of tech, whether through strikes or protests or contract negotiations or class action lawsuits.
But drawing from a number of promising but incomplete victories—primarily, the WGA and SAG-AFTRA’s triumph over the studios in preventing them from using AI to create original work, or to replace or degrade their labor, and the artists’ class action lawsuit against the AI image generation companies moving to discovery—I’d like to encourage us all to spend some time thinking about how we might expand our control and input over how AI, and tech more broadly, shapes our lives; whether in our workplaces, in the streets, or at the ballot box.
CONTINUE TO REFUSE THE TECHNOLOGIES OF EXPLOITATION
We’ve been playing defense for so long—and will absolutely continue to need to do so—but what better day is there to underline that there *is* power here. There is great solidarity among those confronting AI, who are building power around keeping it from automating vital work, for instance. There’s perhaps as much potential as danger if we recognize this, and organize accordingly. Now, some organizing will continue to focus on the politics of refusal: identifying the contexts in which AI should simply not be used, and resisting it outright. Do we want AI replacing artists simply so studio heads can bolster their bottom lines? Do we want AI to take over “teacherless classrooms”? Or AI to generate reportage in newsrooms? No, no, and no, I think.
But how do we codify this refusal, and build upon it? Those WGA and SAG-AFTRA contracts demonstrate one way forward, as unions are perhaps the best tool at our disposal for enshrining worker protections from AI. Another is the legislation currently sitting on Gavin Newsom’s desk in California that would mandate that studios obtain consent before using AI to create digital replicas of performers and workers. Yet another is the bill, supported by the Teamsters and others, that died last year when Newsom vetoed the effort to mandate that a human driver be present in autonomous trucks. And the list is growing still.
FORGE THE TECHNOLOGIES OF CONSENT
Each of these efforts aspires to forge mechanisms of consent, and aims to place the power of deciding how AI will be used in a job in the hands of the worker—the one who knows that job best, after all—allowing them to benefit, rather than just management. Such tools will be imperfect, and perhaps insufficient, but they’re a good start, and perhaps the most promising arena for establishing lasting worker power over AI. I’d love to see more work done in this space, and more thought given, in more corners, to how we can build policy that prevents wanton and harmful automation of work and moves decision-making power over the tech into the hands of labor; how to truly empower workers to use AI as they see fit—or to *democratize* it, to put it in Silicon Valley terms.
To wit: AI might be a useful copilot for a long haul truck driver—*if* there’s a human on board as a safety backup, and *if* “AI” doesn’t just offer tech companies an excuse to move the driver to an offsite facility where they must monitor the progress of multiple trucks at once and intervene in cases of danger, lowering safety standards and degrading jobs in the process. AI might be useful to video game workers on the software engineering side, who like using it to automate the generation of rote code, but pose an existential threat to the concept artists who work down the hall—how do we ensure any benefits of AI automation are enjoyed by the coder, while protecting the artist, so jobs and game quality alike do not suffer?
AI might, theoretically, be useful to nurses to assist in, say, diagnosing patients—if the addition of AI systems does not constrain nurses’ ability to override faulty diagnoses or provide a hospital or healthcare provider an excuse to hire fewer workers. What serious protections might be put in place such that if AI is used, it’s with worker consent, and as an augmentation, and not an automation tool?
MANUFACTURING THE TECHNOLOGIES OF CONSENT (AND COMPENSATION)
Similarly, lawsuits that allege AI companies have violated the copyrights of artists and journalists by training systems on their works without consent or compensation are proceeding. Meanwhile, tech companies and startups are attempting to build ways for creators to license, and presumably profit from, their work; most such efforts, like Adobe’s, have been maligned as woefully insufficient so far, but the idea’s interesting at least.
This does raise the question: What is the future we want, with regards to ensuring artists are fairly paid for their work? Probably not a Spotify-like system that pays out pennies, which is what risks taking shape now. In many ways, freelance and independent artists and workers are the most vulnerable to AI and automation; I hope this drives many to organize, but the options for precarious workers in the here and now are few. We desperately need good mechanisms for helping writers and artists and creators earn the compensation they’re due for their work, and to sustain a good living, protected from plagiarism and precarity. All ideas welcome here.
TOWARDS THE HORIZON: 4-DAY WORKWEEKS, UNIVERSAL HEALTHCARE
Beyond that, we should be thinking about proactive moves too—the time has never been better to advocate for a 4-day workweek, for instance. We’d just be taking the AI companies, who say they’re in the process of ushering in a sublime productivity revolution, at their word! What better way to ensure all workers, not just CEOs, share the benefits of this presumably enhanced efficiency? While we’re at it, the same case might be made to agitate for universal healthcare again, too. The case should be obvious: In a world of hyperabundance, everyone gets access to free healthcare. It should be the very first thing. Anything else is immoral. Otherwise you get Elysium.
This is simply to say, for the last two years, investors, policymakers, and corporate enterprises have taken the tech companies at their word when they’ve pitched AI as a revolutionary and transformative technology—they should be methodically pushed to support commensurate revolutionary and transformative social policies, then, no? Unless, of course, they want to admit that they are full of shit.
Now, look; this is of course a haphazard and deeply incomplete list of ideas, but again; what better time to get the ball rolling? There are more people, users, and workers thinking about technology and how it’s shaping their lives—and willing to push back openly when that shape is turning out worse—than at perhaps any other time this century. Generative AI is in this sense a burden and a beacon; a vast and dull force embraced by management to automate and oppress creative work, that may yet spark a broader reimagining of how we coexist with technology in general. And how we want to make it work for us, and not the other way around.
Happy Labor Day, and please do drop thoughts, ideas, readings, and/or schemes for realizing a world where technology is democratically developed, worker-first, and wielded for the public good. Until next time, hammers up.
BONUS LINK: You’ve really got to read Ted Chiang on why AI isn’t going to make art in the New Yorker. It’s a beautifully argued piece, and places another nail in the coffin of the AI industry’s ‘AI will democratize creativity’ claims.
The story is a mess format-wise now that VICE has been mismanaged, bankrupted, and sold off for scraps, but it’s technically still readable here.
As a retired software engineer, I got interested in the claims Microsoft was making about the ability of Copilot to write code, so I tested it. It failed, rather spectacularly, in tests on writing well-known algorithms in Swift, C++, and even pseudocode. In one case it left out half the algorithm; in another it emitted random gibberish instead of correct syntax at one point. Certainly not ready for serious use.
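(For readers curious how such a spot-check might look in practice: here is a minimal sketch, in Python rather than the Swift or C++ the commenter used, and with an algorithm and function names of my own choosing. The idea is simply to run a model-generated implementation of a well-known algorithm against a trusted reference over many random inputs, which is how omissions like “left out half the algorithm” surface immediately.)

```python
import random

def generated_binary_search(arr, target):
    """Stand-in for the model-generated code under test."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

def reference_search(arr, target):
    """Trusted reference: linear scan via list.index."""
    try:
        return arr.index(target)
    except ValueError:
        return -1

# Fuzz the generated code against the reference on random sorted inputs.
random.seed(0)
for _ in range(1000):
    arr = sorted(random.sample(range(100), random.randint(0, 20)))
    t = random.randrange(100)
    got = generated_binary_search(arr, t)
    want = reference_search(arr, t)
    # Values are unique, so a correct search must agree exactly.
    assert got == want, (arr, t, got, want)
```

A harness this small catches the gross failures described above; it is not a substitute for reviewing the generated code itself.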
As a father, I’ve had to watch my younger son be turfed out of his career as a freelance graphic designer. The jobs just dried up and disappeared, as generative AI took over cheaply but poorly.
But as a longtime believer in fairness, and a believer that we now have the ability to create a post-scarcity economy for every member of the human race, just not the will to do it, I think that even as we protect the lives and livelihoods of workers against encroachment by AI replacements, we need to be very concerned about the large number of people, especially in the Global South, who work for a few dollars a day to make the AI datasets work and to put guardrails on them. Without them the mechanical Turk won’t work, but the AI companies—mostly through third-party entities, to avoid accountability—want to squeeze them even harder than the workers they employ directly.
And as someone who wants there to be a human-habitable planet for my granddaughters to inherit, I think we need to find ways to limit the use of energy and water to power the training of the AI engines that seem to be so profitable for just a few people.
How do we do all these things at once? Damned if I know, but I’ll keep thinking about it, and I’m definitely open to suggestions.
I don't know about you, but I'm not demanding a right to get paid to work for someone else; I'm demanding the right to be able to live comfortably even if nobody cares to hire me for enough $$$ to live comfortably on, or even if I am, for whatever reason, unable to work, or even if I'm busy doing something nobody wants to pay me to do, like raising my own kids: https://diy.rootsaction.org/petitions/end-poverty-demand-a-ubi-equal-to-what-congress-pays-itself