The artists fighting to save their jobs and their work from AI are gaining ground
From a milestone in a class action lawsuit against the AI image companies to animators pushing to contain AI in the workplace, it was a good week for human artists
Greetings and welcome to another installment of BLOOD IN THE MACHINE, the newsletter about Silicon Valley, AI, and work. This thing is free to read, so sign on up below. It is, however, an endeavor that takes many hours a week. If you get value out of this work, it would mean a great deal if you became a paying subscriber—chipping in a few bucks a month would allow me the resources to continue and expand said work. Anyway! As always, cheers, and keep those proverbial hammers at the ready.
It’s been an exceedingly rough year and change for working artists. Few groups have been more vulnerable to the rise of generative AI than they have. Illustrators have watched their commissions plummet as commercial clients embrace AI art generators. Concept artists and asset designers at video game companies are being laid off, and those who remain are being asked to use AI to fill the production gap. Existential dread feels rampant in the field.
Which is why it’s nice to report a little good news on this front, for a change: In some key arenas, artists are actually gaining ground against the AI companies chipping away at their livelihoods and the corporations using AI to automate their work. This week, a judge ruled that a lawsuit filed by a group of artists alleging that AI image generation companies have infringed on their copyrights can proceed and move to the discovery phase. This means, of course, that the companies have to turn over information about how their models are trained, whether the training data includes copyrighted work, and how much of it, as well as relevant internal communications. It’s a day the AI companies desperately hoped never to see.
Meanwhile, this week gave every indication that the animation industry is organized, energized, and ready to tackle the threat AI poses to workers head-on, in a way few fields have been before. The Animation Guild began negotiations with the Alliance of Motion Picture and Television Producers, which represents the Hollywood studios and major entertainment companies, and securing AI protections is at the top of its list. According to organizers, the Stand With Animation rally TAG held last Saturday to prepare for the negotiations was the biggest in its history. The battle lines are drawn, in other words, and AI is on the other side. (Across town, it’s worth noting, video game actors are on strike, too—fighting for protections to ensure AI won’t be used to replace them.)
All told, it’s been an encouraging week for artists, not to mention workers everywhere who might hope to insulate their livelihoods from onerous and/or top-down implementations of AI. It’s too early to call it a sea change, but these events do seem to rhyme with certain writers’ (ahem) theories that the generative AI boom is easing. Either way, they’ve certainly offered some encouragement to artists who may have needed the boost.
“I’m ecstatic,” says Karla Ortiz, one of the plaintiffs, along with fellow artists Kelly McKernan and Sarah Andersen, in the case against Midjourney and Stability AI, when I ring her to talk about the milestone. “I definitely had a nice big bottle of sparkly wine.”
The lawsuit Ortiz is part of is one of the most closely watched legal challenges to the AI industry, and there’s little ambiguity here: This is a major victory for the artists. It’s sure to be a long and hard road ahead, but many predicted the lawsuit wouldn’t get past this crucial hurdle. Now that it has, the artists can get down to the question at the root of it all: Can AI companies legally ingest copyrighted materials found on the internet to train their models, and use them to pump out commercial products that they then profit from? Or, as the tech companies claim, does generative AI output constitute fair use?
As Ortiz points out, it really boils down to the matter of consent.
“There was a big question as to whether generative AI companies utilizing the works and data and names of people without their consent—whether that was okay in the eyes of the law,” Ortiz tells me. “There was the big lingering question of whether all of this was going to be thrown out before we got to explore all that in the courts. And it’s a really big deal because the judge found many of our claims plausible enough to allow us to continue to pursue this case,” she says. “And to say, ‘Hey, we believe it’s not okay for multimillion- and in some cases billion-dollar companies to use our works without our consent, without our permission, without our knowledge or credit or compensation and so on—to make extremely profitable models that generate imagery that feels like us, and also competes in our own markets. We’re here saying that’s deeply unfair, that’s not okay.’”
Their overall goal, she says, remains the same: to set a clear precedent for what can and cannot be done with artists’ work. “What’s okay and not okay to use,” she says, “and just because it’s on the internet doesn’t mean that it’s publicly available for any major tech company to profit from.”
Each of these, of course, is a key point in the raging debate over AI output. The AI companies and their backers argue that any publicly available data is fair game to be used as training data, and that because the resultant output from the models is transformative, it constitutes fair use. Artists say not so fast: the models have not only ingested copyrighted works without consent or contract, but they can often replicate artists’ work almost pixel for pixel—hence the nickname they’ve garnered, “plagiarism machines.” And the fact that the output is then entered into markets where it competes with the artists’ own work, they argue, violates fair use doctrine.
As a result of all this messiness, some companies have been reluctant to use the AI image generators too widely or too publicly. This development may deepen that corporate wariness.
“The implications of this particular order are very, very interesting,” Ortiz says, in that “our copyright claims were allowed to continue on companies that have used some of these models and datasets as third party software.” In other words, it’s not just Midjourney or the major AI generator companies themselves who may be liable, but any company that licenses or uses the systems in a commercial context. “So it’s telling the world, ‘Hey, you know, actually, if you use this—and at this point everyone knows, almost everyone is aware how these models are built—there’s a potential your company could be held liable, enough to get to at least the discovery stage, on copyright infringement.’ Hopefully that sends a message to companies trying to use these exploitative models.”
Ideally, Ortiz says, it will lead to more companies taking a pause on adopting the technologies, while this gets worked out in the courts—giving artists a little room to breathe. “I’m ecstatic that we’re allowed to take this next pivotal step,” she says, and that they’ve been given “the opportunity to pursue and to vindicate our rights—and embark on that journey firmly. I’m really, really excited for what’s next.”
“And I’m glad I get to have front row seats,” she adds with a laugh.
So that’s the big news on the legal front for artists. The big news on the organizing front is that the Animation Guild is mobilized and ready for its battle against companies that want to use AI to automate its members’ work. The art of organizing around the threats AI poses to workers’ livelihoods, and artists’ in particular, has clearly matured, and the issue has lit one hell of a spark among artists and creative workers.
I had a piece in WIRED this week profiling Mike Rianda, the director of The Mitchells vs. the Machines, and chronicling his and his fellow animators’ efforts to protect their livelihoods from AI as they enter into negotiations.
Here’s a relevant part from that story:
Rianda is convinced that executives are quite serious about using AI to cut jobs and save on labor costs as soon as they can. “Having a little bit of a vantage point of being in rooms with executives,” he says, “[I’ve heard] comments like, ‘Look, in future, I really do think it could literally just take half the jobs and say, good-bye.’”
The Animation Guild (TAG) spent much of 2023 studying AI’s impact on jobs, forming a task force dedicated to analyzing AI in animation, watching the way that AI took a central role in the Writers Guild of America (WGA) and Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) strikes, and meeting with experts at organizations like the AI Now Institute…
And Rianda doesn’t think it matters much if the AI is any good or not. “They'll profit for two years until the bottom falls out and everyone's like, ‘This is horseshit.’ But by then, it's too late. The job's already gone. They're not going to be like, ‘Oh, you know what, let's go back to the old way and pay everyone super fairly.’”
So Rianda has become one of the most recognizable faces in the push for protections against AI in his industry, joining TAG’s organizing committee, reaching out to colleagues to get involved, and lighting up Twitter with proclamations like: “My opinion is that the standard should be studios cannot replace a SINGLE artist with AI. Period. Without that, AI will start replacing 'small jobs' + will begin to hollow out our industry one job at a time.”
On Saturday, the guild held a Stand With Animation rally in Burbank. Hundreds, if not thousands, of animation workers gathered in the parking lot of IATSE Local 80, toting signs that were, fittingly, well drawn and framed, featuring characters like Bender from Futurama and Bob from Bob’s Burgers, with slogans like “AI Can’t Replace Artists” and “Leave Animation to the Humans (Because AI Can’t Do It).” Anti-AI sentiment easily predominated.
When I interviewed writers and actors at the picket lines of the WGA and SAG-AFTRA strikes last year, there was a mix of sentiment around AI, which, while largely negative, encompassed anxiety, uncertainty, equivocation, and anger.
The crowd in Burbank was the most uniformly and passionately anti-AI I’ve ever witnessed.
It took me by surprise a little, I’ll be honest—and I’m online watching discourse over AI play out every day. I was not expecting such unified, vociferous, and concentrated opposition. The (actual, non-caricature) Luddites would be proud, especially because, like the Luddites, these artists are in no way acting irrationally or as reactionaries: They understand exactly how AI can destroy their trade and their livelihoods, and they are opposing its use in those contexts, strategically and methodically. They will be mocked and derided by AI advocates who see their opposition as an obstacle to profiteering, just as the original Luddites were derided by factory bosses. But the artists know they aren’t opposing technology for the sake of opposing progress or anything like that. They just don’t think AI should be used to automate the creation of art.
I snapped some photos at the rally, which was packed.
Now, it needs to be said that neither of these developments, on the legal front or the organizing front, is conclusive. They’re tentative gains, a snapshot of promise. There’s a long way to go before artists can feel any real or lasting protection from the tech companies and corporations seeking to profit off of or automate their work. But it’s a start: a class action lawsuit getting further than many in our tech-obsessed culture thought it could go, and thousands of animators united, standing together, having their say about how automation technology should be used in their work. Neither the legal case nor the organizing is a campaign against the technology itself, after all; both are efforts to wrest back control over how the technology is used in their lives. Over agency.
For all the crowing from Silicon Valley about how AI “democratizes” art, here are the artists, refusing to be steamrolled, in the streets, democratizing the use of AI.
It is, I might venture to say, a beautiful thing.