It's open season for refusing AI
There's been a wave of successful efforts to ban, reject, and shut down AI.
Bernie Sanders and Alexandria Ocasio-Cortez made some waves when they proposed a nationwide moratorium on data center construction. It’s not a subtle policy idea, and it’s not meant to be. “We cannot sit back and allow a handful of billionaire Big Tech oligarchs to make decisions that will reshape our economy, our democracy and the future of humanity,” Sanders said in a statement accompanying the bill. “We need a federal moratorium on AI data centers.”
Sanders and AOC’s proposal builds on years of increasingly energized, widespread, and bipartisan opposition to data center development at the municipal and state levels. Several of those efforts have successfully shut down or delayed planned data centers. The movement has grown so broad, and so concerning to the AI industry, that a group was launched just to track it. Eleven states, from deep red to dark blue, are currently considering data center moratoriums; Georgia, Vermont, Michigan, Virginia, North Dakota, and South Carolina are among them. The mayor of Denver, Colorado just enacted one. The Seminole Nation of Oklahoma became the first tribal council to enact a moratorium on data center development.
It’s not just data centers, either. It’s a trend I’ve noticed over the last few weeks: Across the AI economy, workers and consumers have taken to refusing the technology in direct and robust ways.
Thanks for reading BLOOD IN THE MACHINE. This newsletter is made possible by paid supporters who chip in a few bucks each month (or $60 a year) so I can keep the lights on to do this reporting and writing. If you find value in BITM, and in helping to keep posts like this free for all to read, please consider upgrading to a paid subscription. Many thanks, and hammers up.
Wikipedia, one of the largest, best-known, and most-visited websites in the world, recently announced it was banning AI-generated content. The new policy states that:
Text generated by large language models (LLMs) such as ChatGPT, Gemini, Claude, DeepSeek, etc. often violates several of Wikipedia's core content policies. For this reason, the use of LLMs to generate or rewrite article content is prohibited.[1]
The policy was approved by a 40-2 vote. In recent months, Wikipedia editors had faced a significant rise in AI-generated content, much of which was riddled with errors. The problem had gotten bad enough that a group of editors, calling themselves WikiProject AI Cleanup, had to band together to systematically delete the deluge of AI-generated mistakes.

Editors had had enough. Following the lead of the German language Wikipedia team, the English Wikipedia decided that banning AI-generated content outright was the best move. It was also the overwhelmingly popular one.
I could go on. I think I will. The video game studio Capcom was recently compelled to declare that it “will not implement any generative AI assets” in its games. Previously, executives had suggested they were experimenting with AI, and the studio’s latest Resident Evil game had been showcased in a demo of Nvidia’s new DLSS 5 (Deep Learning Super Sampling) tech, which, as Games Industry notes, “was criticized for adding an AI sheen to character models.”
In other words, Capcom understands the cultural and economic climate in the gaming world as one that demands a promise to refuse to use AI art altogether.
It’s a similar story with publishing. One of the ‘big five’ publishers, Hachette, recently became the first major publisher to cancel the publication of a novel for containing AI-generated writing. A number of commentators and critics had already pointed out on social media that Shy Girl, by Mia Ballard, contained prose with all the telltale signs of AI-generated text. In a New York Times article, the head of an AI writing detection company, Pangram, concluded the same. When the NYT reporter reached out to Hachette for comment, the publisher pulled the book.
“Hachette remains committed to protecting original creative expression and storytelling,” a Hachette spokeswoman told the Times. The spokesperson added that Hachette requires all submissions to be original to the authors, and demands they disclose any AI use in the writing process. While much of the resulting discourse has focused on the long-term inadequacies of such a policy, raising questions like “how much AI can a writer use without penalty?” and “what constitutes reasonable AI use in writing?”, a consensus seems to have emerged among writers themselves, and it is something close to “none.”
There are pro-AI detractors, of course, but at least for the time being, the publishing industry has attuned itself to a policy of de facto AI refusal. It’s done so both for material reasons—it’s still unclear whether publishers can claim ownership over works generated in part by AI, or how many people actually want to read AI-generated books—and for cultural and political ones. Namely, to avoid more outrage and blowback from authors and readers alike.
Alex Preston, a book reviewer for the New York Times, discovered this after he was dropped as a contributor for using AI to generate a review that overtly plagiarized from another review in the Guardian. The specific policies may be nebulous, but the moral standards are still clear enough: use AI, risk torching your career.
Matters are a little less clear cut, perhaps, in journalism, where demands for volume are high, pay is low, and many tech and business journalists are more interested in emulating the efficiencies of the companies they cover than practicing their own craft. Over the last couple of weeks, there’s been a (somewhat bizarre, really) run of stories about journalists boasting about their AI usage. These include a content farmer for Fortune who “writes” 600 stories a year, a former Verge editor who uses AI to create drafts for his tech news Substack, and a Washington Post columnist who says she uses AI to research columns and to generate ideas.
There have been many good responses to this AI-in-journalism moment—the labor journalist Hamilton Nolan, Today in Tabs’ Rusty Foster, and New Yorker literary critic Becca Rothfield all contributed—but I think Marisa Kabas, the independent reporter, probably summed up the consensus sentiment best. She titled her rebuke, aptly, “Refusing to accept an AI-poisoned future of journalism.” There is no pride, she wrote, “in relying on a machine to do deeply human work.”
She opened her piece with an anecdote:
In a November conversation at the Urban Consulate in Detroit, the great writer and thinker Tressie McMillan Cottom was asked by host Orlando P. Bailey, “Do you have a daring idea for us to ponder and sit with for our collective future?” McMillan Cottom replied with this: “When people try to sell you on the idea that the future is already settled, it’s because it is deeply unsettled. I think that this promise of an artificial intelligent future is really just a collective anxiety that very wealthy, powerful people have about how well they’re gonna be able to control us in the future. If they can get us to accept that the future is already settled—AI is already here, the end is already here—then we will create that for them. My most daring idea is to refuse.”
Today, I refuse.
Indeed. Refusal certainly seems to be the practice, while not always named, animating so many of these actions. Refusal of AI in a field, occupation, enterprise, or art form we have been told is doomed to upheaval.
And these are just the events of the last couple of weeks. I didn’t even mention how educators at the University of Edinburgh called on the university’s administration to drop its OpenAI contract in an open letter signed by 470 teachers there. Or how the Hacks star Hannah Einbinder called AI creators “losers” who are “trying to rob real creative people of their gifts.” Or that Oberlin’s Luddite Club wrote a letter rejecting their school’s proposed “year of AI exploration”; the hand-typed missive was published last year, but went viral in recent weeks.
Hell, there are probably some high-profile acts of refusal I’ve missed.
Generative AI has never been popular in a majoritarian sense, even as the tools have accrued millions of users. Since the boom began in 2023, polls have consistently found that Americans are more concerned than excited by the technology. That remains true today, and perhaps truer than ever. A new Quinnipiac poll headlined its findings by stating that “Americans’ AI Use Increases While Views On It Sour.” Seventy-six percent of those polled said they do not think AI output is trustworthy, while 55 percent think AI will do more harm than good. Just a third said it will do more good than harm.
The margins of increased usage, meanwhile, comport with what you would expect to see as more employees are compelled by management to use AI in the workplace—a trend we know is taking place—and the gradual normalization of new technology that is being promoted by tech oligopolies and true advocates alike. But just reflect on that polling, which is by no means an outlier; every Pew poll I’ve seen over the last couple of years has found basically the same thing. The more people use AI, the less they like it, and the more concerned they are. It’s no surprise, then, that we’re beginning to see mass, overt refusal of AI in one context after another.
And what’s particularly interesting to me is how the politics appear to be moving from rejection to refusal. Max Read has written about the ever-proliferating hype and backlash cycles of AI discourse, and how last year marked the beginning of a “backlash to the AI backlash.” LLM products launched in 2022 to a symphony of hype about imminent superintelligence and a jobs apocalypse that lasted through much of 2023; when neither materialized in 2024, widespread backlash set in. The products generated slop with the wrong number of fingers, got things wrong all the time, and so on. By 2025, there were enough use cases and new demos, and certainly enough investment capital, for the industry to shoulder a renewed full-court PR press: to mount a backlash to the backlash. There’s a case to be made that this particular cycle may have reached its apex with the Claude Code episode, which was made to demonstrate that AI is proficient at generating code and novel apps.
Now the dust has settled from the backlash to the backlash a bit[2], and I think we’re seeing something of a new mode of AI protest emerging. As tempting as it is to call it a backlash to the AI backlash to the AI backlash, it feels different, and maybe more robust than that. Where the last backlash fixated on products, companies, and their harms and shortcomings, this new spate of refusals feels grounded in more categorical terrain. The questions we are litigating now are less ‘is this AI product good or bad’ and more ‘we have seen what it can do, and do we want AI to exist in this space at all?’
Do we want a half dozen tech giants spending hundreds of billions of dollars on data centers around the world to build what is essentially the same technology, spiking energy costs, creating noise and pollution, and even, according to one study, dramatically raising the temperature around the complexes due to a pronounced heat island effect? So that those tech giants can make good on their promises to automate jobs en masse and remake the social contract to their liking? A great many people who live closest to said projects have decided they do not, and rather than cut deals or hedge bets, they have chosen to refuse them.
Do we want AI text and image generators producing our journalism, safeguarding our stores of knowledge, creating our art? Even if it can? If not, then it makes good sense to refuse outright those products’ entry into those arenas. To ban AI-generated content from Wikipedia, from publishing, from video games. And to ban the companies aspiring to enrich themselves by taking over all that knowledge and content production from setting up shop in our backyards.
There is great power in refusal. The Luddites are mocked today because elites worked hard to distort their legacy—it is too inconvenient, too dangerous, even—but in their time they were cheered as folk heroes, and by refusing outright to submit to rank automation, they left the industrialists deskilling their trades terrified. The writers and actors in the WGA and SAG-AFTRA who went on strike in 2023 rallied millions to their cause by drawing a line in the sand and refusing to let their work be turned over to studio bosses with enterprise ChatGPT accounts.
Looking back at the events of the last few weeks, I can’t help but wonder if we’re seeing a reawakening of our capacity for this sort of mass refusal, as it becomes clearer by the day that AI promises to be an implement of automation; of worker exploitation and knowledge degradation; an enormous energy and resource consumer; a tremendous engine of wealth transfer.
There are more signs yet: Stop GenAI mutual aid groups that pull no punches in calling for exactly that. The similarly monikered Stop the AI Race, which protests in the streets of San Francisco. The New Yorker writer Rothfield, who has written “you don’t have to use AI—stop it right now,” is organizing a campaign to keep AI out of publishing, academia, and journalism. I have been asked to join a number of ‘Writers Against AI’ groups in recent days, and to sign a call to ban outright generative AI products aimed at children. There is a lot of political potential in straightforward refusal here, as Sanders and AOC have gleaned; labor, academia, and public interest groups might benefit from taking note.
Now, sure, “AI” is a nebulous descriptor that can be applied to a great many technologies and tendencies, many of which are not being operationalized to automate creative jobs and deskill workers. (This is a fact that is often wielded to AI industry advocates’ advantage to blunt or deflect critique: Do you hate spellcheck too??) But the truth is, it’s clear enough to most people what Silicon Valley’s project boils down to, in practice, when its executive class talks about AI. It is to move as much chatbot product, slop, and job automation as it can. It’s entirely possible—as well as moral and popular—to refuse this project.
[1] There are two exceptions: Wikipedia editors may use AI for copyediting, and for translating Wikipedia articles from another language into English, provided guidelines are properly followed.
[2] It showed that the technology is impressive in a lot of ways, and that it’s possible to create apps from scratch without much coding experience, but apart from further deskilling software engineers and automating code production, it remains unclear how much the business landscape will actually change.




