The most powerful takedowns of generative AI, from those who know its impacts best
It's knives out for AI: Engineers, artists, educators and other workers are raising the alarm more loudly than ever before. Here are 8 of their must-read broadsides against the tech.
Greetings, and welcome to another edition of BLOOD IN THE MACHINE, the newsletter about the humans caught in the gears of AI + big tech. I’m traveling this week—if any of my fellow machine breakers are in France this July, give me a shout—and under a deadline for a forthcoming project, so today’s post is something a bit different: A curated reading list of protests, critiques, and polemics by experts and workers who are watching generative AI tear at the fabric of their own fields.
Thanks to all who’ve subscribed, and especially all those who’ve paid for a subscription—it means a great deal, and makes the continuation of this work possible.
The knives are coming out for generative AI. There’s already been ample backlash against the technology and the companies selling it, but over the last few weeks, the heat’s gone up a level or six. We’ve moved from ‘artists and writers are protesting the threat to their industries’ to ‘even respected computer scientists are penning blog posts titled “Destroy AI” and engineers at data consultancies are going viral by saying “I Will Fucking Piledrive You If You Mention AI Again.”’
The anger and frustration, in other words, is unmistakable. For one thing, generative AI has sucked all the oxygen out of the room. Silicon Valley has become Gen AI Gulch, an industry monolithically dedicated to the technology, and the explicit pitch from its leading purveyors is that they’ll deliver freshly opaque algorithmic management tools and automate away creative labor of every stripe: The production of art, the creation of films, the writing of text, the completion of coursework. (Next week, I hope to write more about “the democratization of creativity,” which is how OpenAI CTO Mira Murati described generative AI’s benefit, and has fast become one of the industry’s logically impenetrable buzz phrases.)
That’s the promise, which many find alarming enough, but so far, the tech *doesn’t even work reliably*. So we’re getting a Google flooded with bad and specious AI “overviews,” a continued stream of hallucinated content, and creative workers who are losing their jobs despite it all. Large corporations buying up hundreds of thousands of enterprise subscriptions for ChatGPT despite it all. Militaries adopting the tech despite it all.
That’s why I think we’re watching the simmer of discontent rise to a boil: More and more people are skeptical, even incredulous. Folks from every background are pitching in to raise their voices, to say ‘why this?’ Why is this being foisted on our platforms, our workplaces, our lives? (The answer is probably ‘corporations are desperately hoping the tech can give them leverage over workers’ but I digress.)
Hence today’s post, which is one I’ve wanted to do for a minute: A roundup of some of the most pointed and persuasive critiques of generative AI, from those best positioned to understand its impacts on the ground. We’re talking artists, educators, engineers, cybersecurity experts, neuroscientists, illustrators, consultants, designers, and science fiction writers. If you want to truly understand how a technology is playing out, it’s best to hear it not from columnists or tech writers or AI boosters, but directly from those on the front lines where AI is being adopted.
So I wanted to gather the sharpest of these testimonials together in one place, as a resource for those thinking about how generative AI might affect their own professions, workplaces, and lives. Not everyone will agree with all of these critiques, but even generative AI advocates should have to reckon with the points they make. And note that a lot of the anger you’ll find here stems from the monolithic, predestined aura in which Silicon Valley has shrouded the technology; it’s not that these workers and researchers necessarily hate generative AI itself, but how it’s being rammed into every opening the tech companies and enthused executives can find.
ALSO note that there are tons of great critical voices on generative AI: Timnit Gebru, Emily Bender, Margaret Mitchell (their “On the Dangers of Stochastic Parrots” is a classic of the genre), Alex Hanna, DAIR, 404 Media, the AI Now Institute, the This Machine Kills boys, the Trashfuture crew, Ed Zitron, Joy Buolamwini, Reid Southen, Ed Newton-Rex, Karla Ortiz, Cory Doctorow, the list goes on and on and on. Gary Marcus wrote one of the first and most prescient takedowns of OpenAI’s brand of generative AI (“GPT-3, Bloviator: OpenAI’s language generator has no idea what it’s talking about,” in MIT Tech Review). Most of those folks have a whole oeuvre of tech criticism, which I very much encourage you to dig into—for the below, I aimed mostly to include voices who saw generative AI hitting their own fields on the front lines, in real time.
Without further ado, here are 8 must-read broadsides against the most pervasive tech trend of the day, from those who know its cost best.
I Will Fucking Piledrive You If You Mention AI Again
By Nikhil Suresh, data scientist and engineer
Most organizations cannot ship the most basic applications imaginable with any consistency, and you're out here saying that the best way to remain competitive is to roll out experimental technology that is an order of magnitude more sophisticated than anything else your I.T department runs, which you have no experience hiring for, when the organization has never used a GPU for anything other than junior engineers playing video games with their camera off during standup, and even if you do that all right there is a chance that the problem is simply unsolvable due to the characteristics of your data and business? This isn't a recipe for disaster, it's a cookbook for someone looking to prepare a twelve course fucking catastrophe.
Destroy AI
By Ali Alkhatib, computer scientist and data ethicist
I’m gravitating away from the discourse of measuring and fixing unfair algorithmic systems, or making them more transparent, or accountable. Instead, I’m finding myself fixated on articulating the moral case for sabotaging, circumventing, and destroying “AI”, machine learning systems, and their surrounding political projects as valid responses to harm.
In other words, I want us to internalize and develop a more rigorous appreciation of those who fuck up AI and its supporting systems.
By David Polumbo, illustrator
It has become standard to describe A.I. as a tool. I argue that this framing is incorrect. It does not aid in the completion of a task. It completes the task for you. A.I. is a service. You cede control and decisions to an A.I. in the way you might to an independent contractor hired to do a job that you do not want to or are unable to do. This is important to how using A.I. in a creative workflow will influence your end result. You are, at best, taking on a collaborator. And this collaborator happens to be a mindless average aggregate of data.
See also: “AI degrades our work, nurses say.”
Keep ChatGPT out of the classroom
By Liz Shulman, high school English teacher
Using ChatGPT in school leads students to believe they don’t need to think of their own ideas. As a result, they risk feeling unprepared, anxious, and insecure inside and outside of school…
All kinds of writing — all kinds of thinking — begin in the imagination. Writing requires fits and starts and a willingness to fail, to start again, to refine a thought and back it up. Of course, not all of our students aspire to be writers, but they all deserve the opportunity to be thinkers — to believe in themselves and build confidence, to observe and analyze their world.
ChatGPT eliminates the symbiotic relationship between thinking and writing and takes away the developmental stage when students learn to be that most coveted of qualities: original.
… Yet educators are being encouraged to use it with teenagers, who are at a crucial stage of distinguishing between right and wrong, between an original thought and a borrowed one.
I want my students to know their ideas have value, to understand they can rely on themselves and think through problems. It will become increasingly difficult for them to develop these skills the more ChatGPT mimics it for them. Understanding who they are begins with knowing what they think. ChatGPT might be able to do a lot, but it can’t do that. Keep it out of the classroom.
Also see: “The First Year of AI College Ends in Ruin” by Ian Bogost, and “Learning About and Against Generative AI Through Mapping Generative AI's Ecologies and Developing a Luddite Praxis” by Charles Logan.
Will A.I. Become the New McKinsey?
By Ted Chiang, science fiction writer
I would like to propose another metaphor for the risks of artificial intelligence. I suggest that we think about A.I. as a management-consulting firm, along the lines of McKinsey & Company. Firms like McKinsey are hired for a wide variety of reasons, and A.I. systems are used for many reasons, too. But the similarities between McKinsey—a consulting firm that works with ninety per cent of the Fortune 100—and A.I. are also clear. Social-media companies use machine learning to keep users glued to their feeds. In a similar way, Purdue Pharma used McKinsey to figure out how to “turbocharge” sales of OxyContin during the opioid epidemic. Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America.
See also: “Generative AI closes off a better future,” by Paris Marx.
Here lies the internet, murdered by generative AI
By Erik Hoel, neuroscientist
Now that generative AI has dropped the cost of producing bullshit to near zero, we see clearly the future of the internet: a garbage dump. Google search? They often lead with fake AI-generated images amid the real things. Post on Twitter? Get replies from bots selling porn. But that’s just the obvious stuff. Look closely at the replies to any trending tweet and you’ll find dozens of AI-written summaries in response, cheery Wikipedia-style repeats of the original post, all just to farm engagement. AI models on Instagram accumulate hundreds of thousands of subscribers and people openly shill their services for creating them. AI musicians fill up YouTube and Spotify. Scientific papers are being AI-generated. AI images mix into historical research. This isn’t mentioning the personal impact too: from now on, every single woman who is a public figure will have to deal with the fact that deepfake porn of her is likely to be made. That’s insane.
Also see: “A.I.-Generated Garbage Is Polluting Our Culture”
Beware a world where artists are replaced by robots. It’s starting now
By Molly Crabapple, artist and author
AIs can spit out work in the style of any artist they were trained on — eliminating the need for anyone to hire that artist again. People sometimes say “AI art looks like an artist made it.” This is because it vampirized the work of artists and could not function without it.
John Henry might have beaten the steam drill, but no human illustrator can work fast enough or cheap enough to compete with their robot replacements. A tiny elite will remain in business, and its work will serve as a status symbol. Everyone else will be gone. “You’ll have to adapt,” AI boosters say, but AI leaves no room for an artist as either a world creator or a craftsman. The only task left is the dull, low-paid and replaceable work of taking weird protrusions off AI-generated noses.
While they destroy illustrators’ careers, AI companies are making fortunes.... Generative AI is another upward transfer of wealth, from working artists to Silicon Valley billionaires.
Also see: director, actor, and filmmaker Justine Bateman’s “AI in the Arts Is the Destruction of the Film Industry. We Can't Go Quietly”
It’s the End of the Web as We Know It
By Judith Donath, educational software design expert, and Bruce Schneier, cybersecurity guru
If we continue in this direction, the web—that extraordinary ecosystem of knowledge production—will cease to exist in any useful form. Just as there is an entire industry of scammy SEO-optimized websites trying to entice search engines to recommend them so you click on them, there will be a similar industry of AI-written, LLMO-optimized sites. And as audiences dwindle, those sites will drive good writing out of the market.
I’m sure I’ve missed plenty of good ones and I want to continually update this compendium here, so send any solid candidates on over and I’ll add them to the list. As always, if *you* have experiences with generative AI impacting your work or personal life, I’m all ears and always happy to talk. In upcoming pieces, I hope to dig into how AI is impacting healthcare workers, video game industry workers, delivery workers, and beyond. Leave a comment below, shoot me an email, or message me on the app.
A couple of media appearances — last week I was a guest on the Offline podcast with Max Fisher and on Movement Memos with Kelly Hayes; both were great and very different chats, the former about the reality of generative AI tech on the ground right now, the latter about what social movements today might learn from the Luddites. Check them out if either sounds up your alley.
Alright! Until next time — keep those hammers up.