26 Comments
Ralph Haygood:

"In fact, the left appears to be so successfully engaged in matters related to AI that one can't help but wonder if allegations about its supposed ignorance of the technology are motivated by a desire to change the very terms of the debate.": Given the source - "Dan Kagan-Kans in the effective altruist AI newsletter Transformer" - I assumed that was the motivation even before reading further. Why would any decently informed person expect good-faith arguments from Sam Bankman-Fried's crowd?

"... the famous stochastic parrots paper posited that AI is not really intelligent, it's a next-token prediction machine. 'The left' has metabolized this conception of AI, and uses it as an excuse to write off AI's import, which is growing by the day." In other words, Kagan-Kans is stupid, dishonest, or both. (No, I'm not going to be nice about this. It's long past time for that.)

In the first place, there is no, repeat *no* reasonable doubt that what currently passes for AI isn't intelligent as most people understand that admittedly nebulous term. Here's Rusty Foster on the subject two days ago:

"I've watched all three of my children learn what a cat is, and in each case the number of pictures of a cat they needed to see was not 'all of them.' It was like, two or three? Half a dozen, tops. I helped them learn to speak and read fluently, and the number of Reddit posts required was not 'every Reddit post.' I don't need to know what mechanism underlies human intelligence to rule out the possibility that it's the same as what a large language model does. The whole trick underlying the apparent magic of modern A.I. is simply giving it tons of data. Give it the whole internet. Give it every book ever written. This is required - it does not work with less training data ..."*

(https://www.todayintabs.com/p/a-i-isn-t-people)

Moreover, this assessment and then some is widely shared by people with credible credentials to know what they're talking about. See, for example, a piece I've cited here before:

"Artificial intelligence (AI) systems with human-level reasoning are unlikely to be achieved through the approach and technology that have dominated the current boom in AI, according to a survey of hundreds of people working in the field. More than three-quarters of respondents said that enlarging current AI systems - an approach that has been hugely successful in enhancing their performance over the past few years - is unlikely to lead to what is known as artificial general intelligence (AGI). An even higher proportion said that neural networks, the fundamental technology behind generative AI, alone probably cannot match or surpass human intelligence."**

(https://web.archive.org/web/20250305233251/https://www.nature.com/articles/d41586-025-00649-4)

It's high time for relentless jeering at fools like Kagan-Kans who refuse to acknowledge reality. What they're engaging in is cargo-cult futurism in the service of profiting from intellectual property theft, and it should be called out as such every time they spout it.

In the second place, rejecting the asinine and offensive claim that what currently passes for AI is intelligent absolutely doesn't imply writing off the social ramifications of the embrace of "AI" by greedy and vicious members of the boss class, as this post (Merchant's) amply demonstrates.

"not just consider the core technology, which at this point is nearly impossible to assess apart from its owners and developers.": It never is; technology is never just technique.

I learned this early, because I was educated as a physicist. (I have degrees in the subject from Irvine, Santa Barbara, and Cambridge.) During the second half of the 20th century, nuclear weapons cast a long shadow over physics. (One of my instructors and research supervisors, Fred Reines, was a veteran of Los Alamos.) I came of intellectual age knowing about what was done to Robert Oppenheimer, and I saw the even-then appalling quality of the people to whom Oppenheimer et al. in the USA and Andrei Sakharov et al. in the USSR gave those weapons. Recently, the quality of those people in the USA has plummeted to a new low.

Motives and character matter. As Foster remarked of "AI", "[T]he evil is not the technology - it's the dreams of the people trying to sell it to us." And of the people eagerly buying it, for whom other people are just things to be used and discarded.

*As Foster noted, a basic LLM can be created with 200 lines of Python. That isn't hyperbole. See also Sebastian Raschka's book "Build a large language model (from scratch)". The differences between that Python and the logic underlying ChatGPT, Claude, or Gemini are just scale, user interface, and gargantuan quantities of (mostly stolen) training data and (grossly underpaid) "reinforcement learning from human feedback".
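To make the "next-token prediction machine" framing concrete, here is a toy character-level bigram model. This is a hypothetical sketch for illustration only, not the ~200-line GPT from Raschka's book; a real LLM learns transformer weights rather than raw co-occurrence counts, but the objective is the same in spirit: given context, emit a probability distribution over the next token and sample from it.

```python
# Toy next-token prediction: a character-level bigram model.
# Real LLMs replace these raw counts with learned transformer weights,
# but the core loop -- predict the next token, append, repeat -- is the same idea.
from collections import Counter, defaultdict
import random

def train_bigram(text):
    """For each character, count which characters tend to follow it."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, seed, length, rng):
    """Repeatedly sample the next character from the learned distribution."""
    out = seed
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # no observed continuation for this character
            break
        chars, weights = zip(*followers.items())
        out += rng.choices(chars, weights=weights)[0]
    return out

corpus = "the cat sat on the mat. the cat ate the rat."
model = train_bigram(corpus)
print(generate(model, "th", 20, random.Random(0)))
```

The output is fluent-looking gibberish assembled purely from statistics of the training text, which is the point Foster and the stochastic-parrots authors are making: scale changes the quality, not the kind, of the trick.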

**If you're wondering what might lie beyond neural networks (without even getting into how drastically simplified the elements of "AI" are compared to organic neurons), see, for example:

"Once thought to support neurons, astrocytes turn out to be in charge"

(https://www.quantamagazine.org/once-thought-to-support-neurons-astrocytes-turn-out-to-be-in-charge-20260130/)

Brian Merchant:

I read Rusty's essay, too (should have linked it in the edition!) and thought it was great. Thanks for sharing it here, cheers

Mary Wildfire:

I wonder a bit about the accuracy of the "the left" label. If we're winning, I think it's because plenty on the right have concerns about some of the same things: they care less about environmental issues, but they do care about electric bills going up, or potential water scarcity; they're horrified by chatbots that encourage teen suicide or offer sexualized content to kids; and I think they're sometimes offended by the way data centers are shoved into communities, or AI is shoved on them at work. They might not have minded this sort of undemocratic decision-making when the objections were all about the environment and what they saw as jobs. It seems to me that what's happening is the industry is trying desperately to get these things built ASAP, but the public is finding out about the downsides faster than they expected--we haven't all lost touch with reality yet!

Brian Merchant:

True, the industry is good at digging its own hole, but don't discount a lot of the work that's been done by environmental and community activists to articulate and quantify harms from data centers, unions to confront labor impacts, and left-liberal critics to help spread this message. Some may seem like common sense, but lots of hard work has gone into exposing the raft of harms enabled by Silicon Valley! I do take your point, that a lot of this is 'common sense' — and good if it seems so! Another sign the left/liberal position stands to benefit and advance, if it can organize.

Mary Wildfire:

If it can organize. If it can't, it's because we're reeling, trying to decide whether to fight data centers and unregulated AI, or the EPA legalizing dicamba, or the razing of forests and seafloors, or maybe organize to protect immigrants in our communities from ICE, or maybe try to stop this country from attacking Iran, or Venezuela, or Cuba, or get justice for the Epstein victims, or try to rescue the surviving Gazans... Throwing all this at us at once is intentional: they figure they can win if they keep the left half of the country in reaction mode, knowing most of the Democrats in Congress will do little about any of it, so they can win on whatever they care about, since it's only ordinary people fighting back.

Casey Mock:

Great as always, Brian, and much needed! While I think you’re correct re: the sentiment on the left broadly — and your point about the left needing to accommodate more complexities of political economy in their analysis is fantastic — there’s also not alignment between popular sentiment and most/all leaders in the Democratic Party. Many potential candidates for 2028 are hedging, for various reasons, one of which is they don’t want to face the ire of the flush-with-cash Silicon Valley PACs this year, and they hope that if they aren’t mean to Silicon Valley capital the way Biden was perceived to be, these donors will come back into the Democratic fold.

And what’s also interesting is that many of the political economy things you mentioned are also being taken up on the populist right. MAGA isn’t a monolith here, and many conservatives are also incensed about the anti-humanism of Silicon Valley CEOs, the implicit attack on the dignity of work, and exploitative businesses targeting children too.

Brian Merchant:

Yes — this is what I mean when I say the left should recognize and use its political capital. Lots of Dems are beholden to Silicon Valley interests, stuck in the Obama era in terms of thinking about tech, etc. Part of the project has to be to push them to recognize there's more political power in what you might call a tech populism, hard as that is, since it pushes beyond the consultant class etc.

Also yes — but the right will at best pay lip service to most of this, especially with regard to the dignity of work. I can see more GOP activity on child safety / censorship, which is already happening, but the right won't lift a finger for workers. There is thus perhaps a political opportunity in a populist approach, especially on the labor front as job automation talk heats up.

Brian Roach:

Also worth remembering that it's not just "the left" who are opposed to AI. In this month's cover story of The Atlantic, one of the most vocal and eloquent AI skeptics is Steve Bannon!! Yes, I died a little inside reading quotes from him and thinking "yeah, wow, he's spot on here," and maybe it's just the whole "a stopped clock is right twice a day" thing, but AI skepticism can really cross political lines!

Madame Patolungo:

Ross Douthat is another one who seems to become someone else entirely on the matter of AI. Compared to Ezra Klein he presents as the much more knowledgeable of the two.

Brian Roach:

Yea he seems to come from the Catholic social teaching/Pope Leo angle in his opposition to it! And he's not the only one!

Also, Ezra Klein is an idiot. In my humble opinion...

Madame Patolungo:

I try to be polite but I cannot dissent!

Madame Patolungo:

Brian, thanks as always for your unpacking of the diverse public intellectuals now working to push "the AI debate" in the direction of public interest. I think you are entirely right to emphasize POLITICAL ECONOMY and right as well to speak of "the left" in terms of a broad coalition: I like to think of it as a popular front. This view allows academics and others to debate among themselves about theory and practice (e.g., how AI systems are being developed and what the impact might be on, say, language or education, or creative cultures) while keeping up with the necessary political developments both near and far.

A few additional resources for your readers from your friends from _Critical AI_ (a journal edited at Rutgers but published by Duke UP).

Here's an article that should help non-technical people understand how LLMs developed and came to be recognizable as "stochastic parrots" (or, as we prefer, "probabilistic mimics," to be fair to real live parrots!).

https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-11205147/390862/Beyond-Chatbot-K-On-Large-Language-Models

Here's the more technical (but still quite readable) complement to the above introduction: it explains how LLMs derived from methods for machine transcription and why that matters; and it explains how reinforcement learning from human feedback is the principal technical means for implementing LLMs as personified chatbots, and why that matters. (This one is behind the paywall, but there is a PDF on the internet.) https://read.dukeupress.edu/critical-ai/article-abstract/doi/10.1215/2834703X-11256853/390860/The-Origins-of-Generative-AI-in-Transcription-and?redirectedFrom=fulltext

Here's our "living document" on Teaching Critical AI literacies (which also functions as a good "explainer" and includes many shoutouts to Brian and others) https://docs.google.com/document/d/1TAXqYGid8sQz8v1ngTLD1qZBx2rNKHeKn9mcfWbFzRQ/edit?tab=t.0

And here are two review essays, both now free to the public. The first will make clear why Bender and Hanna's book is not at all reducible to the 2021 formulation of "stochastic parrots" alone (even as OpenAI and Anthropic futurists seek to disparage that formulation). Our reviewer ties the argument to two different "left" positions: Fredric Jameson's, and Yanis Varoufakis's on "technofeudalism." https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-12096000/406211/Don-t-Believe-the-AI-Hype-Big-Tech-s-Failure-of

The second is a brilliantly written takedown of the AI first position on teaching:

https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-12095991/406208/A-Pedagogy-of-the-Inevitable

I also really like this short piece on "The Politics of Prompt Engineering"

https://read.dukeupress.edu/critical-ai/article/doi/10.1215/2834703X-12095946/406205/The-Politics-of-Prompt-Engineering

This is a lot to share, and all of us are already struggling to keep up with this technology: especially the push to make an "AI first" mentality into a necessary reflex for any activity (the very kind of hype that Bender and Hanna take down so effectively and that the essays above eviscerate).

Thanks very much Brian. You are indispensable to this work!

Brian Merchant:

Excellent stuff, thank you for sharing!

Godfrey Moase:

The biggest strike in Australia in recent years centred around algorithmic management in supermarket warehouses in late 2024. It was massively popular.

bluejay:

I've said for a couple of years that AI would be good at automating management/the C-suite. The risks are low, too.

Celine Nguyen:

I really enjoyed this newsletter, bc it actually challenged a lot of my own framings of the issue (I have moved in the last month from being very skeptical of AI being useful, to feeling very invigorated by the possibilities and a little sad/frustrated at how many humanists I know are not exploring how it can change their work).

After reading this, I do actually agree the left is effectively setting the stakes of the AI debate when it comes to job loss and wealth concentration (which I do feel is the key issue with AI tech rn)…and that the left “has actually accrued significant political capital around AI.”

I still, however, think that most left-leaning intellectuals/activists are separating the political/economic harms of AI from its technical capacities. And I do think that people are relying on somewhat older research for the latter. To take Emily Bender et al's stochastic parrots paper as an example…imo it was fundamentally right for the time and really, really useful (I've referenced it a lot); I also think that AI technology has changed a lot since it was published in 2021! Part of my frustration (which is also expressed in the newsletter about how the left should take AI more seriously) is that a 5-year-old paper is no longer usefully descriptive of AI's environmental impact or language capacities. It provides guidance for understanding the current moment, but it can't be where understanding stops. You could still reach the same ethical conclusions about AI (that it relies on a lot of human labor, from data-set labelling in the past to RLHF in recent years, and that this labor is under-recognized) by updating the paper's model of the world to what's actually happening today.

The point where I disagree with you more—I do think that a lot of left commentators (hard to discuss ‘the left’ in aggregate) are still making the mistake of seeing AI as useless, because they haven’t used the latest models and are working off of last year’s (imo) much less helpful ones. (Though maybe I’m reading the wrong people?) Many seem to be conflating the ‘is it useful’ and ‘is it threatening’ questions, and trying to claim that AI is NOT useful at all in order to make AI less threatening. (This is an argument Leif Weatherby hints at, in my reading—it’s not helpful to consistently focus on, here’s what AI still can’t do! when trying to figure out what the purpose of human labor is.)

But the question of whether AI is useful is a different debate than whether AI technology, within our current economic/political structures, is actively threatening to most people’s livelihoods (the answer seems obviously yes, and even tech that is incapable of automating away labor can still lead to job loss, if an exec THINKS it’s possible).

A serious left critique of AI does involve looking at ownership structures and power and proposing a different way for how these models can be built. But I also feel it should include some thoughts on open-source models; smaller/local/less environmentally expensive models; how the US government’s support of AI right now is, yes, a Trump project but is also caught up in fears of economic competitiveness relative to China (a Democratic admin would have to wrestle with this problem as well and could very plausibly lean towards direct subsidy, clearing the path towards more data centers…another reason the left should get organized).

But I also feel increasingly that more left critiques of AI could incorporate a Nick Srnicek–style optimism, or at least curiosity, about…can we use the tech for anything interesting?? Srnicek and Benanav are taking this on, as you’ve said, but I’ve personally felt that their ideas are passed around much less than more reactive “AI is dumb and hallucinates and can’t do anything useful” perspectives, even now. I still see a lot of people conflate AI with ChatGPT! But AI is more than what OpenAI is producing as a product…And there are some really interesting projects using AI for positive intellectual and social ends—I think some of the historical research/teaching projects in this newsletter (by a UCSC historian) get at that: https://resobscura.substack.com/p/what-is-happening-to-writing?triedRedirect=true

Brian Merchant:

Really appreciate this comment. I think the use question is a valid one. Something I maybe failed to articulate in the piece is that I do think there is this gulf in the left-liberal imagination of AI and its usefulness for a reason — I think we can be sympathetic to how hard it is, tactically and philosophically, to expend much intellectual energy on the question of 'what could AI be useful for?' when it's bearing down on your community, your livelihood, your world, in a specific formation. While I really like the Benanav and Weatherby contributions, it's hard not to see them as a bit utopian, given how politically disempowered a true left is (something I also at one point intended to mention in the piece, but you know how these things go), and how few options it can realistically offer those being deskilled right now. But I agree that's the challenge! How do we do all of the above: resist deskilling and automation in spheres we think it's important to protect, and guide AI to whatever productive uses it might confer? I tend to think the answer lies in advancing structures that truly empower workers themselves to make those calls, but that's another cascading set of challenges.

T.F. Johnson:

My... deep and very loud salt over the IP bootlicking on that article aside, I may as well give my two cents on the use of imagegen in art by lefty artist-types.

Like, while I don't approve of corpo use and we need to fight it whenever we can in that aspect, I also think the weird indie experiments are more valid (even if the spam/slop is a major infrastructural problem, I will admit; ditto for the potential effects on commissions, though that has... its own issues), and I think I can name some names doing good work.

The artist Redslug trained a LoRA on their own work, and it's pretty good, and if you don't believe in the whole "fruit of the poisoned tree" thing wrt copyright on the base program (WHICH YOU SHOULDN'T), it's a pretty cool thing that shows a lot of potential.

The artist Reachartwork/AICurio does a lot of work on the tech side of things with imagegen, using it in visual art to compensate for physical health issues. She uses a lot of inpainting in her work, running a locally hosted LoRA on her own machine, and she's done some neat projects like the Infinite Art Machine and an online art collective for leftist artists working with imagegen, Are We Art Yet. She's also gotten a lot of harassment over using it for her physical health issues :(

And also Trent Troop, who works under the name therobotmonster/deepdreamnights, and with video under the name Radio Free Ultramerica. He's a creative with a background in multimedia forms, everything from cartooning to video editing to tabletop game design to toy design to puppetry, but sadly a lot of health conditions make that work super difficult for him.

But he does work with imagegen/videogen, and it runs rings around most of what people working with AI produce. In terms of comics, he's advanced techniques he used with public domain comic art to make a fictional comic issue from the '70s, complete with manually re-inking and re-coloring panels and adding deliberate print errors to emulate the style of the time.

In terms of video, his theme song for a nonexistent ROM Spaceknight TV show from the '80s is great, as is "After-Cool Action Block," or his most recent one, "If The Goo's Alight."

Hell, he even made a LoRA specifically for adapting Jack Kirby's style directly to a photo-realistic style, out of frustration that no filmmaker has really done it, doing something really new with something old in a way that's super clever.

There's a few others, but those are the big ones that come to mind. Note they are all unified in that they're artists with pre-existing skills, using it to enhance their ability to work (centaurs vs. reverse-centaurs and all that), and they're all indie and not being forced to shart out an AI Universes Beyond set for the Hasbro CEO to get more cocaine money instead of letting them put the 3e SRD under the CC-BY license AS THEY PROMISED.

And while I get why people are just as hard on indie artists using it as they are on megacorps, for fear of "normalizing" it (which I think is dumb, but my reasons would take even longer), I just wish people would have more nuanced perspectives on small creatives using it instead of unleashing a moral panic...

...But I just mean that about small creatives, for megacorps, yeah, do not give them an inch, they will use it to screw over creators just like they did with CGI and Photoshop, KNOW YOUR HISTORY PEOPLE!

Madame Patolungo:

I'm replying both to Celine and Brian. I am coming at this from a different angle than Benanav (whose NLR essays I love) and Weatherby (I haven't read the linked piece but I know his recent book). I come at this as an educator and a researcher in critical AI studies. I work closely with technologist researchers both in academia and in industry (though not the Big 7).

Celine: there is a lot of work on open source models and smaller models, so I'm not sure who you are reading. The open stuff isn't necessarily a full-on embrace: e.g., https://www.nature.com/articles/s41586-024-08141-1

As to utility: there are two issues. First, what kind of AI are we talking about? Most "left" academics, including Bender and Hanna, know that there is plenty of use for well-scoped ML models, potentially trained on DL or even transformer architectures (e.g., AlphaFold).

The real issue is that most AI hype involves gen AI, because that's the kind driving the bubble, for the simple reason that the biggest tech companies are the ones with the data, compute, and access to venture capital to build it and its massive data center infrastructure. Matt Stoller (BIG substack) has estimated that there are about 10-20 individuals (men) now making the decisions about the trillion+ in investment in "AI." It's all about scaling chatbots and their potential agents, with the longer-term vision centering on reducing employment, or changing the organization of workers (including professional and creative workers) into an AI-mediated gig structure; or it's about potentially pivoting to a targeted-ad model familiar from surveillance capitalism. So that's what "AI" is for those directing a trillion-plus toward it.

There are of course other "AIs" that matter to people on the left among others: military uses; algorithmic pricing, algorithmic fraud detection, surveillance, etc. These are mainly not gen AI and they may have utility to someone but not in the broad public interest.

Second, what kind of utility are we talking about? Does, say, Claude Cowork help people code? Absolutely yes. But it mostly helps people who already know what they are doing. The same could be true for LLM-based systems fine-tuned to help, say, doctors diagnose. The problem is that these modest use cases--carefully developed systems designed to help experts who can readily distinguish a useful output from a confabulation--do not satisfy investor ambitions. So while we can imagine such tools, there are not many pathways to seeing them developed.

For example, I can say confidently, as an educator, that ChatGPT-style tools have very few benefits for students, and the list of documented harms grows every day! They haven't been designed by people who actually teach! The Brookings Institution recently came to the same conclusion--perhaps you saw their January 2026 report?

So what other use cases are we talking about?

Stephen S. Power:

The best way to understand the issue of AI is to replace it in any piece or statement with "Jesus" because the people pushing it are on an evangelical crusade and they simply can't understand why someone wouldn't believe in their peculiar god--and if these heathens continue to resist, if they will not submit, then they will eventually have to have this god forced on them for their own good.

T.F. Johnson:

...The point about "stealing IP" made me mad because, oh my god, I am so sick of that talking point, and I wish the people who, at their most fundamental level even before AI, are primarily motivated by their absolute terror of fanartists/fanfic/general unauthorized fanwork, and who want to make that everyone else's problem, would shut the hell up about it!

I care a lot about shrinking the legal construct that is "IP" (as opposed to copyright) to something significantly smaller than the bloated nightmare it is now, and I am so deeply angry that this issue set public acceptance of my views back by decades because the (mostly unauthorized!) realistic Pokemon guy had a hissy fit on Twitter and copyright ghouls hopped on to stoke the fire as everyone went insane!

The only good response from an IP perspective I've seen is Cory Doctorow's "we need to expand the 'imagegen is automatically PD' precedent." All other expansions (and yes, court cases count) would be bad: that sort of "fruit of the poisoned tree" doctrine would end up redefining fair use as what goes into a work rather than the end product, which would be like a nuke to so many important cultural forms. Even right now, the IP-ization of data is being used to block the Internet Archive from sources like Reddit and a bunch of major news sites!

Fair use is already turbofucked, it needs to be expanded, not made worse by the agitation of a bunch of wannabe Vivziepops and Andrew Hussies!

I am so deeply angry that RJ Palmer (A man who LITERALLY made his name on fanart) is single-handedly responsible for people going so insane about copyright because he had a hissy-fit on Twitter, and if I meet him I will have words! Mostly swear words!

The people pushing this IP narrative have literally been caught doing happy dances over the Internet Archive getting hosed, and one of them literally said the Internet Archive stuff was a "test run" for anti-imagegen lawsuits under the banner of copyright! Meaning that, given they consider hosing one of the most vital public institutions on the net a "test run," whatever ways they want to expand copyright will be vile at best. This is a blatant power grab by legacy media! These people are not our friends!

I find it hard to say much in response other than "Oh my god, shut up! Shuuuuuuuut! Uuuuuuuuuup!" because you uncritically parroting that talking point makes me so mad!

And this is me speaking as someone who frequently gets into arguments with the full-on copyright abolitionists about the viability problems there! I put pretty much all of my art into the Creative Commons because that is my deal! This is something that means a lot to me!

T.F. Johnson:

Like, I suppose my point is, it depends on what parts of "the left" you define as worth caring about if you say the left is "winning," given that us on the anti-IP left are currently getting turbofucked on this issue, in all of our holes!

Including the nose ones! Especially the nose ones!

Kevin McLeod:

Neither the Left, the Right, nor the middle has any idea what AI is, or how it is failing.

For that, you'll have to abandon politics, which is stuck behind the same threshold.

There's only a prehistoric animal relying on words, trapped at AI, and the next-gen Sapiens that views AI as the bottleneck that words hand us all.

https://eventperception.substack.com/p/language-the-case-against-ai

etheaded:

The Luddite position is not winning; it is denial, a rejection of reality, no different from anti-vaxxers or flat-earthers.

Madame Patolungo:

You are assuming that the Luddite position does not have a strong grasp of reality on which it predicates its politics. I think you are very much mistaken.