The tide is turning against OpenAI
Opposition from workers and creators is beginning to hit the infamous generative AI company hard
In the year and a half since OpenAI burst onto the scene, it’s done a speedrun of the typical tech unicorn arc: It shot onto the main stage with the hit product ChatGPT, won CEO Sam Altman a world-beating press tour, attracted the interest and adulation of the media, then new business partners and investors, then an array of critics and lawsuits, then saw its boardroom drama become international news, and now, rounding out, in, oh, 18 months, a cycle that used to take many years — we have arrived, firmly, at The Backlash.
Not that there hasn’t been ample opposition to, or criticism of, the company’s designs until now, far from it. But there are signs that the ire is beginning to overwhelm the hype. Recent reports show public trust in AI is declining sharply. And OpenAI just had a week that was bad enough that we may yet look back on it as a pivot point — when the tide began to turn against the biggest and most prominent generative AI company in Silicon Valley.
Perhaps the biggest indicator came when OpenAI chief technology officer Mira Murati sat down with the Wall Street Journal tech columnist Joanna Stern, for an interview that, let’s say, did not go well. When asked what sort of data OpenAI had used to train its much-ballyhooed new video generator Sora, Murati balked. We used “publicly available and licensed data,” she said — so, pretty much… everything on the internet that hadn’t been nailed down or locked in a bunker?
“So, videos on YouTube?” Stern pressed her.
“I’m actually not sure about that,” Murati said.
Now, the CTO of a major tech company claiming not to know whether it had tapped the world’s largest repository of online videos to train its online video generator instantly struck most observers as fishy, to say the least. It meant one of only two things: Either Murati was lying, to avoid saying something that might incriminate OpenAI — the legality of ingesting various kinds of training data is, as you may know, hotly contested at the moment — or she was an incompetent and clueless CTO. The smart money is on the former.
There were two remarkable things about this: First, the mere fact that the CTO of a ~$90 billion tech company would be so wildly unprepared for such a basic question in an interview with the business paper of record. It suggests that OpenAI and its executives have become so used to kid-gloves treatment by the mainstream press that Murati, a chief technology officer, was sincerely not expecting to be pressed on the matter of how OpenAI’s technology actually worked. She and Altman are more accustomed to fielding abstract questions about the Power and Threat of AI and whether their product will transform the world than to harder, more basic questions about their business, and whether, say, they may have used data from non-consenting or copyright-protected sources.
The second is the reaction. The clip went viral, and then became a story in itself. This is largely because concerns about the ethics of training models on “publicly available data” — i.e., any works visible online produced by artists, writers, editors, and creators — without asking or compensating them, in order to automate the generation of derivative works for profit, have become central, and now utterly mainstream, to the AI debate.
It’s why Stern asked the question in the first place: After months of organizing, lawsuits, and protests, workers and advocates have made it an issue the companies can no longer dismiss. Who gets to use — and to commoditize — the art, text, and code created by others is now arguably the single biggest ethical question around AI, not, as OpenAI would prefer, science fictional musings about whether it will turn into Skynet.
To wit: Days before the interview dropped, OpenAI staked out a presence at SXSW, the influential and once-hip tech conference held annually in Austin, Texas. Before a premiere of a new Ryan Gosling movie, the organizers showed a video montage of SXSW speakers extolling the virtues of AI, exhorting people and businesses to leverage its power, to embrace generative AI, and not to resist it. Reader, the crowd booed. (OpenAI’s Peter Deng featured prominently in the video, saying at one point, "I actually fundamentally believe that AI makes us more human," for which he was, yes, booed.)
Note that this was not a crowd gathered at an art exhibition, or an illustrators’ trade show, or a comic book convention — this was an audience at one of the nation’s most prominent and influential tech conferences. For the buzziest tech of the moment to get shouted down at *SXSW* speaks volumes about the scale and nature of the animosity generative AI has amassed. The tech is seen, here, as exploitative by tastemakers and *by technologists*.
That animosity has been long percolating: The successful writers’ strike, which proved surprisingly popular with Americans, centered corporate abuse of generative AI as a key issue, while artists have filed lawsuits alleging copyright infringement and penned open letters demanding editorial operations ban the use of AI. The Authors Guild has filed a lawsuit, too, and called on the AI companies to compensate writers whose books have been used to train the tech. By last fall, a majority of Americans thought that creators should be paid if AI models were trained on their work — only 10% did not — according to a survey conducted by the National Opinion Research Center at the University of Chicago, and trust only appears to have plunged since then.
This week showed that, thanks to these increasingly organized cumulative efforts, resistance to AI has reached a fever pitch — it is robust, far-reaching, and impeding OpenAI’s ability to continue successfully generating a mythology about itself. The likes of Amazon, Facebook, even Uber enjoyed years of good vibes and flattering press coverage before those vibes began to curdle, or at least grow complicated, in the court of public opinion — in its speedrun, OpenAI may have already hit that milestone.
The public has been better prepared to regard tech critically, for one thing (which is why, perhaps, OpenAI and the generative AI cohort felt they had to amp the hype dial up to 11 in the first place, to try to brute force a skeptical public into beholding the wonder and power of technical systems that can generate images and texts that resemble those made by humans, just slightly weirder and usually of lower quality). That public has seen, and to some degree internalized, what happens when we fail to contest big tech’s power — we get authoritarian regimes using social media for deadly propaganda, gig workers forced to sleep in their cars because they can’t make rent, Amazon workers bruised and battered and urinating in bottles to sate relentless productivity demands. We get steamrolled. Which may be why, last fall, 79% of Americans said they did not trust corporations to use AI responsibly, per a Gallup poll.
So, those who are worried that generative AI systems amount to yet another means of degrading labor, of transferring power from workers and creators to a Silicon Valley firm owned and obliquely operated by billionaires, and who are fighting to turn the tide, might take some satisfaction in these shifting sands. It will continue to be an uphill battle, to put it lightly, and OpenAI now has a war chest of capital, political influence, and stature in Silicon Valley to rival all but the most established giants. (Incidentally, OpenAI announced its new board last week, which includes, once again, Sam Altman, who was briefly ousted due to concerns about his trustworthiness, as well as the CEO of the gig company Instacart, a veteran lawyer who represented both Oliver North in the Iran-Contra affair and Bill Clinton in his impeachment hearings, the former head of the Bill and Melinda Gates Foundation and Pfizer board member, and Larry Summers, the former US Treasury Secretary and president of Harvard.)
But artists, creators, writers, critical academics, organized labor, plaintiffs in much-scrutinized lawsuits against the company, and groups like the Freelance Solidarity Project and the Center for Artistic Inquiry and Reporting have all played a role in transforming the terms of the ongoing AI project. They’ve demanded a seat at the table, and in many ways — mostly informal ones, so far — they’ve begun to win it.
It’s too early to tell, but OpenAI’s best days — or at least its easiest — may already be behind it.
[Some BLOOD IN THE MACHINE updates and *personal news* as they say.]
This was, somewhat to my surprise and delight, a big week for Blood in the Machine: It was selected to be the BBC’s Book of the Week, and an abridged audio edition has been airing on the network. It’s available online and as a podcast, so you can check that out — I haven’t listened yet, but I hear it’s well done!
BLOOD was also chosen as the New Books Network’s Book of the Day last Tuesday, and I recorded a nice and lively interview with Michael G. Vann.
The book was also featured in a couple of nice posts on Kottke.org, a blog I have been reading off and on for over a decade — loved seeing it shouted out there, especially in this post, in which Edith Zimmerman made this wonderful illustration of the cover:
Finally, tech and culture blog mainstay Boing Boing *also* shouted out the book last week. Love to see it. I don’t know what was in the water last week, but I’ll take it.
The week before that, or maybe the one before, who can keep track, I dropped by Factually, the Adam Conover show, and we had a great chat:
Finally, I am pleased to share that I am officially the journalist in residence at the AI Now Institute, and will be working on an investigative project looking at the business of generative AI — more on that soon. Suffice to say, I am thrilled to roll up my sleeves here.
That’s all for now, I suppose. I finished this post on my phone since Substack crashed and I had to run an errand, so chalk any errors or malformed thoughts up to that and that alone. Cheers! And remember: No General but Ludd means the poor any good.
I'll repeat here what I said regarding the Murati clip on LinkedIn. A trick I was taught many, many moons ago by an experienced CI officer: when you run the clip, hold your hand on the screen so that all you see are her eyes. Try it, and make up your own mind about her degree of candor.
Great writeup - similar to what Gerben said, from what I can tell it doesn't totally feel like malicious hype-train buildup to me (not that I'm in the trenches and can say for sure) - at least not at base concept - but it certainly seems like a way to capitalize on the current wave of tech hype.
The anecdote I heard was that VCs had a vested interest in starting a new hype wave in that post-covid dip where nobody was spending money, which caused a problem for the VC firms with their own investors that want to see returns.
VCs were supposedly just sitting on this money because they didn't know if there was going to be a recession, and this technology seemed to come out at the same time that they needed something to pump. Not sure if this matches up with anything you've heard; you'd probably have an easier time finding out than anyone like me would.
Additionally, I've heard that the outcome of these copyright lawsuits might not be decided for several years. Heard anything similar to that effect?