The struggle over AI in journalism is escalating
Politico journalists are taking on management for using AI products that risk damaging their reputation and replacing their work. Their fight is emblematic of an embattled industry.
Greetings all,
I’m back from a good week off, taken at a time when it was, let’s call it, “very much needed.” Thanks to everyone who sent notes in support of my decision to just go dark for a week. Next time, maybe I’ll schedule some evergreen stories to fill the void; I’ve seen seasoned newsletter writers take that approach, and it may deter people from unsubscribing, citing “low volume” as the reason, while I’m away from the keyboard for a few days. I’m still figuring all this stuff out, so thanks again to all you good people who are bearing with me.
This week, we’ll dive into a subject close to home: Working journalists’ escalating struggles with AI, through the lens of one major newsroom’s efforts to stop media executives from unleashing AI indiscriminately on its editorial operations. Plus, a new bill proposed in New York might actually have the teeth necessary to push back on executives trying to automate journalism.
Thanks as always to everyone who reads and supports this work, which I am contractually obligated to note is made possible entirely by paying subscribers. If you find it valuable, would like to help me keep some 90% of it free for the public, and are of course able to, please consider subscribing below. I’m exceedingly grateful to all who do. You make it possible to report out stories like today’s, which took many days of research and numerous interviews. Cheers and onwards.
Edited by Mike Pearl.
Last year, in the middle of one of the biggest news events in politics, journalists at Politico watched with surprise as a new feature appeared on the website. Amid the outlet’s coverage of the Democratic National Convention, at the top-right of the homepage—prime real estate on one of the nation’s most-read political outlets—a new AI product was generating quotes and summarizing the news.
“We were all surprised and confused,” says Ariel Wittenberg, a public health reporter at Politico’s E&E News and the unit chair of Politico and E&E News’s editorial union, the PEN Guild. Editorial staff were told only an hour before the AI product launched, and were given no opportunity to ask questions about how it would work, why it was there, or why it was being launched at that time.
The AI promptly generated a post that misspelled Kamala Harris’s mother’s name. The entry was taken down without comment or correction from an editor, in apparent violation of Politico’s editorial standards. Weeks later, Politico’s management deployed the AI tool again, this time in an even higher-profile setting: The vice presidential debate between JD Vance and Tim Walz. The feature again trampled editorial guidelines, this time transcribing verbatim Vance’s comments about “illegal immigrants”—a term that Politico writers are not allowed to use, and editors are not supposed to publish.
Furthermore, the digital real estate that the AI product was occupying at management’s behest would have otherwise been given to a human reporter’s story or blog entry. “It’s not a spot that’s normally left blank or filled with advertising,” Wittenberg says. “Meaning if not for the AI, it would have been human-driven reporting in that spot.”
Undeterred, this year Politico’s management launched another AI product: an AI-powered “report builder” for premium Politico PRO subscribers. The product is supposed to draw from Politico’s archives of reported material to automatically generate bespoke reports, the sort of task previously carried out by human writers with domain or institutional expertise. It’s also, as staffers who tested the tool have found, error-prone: it produces reports that name organizations that don’t exist and contain myriad factual inaccuracies.
“It’s wholly behind the paywall, but when we have asked it things, it’s giving us back some pretty glaring errors,” Wittenberg says. “I asked it about ‘The Impact of President Biden’s Oil Policies,’ and it wrote me a whole page-and-a-half thing, and every single policy it mentioned was a policy of Trump’s. And it cited real stories at the very bottom, from our members, the implication being that if someone is reading this, and it’s erroneous, not only does our AI not know the difference between Biden’s policies and Trump’s, but maybe the authors of the cited articles didn’t, either.”
Wittenberg and her colleagues’ surprise and confusion over the sudden implementation of AI was twofold: it was not just that Politico’s management had deployed the product so abruptly, but that its appearance was in direct conflict with the editorial union’s contract.
In early 2024, after nearly two years of negotiations, the PEN Guild ratified its first collective bargaining agreement. And that contract contained some very straightforward guidelines about the use of AI. First, if Politico’s management wants to introduce new AI products that “materially and substantively” impact journalists’ job duties, it must give 60 days’ notice. For the AI summarizer, Politico’s bosses gave editorial staff 60 minutes instead. Second, the contract stipulates that any use of AI must be consistent with editorial standards; as seen above, those standards were quickly violated.
“To have AI generation on the front page during the biggest political events, not just of the year, but of the last four years, to choose that as the moment that you’re going to cede our space and our expertise to AI, and to have the AI not even follow our standards?” Wittenberg says. “It adds insult to injury.”
On July 11th, the PEN Guild took Politico’s management to arbitration, asserting that it had violated the AI provisions in the union contract management agreed to just last year. Guild members say they’re not against AI; they want the technology used in line with Politico’s editorial values, and in a way that benefits their work instead of harming it. They began circulating a petition titled “Tell POLITICO Management: AI should work with journalists, not against us!”
Less than a week later, Mathias Döpfner, the CEO of Axel Springer, the parent company of Politico, Business Insider, and many other media outlets—it’s Europe’s largest publisher—issued a mandate for every one of his employees to use AI. In a long, impassioned speech, the executive explained that it was now an expectation, not a suggestion, that every journalist at Politico, E&E News, Business Insider, and his other media properties use AI in their work.
“Nobody in the company has to explain in the company why she or he is using AI to do something—whether to prepare a presentation or analyze a document,” he said, according to media reporter Oliver Darcy. “You only have to explain if you didn’t use AI. That’s really something you have to explain because that shouldn’t happen.” Döpfner added that, “We would never say this article was made with the help of AI. We always use all sources of information, and in the future more and more AI.”
This divide is only widening: on one side are media executives, enamored with tech companies’ promises of new efficiencies and labor savings and eager to embrace AI; on the other, the journalists who work for them, who must abide by their directives and are tasked with using the often unreliable automation software on the ground, often in high-stakes situations. While media executives make headlines with sweeping declarations about the AI future, frustration, anger, and tension among rank-and-file journalists are rising.
After all, journalism is a field where both speed and accuracy are paramount, and AI tools, with their well-known reliability and ethical issues, complicate the equation even at their best. Trust in the industry has already eroded, and fears like those articulated by the PEN Guild, that a rush to embrace AI tools will lead to sloppier, mistake-riddled output that further corrodes journalists’ reputations, and risks replacing journalists in the process, are spreading. It does not help matters that journalism is also a field in existential peril: newsrooms are shrinking, primarily thanks to the tech giants that have taken control of distribution infrastructure and eaten into advertising revenue, and AI is the latest technology in which executives have sought salvation.
Much of media’s executive and managerial class remains bullish on AI, despite stumbles, shortcomings, and even deeply embarrassing, high-profile scandals. That AI has both-sides’d the KKK at the LA Times, generated summer reading lists full of made-up books for the Chicago Sun-Times, and led to comically bottom-of-the-barrel content at Sports Illustrated seems to have dissuaded few executives from continuing to sing the technology’s praises.
Many media companies, if not most at this point, have already inked deals or formed partnerships with AI companies, trading access to archives of written, AI-trainable material for cash and access to AI products. The executives who signed those deals are no doubt looking to get their money’s worth. Axel Springer was one of the first to ink such a deal with OpenAI, back in 2023; now it’s one of the most bullish on AI in journalism. Jim VandeHei, the CEO of Axios, which has a partnership with OpenAI, is another loud AI bull, recently putting it this way: “we're betting [AI] approximates the hype in the next 18 months to three years. And so are most CEOs.”
Axios, it might be noted, laid off 10% of its staff in 2024, in its first major round of job cuts since the company’s founding. In May 2025, Business Insider laid off 20% of its staff, and its CEO declared a pivot to AI and the events business in the same announcement.
Not all media executives are as openly aggressive in their embrace of AI, but even the leaders of more conservative legacy outlets like the Atlantic (which inked a “strategic product and content partnership” with OpenAI in 2024) and the New York Times (which is suing OpenAI for copyright violation, but has partnered with Amazon on an AI licensing deal) advocate for journalists to use AI.
Meanwhile, it’s no exaggeration to say that AI companies are destroying the conditions for online journalism. The biggest culprit is Google’s AI Overviews, which have dramatically reduced the amount of traffic the search giant sends to publishers, by as much as 70%, according to web infrastructure company Cloudflare, by serving users AI-generated answers at the top of results so they rarely click through to the underlying sources. But ChatGPT, Claude, Perplexity, and other AI chatbots, which are supplanting search engine use with query results that also often yield few to no links, are responsible too.
In a widely read story, “The Media's Pivot to AI Is Not Real and Not Going to Work,” published at the independent (and notably AI-product-free) 404 Media, co-founder Jason Koebler writes:
AI is destroying traffic, ripping off our work, creating slop that destroys discoverability and further undermines trust, and allows random people to create news-shaped objects that social media and search algorithms either can’t or don’t care to distinguish from real news. And yet media executives have decided that the only way to compete with this is to make their workers use AI to make content in a slightly more efficient way than they were already doing journalism.
That is, if it’s even slightly more efficient at all. How AI is supposed to supercharge journalists’ work remains frustratingly opaque to many, especially those directed to use it every day. There are no doubt use cases, like deploying AI for transcription, summarizing a document, or taking a first pass at proofreading, and data journalists used AI tools long before the LLM boom. But many journalists are finding that the supposed efficiencies of such uses are often offset by new tasks: rechecking transcriptions for accuracy, vetting AI-generated text for hallucinations, and editing the output for clarity.
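To make that tradeoff concrete, here’s a minimal sketch of the transcription use case with the verification step counted in, written in Python against the open-source Whisper model. The file name, model size, and confidence threshold are all illustrative assumptions, not any newsroom’s actual workflow:

```python
# A rough sketch of the AI transcription workflow described above: the model
# takes a first pass, and a human still rechecks the shaky parts.
# Assumes the open-source `openai-whisper` package (pip install openai-whisper);
# the audio file and the -1.0 threshold are illustrative choices.
import whisper

model = whisper.load_model("base")          # small model: fast, but error-prone
result = model.transcribe("interview.wav")  # hypothetical interview audio

# Whisper reports an avg_logprob per segment; very low values tend to mean
# the transcription is unreliable and a human should go back to the tape.
needs_review = [
    (seg["start"], seg["end"], seg["text"])
    for seg in result["segments"]
    if seg["avg_logprob"] < -1.0
]

print(f"{len(needs_review)} of {len(result['segments'])} segments flagged for review")
for start, end, text in needs_review:
    print(f"[{start:6.1f}s - {end:6.1f}s]{text}")
```

The flagging step is the paragraph above in miniature: whatever time the model saves on the first pass partly comes back as human review time.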
“My general sense here is that ‘let's use AI for stuff!’ is something bosses love saying but then the rank and file workers can't actually figure out good uses for,” one journalist who works for an Axel Springer outlet told me at the time of the Business Insider layoffs. “Which I suspect is happening in lots of industries. Bosses panicking that if they don't lean into AI they'll be left behind, but AI not actually being particular useful for workers.”
The staffer adds, “And I think because Axel Springer made this licensing deal with OpenAI there's extra pressure to be using it.”

So how are working journalists and advocates of a healthy free press dealing with those executive pressures, and the chaos sown by AI interests? Organizing. It might not garner as many headlines as executive statements, but there is a lot of pioneering work being done to confront and constrain AI overreach in two key arenas: Organized labor and legislation.
The PEN Guild’s contract, for instance, contains strong language that limits the ways management can use AI, and seeks to ensure that AI will benefit journalists and not just executives (if that contract is honored, of course). PEN isn’t alone, though its contract is one of the most powerful. The NewsGuild counts more than three dozen newsroom union contracts that now include AI provisions. As the Guild puts it, “There are some gold standard examples that cover three priorities: protection of bargaining unit work, clearly defining the scope of AI and requiring interaction and oversight by bargaining unit employees to create work products.”
In other words, 1) stopping media executives from “replacing” journalists with AI, 2) ensuring journalists and management agree on how and how much AI is used in a newsroom, and 3) ensuring journalists have input and oversight into any AI products introduced into a newsroom.
As the Guild notes, organized newsrooms are fighting for, and winning, real-world protections that limit how management can use AI:
The New Republic won language that says generative AI “may be used by bargaining unit employees as a complementary tool in editorial work, but it may not be used as a primary tool for creation of such.” And further, it states that AI shall not result in layoffs, to fill vacant positions or result in reducing pay for Guild-represented workers. Other contracts have similar language that sets these clear lines that the employers may not cross when introducing or expanding the use of AI in the workplace. While some contracts may not completely prohibit AI from reducing or eliminating bargaining unit work, they provide for transfers to other roles with appropriate training and enhanced severance for employees who do not continue employment.
And now, there’s an effort to codify such protections into law. This July, New York state legislators proposed a bill that would require news media employers to “fully disclose to workers when and how any generative AI tool is used in the workplace as it relates to the creation of content.” The bill would also require that AI-generated output be labeled for consumers; mandate notice, consent, and human oversight of AI tools; and restrict the use of AI to replace journalists or their work. New York, of course, is the nerve center of the American media industry, so the impacts of such a bill could be profound. (It’s also worth noting that this is exactly the kind of bill that Meta, Google, OpenAI, and their allies in the GOP were hoping to ban with their AI law moratorium amendment, which only narrowly failed to make it into the recent budget bill.)
“This bill makes sure that workers are not simply informed when employers want to introduce a new technology, but have a real say in how it will be used—and the power to say no,” says Mishal Khan, a senior researcher at the UC Berkeley Labor Center. “This bill provides a crucial example of unions leading the way in crafting ambitious public policy that benefits all workers in the industry. It also offers a template for regulation in other sectors where new technologies continue to be rolled out with almost no guardrails.”
Matt Pearce, the former president of the Media Guild of the West, ex-LA Times staffer, and media critic, says that the workplace disclosures of AI use proposed in the bill would probably be “happily welcomed” by journalists, and thinks the requirement of human review of AI-generated materials is a good principle, though he worries about First Amendment issues with implementation. He also points to the bill’s provision requiring publishers to negotiate with their journalists over AI training on those journalists’ output as an interesting idea:

One of the big discussions in the publisher world is over how to get fair compensation (or any compensation) from AI developers that are scraping everybody's websites like crazy. But I don't hear any conversation by publishers about sharing that value back with the journalists producing the work. When publishers strike licensing deals with AI developers, they tend to be cagey with their own newsrooms about the full contents of those deals. This is why God invented collective bargaining and the work stoppage.
Pearce also lauds the principle of consumer disclosure as a good one. “Consumers hate the idea of undisclosed AI use in their news,” he says, pointing to a survey from Trusting News that finds that 93.8% of news consumers want AI use to be disclosed by the publisher. More than half want to know the details of which tools were used, and how, too.
The public is skeptical of AI, not just in news but in general. Many journalists are dubious of its effectiveness, or are very much still experimenting with it. And AI companies are eagerly eating into digital publishers’ market share. All of that should fuel efforts to temper Silicon Valley’s, and enthusiastic media executives’, ambitions for mass AI saturation.
The road to seriously reining in AI is steep, of course. Journalists and news workers must confront not only a tech industry willing to lobby for banning state laws around its technology altogether, but also Silicon Valley-friendly government figures like Gavin Newsom, who can kill good bills designed to strengthen local news and protect journalists from Silicon Valley with the mere threat of a veto, as he did last year. But organizing, along with good state laws, remains the best bulwark against an AI-enthusiastic elite, and the best hope of preventing a full collapse of American journalism.
“Workers deserve to have a say on anything that can impact their livelihood or working conditions,” says Ariel Wittenberg, the reporter and PEN Guild unit chair. “AI is no different.”
Wittenberg says Politico and E&E News’s journalists are proud that their contract has AI protections that both safeguard their jobs and working conditions and ensure that Politico continues to follow its own ethical standards when it launches new technologies. “It's disappointing that management has violated this agreement,” she says, “but having a contract allows us to stick up for our members, our standards, and our readers.”
Media executives might want to remember that; most readers prefer outlets that center human writing and reporting.
“The only journalism business strategy that works, and that will ever work long term is if you create something of value that people (human beings, not bots) want to read or watch or listen to, and that they can’t get elsewhere,” 404 Media co-founder Jason Koebler says. “I think it isn’t smart to align ourselves with companies developing technologies they want to replace us [with]. So at 404 Media, we haven’t.”
Wittenberg concurs. “AI that truly supplements our work can be really helpful,” she says. “But it shouldn’t supplant us, or our standards.”
That’s about it for this week. Here, one more time, is the PEN Guild’s petition to hold Politico accountable for its AI use, if you want to sign on to show support.
Now that I’m back and firing on all cylinders, I’ll return to a more frequent publishing schedule going forward. Thanks as always, and hammers up.