What's really behind Elon Musk and DOGE's AI schemes
DOGE is feeding workers' emails into an AI to decide if they should keep their jobs. It's a cruel joke in the service of something much worse.
The “What did you do last week?” emails were bad enough. Sent out at the behest of Elon Musk’s DOGE team, the terse notes exhorted two million government workers to justify their jobs with a list of five “bullets” of things they accomplished or, per Musk, face termination. The federal workers I spoke to found the note either anxiety-producing or infuriating, and the agency-to-agency confusion over how or whether to respond only heightened the intensity. Then, on Monday, NBC broke the news that DOGE planned on feeding the responses into a large language model, using AI to determine whose jobs were “mission critical.”
The whole spectacle is an insulting and dehumanizing display of power, lazily coated in the aesthetics of AI and technological progress—in other words, it’s government by Grok. Because look, of course it’s ridiculous. There is no AI system in existence that can analyze a single email and credibly determine whether someone’s job is necessary or not. It’s a fiction, and one that, given just a little bit of scrutiny, collapses under the weight of reality.
“The inescapable reality of any human organization is that everyone is doing unique stuff in response to unique circumstances,” the computer scientist and anthropologist Ali Alkhatib says. As such, “you can't systematically evaluate people, or their work, like this. I'm not saying you can't evaluate people at all, but you certainly can't throw everyone's answers to this nonsensical question ‘what did you do last week’ into an algorithmic woodchipper and meaningfully compare their answers against each other.”

But, stupid or not, it’s a powerful fiction. It joins the echelon of other AI projects helmed by Musk and his cohort, like the “AI-first strategy” DOGE is implementing, the government chatbots they’re building, and the systems designed to automatically remove pronouns and DEI verbiage from government websites. The very idea that DOGE’s AI can streamline and automate the government is already being used to justify the hollowing out and reshaping of the federal workforce. Leaning into the reputation of generative AI, which has been touted as the so-powerful-it’s-terrifying future by Silicon Valley and the media, and into his meme-agency’s mission of locating efficiencies, Musk has sold his operation as the future, and he has done so emphatically enough that the GOP is more than happy to run with the charade.
After all, the “AI systems” bit gives the DOGE enterprise plausible deniability. Fury is mounting over the mass firings even in red districts, where voters are railing against GOP politicians at town halls. And the broader fantasy of autonomous DOGE AI systems, the most recent included, can be seen as a means to justify the cuts while obfuscating or deflecting blame from Musk or the Trump administration.
Which is why, despite the laziness and stupidity of these projects, I do think it’s crucial that we understand *why* Musk and DOGE are going on about AI-first strategies, building agency-specific AI systems, and promising to use AI to decide who gets to keep their job and who doesn’t. The question isn’t whether these systems are totally unequipped to do the work DOGE claims they can do, and thus whether it’s a dumb idea to use AI for government. It’s why, given that both those things are true, Musk and DOGE want to use them anyway.
A brief reminder that this work is made possible entirely by paying supporters—a big thanks to all of you—and if you find this reporting and analysis valuable, you too can become a paid subscriber for $6 a month. That’s less than the cost of, say, a ballpark hot dog. You can read more about my mission here—help me take Musk, DOGE, and the rising tech oligarchy to task. Thanks again, and hammers up.
Here’s how the DOGE AI rhetorical machinery works in practice: Elon Musk or his team make splashy announcements or proclamations on X (The Everything App) or directly to the government agencies they’re intent on downsizing. The GOP leadership absorbs these talking points in meetings with Musk, or through his X feed, which is by now required reading for every working Republican party member, factotum, and aspirant.
A good recent example: Mike Johnson, the Republican Speaker of the House, just said on C-SPAN that Musk’s technology was discovering programs that were “hidden” within the bureaucracy, rooting out waste with cutting-edge acumen. "Elon's cracked the code,” Johnson said. “He's now inside these agencies. He's created these algorithms that are constantly crawling through the data and as he told me in his office, data doesn't lie. We're gonna be able to get the information. We're gonna be able to transform the way federal government works at the end of this, and that is a very exciting prospect. It’s truly a revolutionary moment for the nation."
Johnson’s monologue is, of course, also a fiction. He’s one of those rare guys who seems to actively relish performing the fast-talking, obviously duplicitous caricature of a TV politician. He well knows that the records of all the “waste” DOGE is uncovering were perfectly visible in widely accessible documentation that every department is required to publish. The GOP was able to get the information at literally any time; but, like DOGE’s sham AI job evaluator, the point isn’t functionality; it’s the fiction of apolitical automated machinery, at work on behalf of the people. In reality, it’s a pretense-generator. If “data doesn’t lie,” then the system can be made to fire whoever you want.
“There’s been a long historical pattern in many industries of using new technologies not just to automate tasks but to simultaneously insulate management from having to take responsibility for their decisions, particularly their anti-labor ones,” Mar Hicks, a historian of technology at the School of Data Science at the University of Virginia, tells me.
Hicks continues:
“In fact, there are examples where automation has been brought in specifically for this purpose even when it’s clear that automation isn’t working as expected or intended, or isn’t fit for the purpose. But by the time workers and ordinary people have fought it out and gotten the faulty systems out of the way, it’s too late: the damage has been done—or perhaps more accurately the systems have provided cover for management in exactly the way intended.”
And look at the timing here. Musk and DOGE already successfully fired some 20,000 federal workers—the bulk of whom were still in their one-year probationary periods, and thus easier to legally fire—and offered buyouts to 75,000 more. Now the cuts are not only protested by constituents, but less straightforwardly legal; mass layoffs must be carried out under a reduction in force (RIF), and other firings must be performance-based. So it would be handy if someone intent on firing tens of thousands of people had a system that could carry out performance reviews with the push of a button, and which might lend the illusory appearance of justifying them.
It is illusory, too—that point could not be more crucial.
“These kinds of systems only barely work when everyone involved really deeply appreciates how carefully they need to tread when they make decisions based on the output of this system,” Alkhatib says. And care, of course, is the furthest thing from the equation right now; obviously we can expect no meticulous, participatory collection and examination of data in any DOGE office.
If they do persist in using the AI, Alkhatib says, “these systems will still spit out crap, and if you're determined enough to soldier through with all the chaos it produces, you will churn out workers and churn in new employees who know how to write Mr. Beast-style summaries of their week. And to a half-awake middle manager like Musk, the system will seem to be working.”
The point is not to build a system that leans on institutional knowledge to create a futuristic, more efficient automated government. The point is to use the very notion of AI and automation as instruments of disruption, and of consolidating control.
Meanwhile, Hicks points out that tech CEOs and AI advocates have repeatedly underlined AI’s capacity to automate work and output in a “morally autonomous” way that nonetheless is designed to profit them directly:
Even though what we’re seeing right now with AI and rogue billionaires influencing the federal government is an extreme example, it’s not without precedent. Most of the “thought leaders” and investors in the generative AI and “AGI” space have been pretty up front about how they see these technologies as morally autonomous in ways that insulate the people who deploy them from critique and liability, despite being the people deploying and profiting off the systems.
Hicks notes that, “We saw this with how they used (and arguably stole or misappropriated) the copyrighted texts used to train the models, and we’re also seeing it with the deployment as well.”
We need to think long and hard about the role that Silicon Valley and the generative AI boom it’s encouraged have played in leading us to this moment—where the Speaker of the US House of Representatives is endorsing a tech billionaire’s phony technologies as a means of auto-gutting the government, and AI is touted at every level of the DOGE operation, which still has the full backing of President Trump.
We’re now in year three of the tech sector’s full-court press touting generative AI as the vessel that will deliver us Artificial General Intelligence, which will be able to automate every manner of job. And it’s entirely feasible that Musk and co. would have been able to invade the federal government, promising techno-fantasies of automation and efficiency, without the broader social imaginary of a looming hyper-powerful AI; without generative AI being on the tips of everyone’s tongues. But the AI hype, from which DOGE borrows its logic and ideas, has made Musk’s fantasies seem all the more plausible and eroded potential resistance. It’s made Musk’s campaign to gut the government a lot easier. In order to halt his campaign of governmental destruction, we have to dismantle the idea that AI can or should replace government workers’ jobs.
Thanks for reading, and if you’re a paid supporter, for making this writing possible. Very much appreciate it. I also wanted to quickly note a couple of forthcoming events. If you’re in the Bay Area tomorrow, Wednesday the 26th, I’ll be at Stanford, moderating what is sure to be a fantastic talk, The AI We Deserve, with Evgeny Morozov, Audrey Tang, and Terry Winograd.
I’ll also be heading to SXSW the following week, where:
On March 7th, I’ll be interviewing the great crypto critic Molly White in a featured event.
On March 8th, I’ll be at the Light House, talking about AI, labor and the future of work with the Tech We Want.
On March 10th, I’ll be joining the venerable 404 Media crew to chat about AI slop and independent media.
Hope to see you there, cheers — and hammers up.