This is the gentle singularity?
Sam Altman says the AI utopia is already here — in a manifesto aimed at raising money from the Saudis. On the real vs imagined manifestations of modern AI.
When Sam Altman published his latest blog post “A Gentle Singularity”, my first thought was, ‘ok so how much is OpenAI trying to fundraise this time?’ It was a half-assed joke to myself, not initially intended for public consumption, an occupational hazard of spending too much time observing the AI industry. See, Altman has a habit of making grandiose statements about the transformative power of his company’s technology (which he knows will be picked up by the tech media) whenever there is an express financial incentive for him to do so.
It’s a pattern stretching back years, one I’ve documented at length before. When OpenAI needs an infusion of cash, or wants to seal a deal, out come the promises of AGI. Just last February, Altman published “Three Observations,” the final of which was “the socioeconomic value of linearly increasing intelligence is super-exponential in nature.”1 That turned out to be a rather direct entreaty to Softbank, which was at that very time considering leading an enormous investment round in OpenAI, to pull the trigger. It was ultimately a successful one, too: Softbank inked a deal promising to deliver $40 billion for the AI company. But that was just a few months ago. Altman couldn’t be going back to the well so soon, so transparently, could he?
A quick message: BLOOD IN THE MACHINE is 100% reader-supported and made possible by my incomparable paying subscribers. I’m able to keep the vast majority of my work free to read and open to all thanks to that support. If you can, for the cost of a coffee a month, consider helping me keep this thing running. Thanks everyone. Onwards.
Of course he could. Proving that you can rarely be too cynical when considering the motives of OpenAI executives, it was soon revealed that the instincts behind my personal in-joke were correct, and Altman was already indeed gunning for more billions from investors. Just two days after Altman published “A Gentle Singularity,” the Information reported that OpenAI had entered into talks with Saudi Arabia’s Public Investment Fund and the United Arab Emirates’ MGX fund to help fill out OpenAI’s next funding round. There is always, you see, a next funding round to be filled out.
The gist of Altman’s latest mini-manifesto is that, whether you can feel the AGI or not, we are already in the midst of the early stages of inexorable, AI-made utopia, thanks to OpenAI’s software products. “We are past the event horizon; the takeoff has started,” Altman writes. “Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.”
To the extent that Altman’s manifesto is “about” anything other than showing the Saudis and Emiratis that he can generate headlines in the press about his company at will, it’s a rejoinder to his chief competitor, Anthropic CEO and former OpenAI executive Dario Amodei, who made his own splash in the news cycle a couple weeks back when he warned that AI would soon be taking millions of jobs.
Sam Altman’s position on that matter has been rather fluid—as ChatGPT burst onto the scene two years ago, he warned of mass disruptive job loss, calling for a universal basic income program; more recently he has been articulating the more palatable position that actually the changes will be less destructive, and might not be all that noticeable really. Some jobs will come and others will go, and the future will look like the good parts of the present just with a new layer of super-productivity grafted on top. This is, essentially, Altman’s “gentle singularity.” (The singularity is a concept derived from mathematics and embraced by science fiction authors and tech industry folks, and is now generally understood to describe the moment when technological progress becomes so rapid that it becomes uncontrollable by humans.)
The new positioning offers a raft of advantages—it assures early OpenAI investors, clients, and backers that the major technological changes they bought into are here, even if they can’t quite feel them personally or detect them in their companies’ bottom lines yet. It suggests that those changes still stand to be profound, but no one has to really do much to prepare for them, aside from purchasing AI products.
Now, what Sam Altman actually believes is immaterial—or rather, it’s 100% material, in that it is entirely aligned at any given time with what will maximize investor buy-in. The “gentle” framing seems designed to promote OpenAI as the friendlier neighborhood job automator, signaling to corporations interested in AI enterprise products that OpenAI will treat such matters more diplomatically than the other AI companies in town. “A Gentle Singularity” is supposed to be something all reasonable future-forward people can get behind, our machines of loving grace harmoniously in tow.
It’s not. It may not be Altman’s worst fabrication yet, but it may be the most insulting. It’s not just that Sam Altman’s touched, humble, reluctant-prophet schtick is wearing impossibly thin — it’s the audacity of declaring a “gentle” singularity in service of soliciting funds from a nation that executes dissident journalists, as his company grafts itself onto Donald Trump’s Department of Defense on the brink of all-out war. Altman wants us to look out the window and be assured that this is the gentle singularity?
I’ll elaborate. Let’s take quick stock of what else has happened in just the week since Altman published his article, and bear in mind that this is all happening in a world in which the gentle AI revolution is underway right now:
-OpenAI has sought out funding from Saudi Arabia and the United Arab Emirates, two regimes with some of the worst records on human rights in the world. In particular, the Saudi Arabia Public Investment Fund, from which OpenAI is reportedly soliciting investment, has been linked to human rights violations by orgs like Human Rights Watch. Not only does PIF fund deadly megaprojects like NEOM, which has claimed the lives of 21,000 workers, but it was used to help facilitate the Saudis’ murder of Jamal Khashoggi. Is this the gentle singularity? The same one on the verge of being financed by regimes that bonesaw dissenting journalists and overwork migrants to death in the desert?
-OpenAI’s Chief Product Officer, Kevin Weil, was sworn into the US Army’s newly launched “Detachment 201” which the military describes as “an effort to recruit senior tech executives to serve part-time in the Army Reserve as senior advisors… By bringing private-sector know-how into uniform, Det. 201 is supercharging efforts like the Army Transformation Initiative, which aims to make the force leaner, smarter, and more lethal.” Three days after that announcement, OpenAI was awarded a one-year $200 million contract from Donald Trump’s Department of Defense, to integrate AI products into the US military. I would not have guessed that a gentle singularity would have involved helping render militaries “more lethal” but then again I’m no gentle singularity expert.
-OpenAI is joining a closed door meeting with the DOE, fossil fuel executives, and the Emiratis to accelerate energy production for hyperscaling AI. The meeting, dubbed ENACT, has raised fears that ramped up energy production will primarily be fossil fuel-generated, increasing emissions and contributing to climate change. The gentle singularity, in other words, seems slated to (continue to) be powered by fossil fuels, exacerbating the climate crisis every step of the way.
-Speaking of closed door meetings, OpenAI and the other major AI players have successfully lobbied to keep federal and state regulators away from the technology. As previously reported in these pages, OpenAI is one of the chief parties behind the 10-year ban on state-level AI lawmaking that has been included in the “One Big Beautiful Bill” being debated in the Senate. The provision’s future is not certain, as a number of Republican senators appear to have come out against it, but the fact that it exists at all, has made it this far (the House of Representatives voted to include it in the budget reconciliation bill), and may yet become law is plenty disturbing.
There’s a part in Altman’s manifesto in which he expounds at length on the importance of “harnessing the collective will and wisdom of people” and allowing society to decide how to use AI. Yet OpenAI is pushing Republicans and the Trump administration to pass a law that bans states from doing exactly that.2 The gentle singularity is not intended to be subject to democratic input, it seems.
Finally, at the very moment that Altman was writing his post, it’s likely that the most notable public-facing use of AI came in the form of the torrents of slop used to make protests against state oppression look like war zones, and protestors like thuggish criminals.
[Image: AI-generated depictions of the Los Angeles protests as a war zone]

Is this the gentle singularity?

Or perhaps this is:

[Image: AI-generated image of retaliatory strikes, shared by Iranian state media]

Or this?

[Image: another AI-generated image of carnage, shared by Iranian state media]
Those last two images have been shared directly by Iran or its major national media, and they are AI-generated fabrications of course. They depict fantasias of retaliatory violence after Israel launched its military assault there. Whether it’s American rightwing X accounts looking to help stir the pot and to justify a militarized response to protests in LA, or images of machine-generated carnage in the Middle East, AI is being used to ramp up tensions, glorify violence, and to substantiate hate and prejudice. The opposite, one might argue, of a gentle singularity.
I don’t point all this out merely to be snide, or even just to mock the premise of Altman’s blog post, though it very much does deserve to be mocked. Now more than ever, we need to think about “AI” not merely as consumer technology, but as an idea and a logic that is shaping political economy around the globe. And there’s nothing gentle about it.
The major players in AI (OpenAI, Meta, Google, Microsoft) are above all bent on concentrating power and capital—again, just take a scroll through the above list of OpenAI’s moves in just the last week or two—as rapidly as they can, and by brute force if necessary. They are doing so by partnering with governments embracing authoritarianism and crushing dissent, signing contracts with a military preparing for, or at the very least abetting, war, and teaming up with fossil fuel companies in the time of climate crisis. And AI-generated art is a pillar, as Gareth Watkins put it, of the modern aesthetics of fascism. It can be used to bend depictions of reality to whatever whims one desires; warped as the product may be, it’s fundamentally truth-proof.
We cannot separate the AI products—text and image generators capable of producing cheap and voluminous content—or the companies building them, from these contexts. Or from the fact that they are at root automation products that are not yet close to being profitable, and thus demand new mass markets and exemption from regulation. Thus OpenAI heralding the wisdom of crowds to decide how to use AI out of one corner of its mouth while lobbying to shut down any lawmaking around AI period from the other. Thus OpenAI’s enthusiastic partnering up with an administration that uses its image-generating tools to mock and degrade the powerless.
Altman insists that the AI revolution is already here and underway, that it may not feel all that different to you yet, but that we are past the “event horizon.” As such, I feel obligated to ask, one last time: Is this the gentle singularity?
Vernor Vinge and the Technological Singularity
Speaking of singularities, I’m a little surprised that we don’t hear more about the late mathematician and science fiction author Vernor Vinge’s 1993 paper, “The Coming Technological Singularity: How to Survive in the Post-Human Era.” The paper predicts “superhuman” intelligence within 35 years, or by 2028, which is pretty firmly in line with what Altman and co are talking about these days. You’d think it would at the very least make for an X post with some decent viral potential among the blue check set. I read a couple of Vinge’s novels back in the day, including Rainbows End, a depiction of the coming singularity set on a college campus in 2025.
Thanks as always for reading all, and more very soon. Hammers up.
1. My favorite line in that post follows directly after: “A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future,” Altman writes.
2. That part: “…focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country. Society is resilient, creative, and adapts quickly. If we can harness the collective will and wisdom of people, then although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly and be able to use this technology to get maximum upside and minimal downside. Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.”