Why is Sam Altman so obsessed with 'Her'? An investigation
It's a powerful vision of AI as an engine of entitlement, for one thing
Hello, and welcome back to Blood in the Machine: The Newsletter. (As opposed to, say, Blood in the Machine: The Book.) It’s a one-man publication that covers big tech, labor, and AI. It’s free, though if you’d like to back my independent tech journalism and criticism, I’d be thrilled if you’d consider pledging support. But enough about all that; grab your hammers, onwards, and thanks for reading.
Sam Altman, the CEO of OpenAI, really, really loves Her. He has declared it his favorite movie, and now we know the extreme lengths he has gone to in trying to tie the 2013 film, in which Scarlett Johansson voices the AI “operating system” Samantha, to his company’s products. It’s worth pausing, I think, in light of the cascading self-owns Altman has initiated for his company, to consider why he’s so taken with it. There are plenty of other famous AI voices in sci-fi media — JARVIS in Iron Man, KITT in Knight Rider, Wheatley in Portal, etc. — so why fixate on Samantha from Her?
The obvious answer is “a Scarlett Johansson AI that is available 24/7 to tell lonely men they’re special and have sex with them is the most innately appealing to his customer base,” but I think there’s a bit more to it than that. See, Her also happens to offer the clearest vision of AI as an engine of entitlement, in which a computer delivers the user all that he desires, emotionally, secretarially, and sexually — and in which the tech is normalized quickly, fully, and painlessly. Oh, and it contains a benign vision of computers achieving artificial general intelligence, or AGI, to boot.
But let’s back up: The most recent scandal began when, last year, Altman asked Johansson to provide a voice for ChatGPT. She declined, so Altman went ahead and used a voice so similar to Johansson’s that her friends and family couldn’t tell the difference. Perhaps sensing legal trouble brewing, two days before he demoed the tech last week, OpenAI reached out to Johansson again to see if she’d reconsider. They never heard back, so they plowed forward with the simulation anyway. All, presumably, so he could tweet “her” during the demo and everyone would be bludgeoned with the connection between film and product. Johansson released a statement revealing she’d never consented, and lawyered up.
The story about this sequence of events, among the wildest developments in the history of a company that seems to court them, or blunder into them, regularly, absolutely blew up on Monday, and I couldn’t look away. There’s standard Silicon Valley hubris in the ‘move fast and break things’ mold, and then there’s whatever this is. Trying to steamroll one of the most famous movie stars alive, one who is notably not afraid to take Disney to court (and win), and then lying to everyone about it (reporters and audience members asked OpenAI about the ScarJo-like voice during the demo, and were assured it was a coincidence): well, that’s another level altogether.
Commentators, fellow actors, and even US senators are already wading into the matter, taking, surprise!, the side of the famous movie star who everyone loves, over the tech company that pilfered her voice and that has also recently shown it is capable of lying about lots of other stuff, too.
What adds another layer here is Altman’s enthusiasm for the film in the first place, which, as many have pointed out, is a far cry from an optimistic vision of the future. Last week, I argued that this was part of the point, and that Altman and his tech CEO cohort have taken to deploying ‘useful dystopias’ to promote their products. After approximately a third of the internet protested and argued that Her was not a dystopia, I rewatched the film, and was, in fact, in the process of writing a post titled “Is Her a Dystopia?” when the ScarJo news broke and dashed my plans.
On the plus side, it meant I’d already been thinking about Altman’s unique affinity for the film. And I’d noticed that at a recent conference, per the SF Standard, after calling it his favorite, Altman said: “The things Her got right — like the whole interaction models of how people use AI — that was incredibly prophetic.”
Now, what’s interesting to me is that in Her, the interaction model in question largely seems to be ‘a lonely, struggling person uses AI to make themselves feel better, to do personal secretary work, and to have sex with’. The AI is there to serve them, to feed their egos, to distract them from their insecurities, to stimulate them while they masturbate. Samantha does help Theo accomplish a career goal later in the film, but for the most part, it’s about placating him however he needs to be gratified in the moment.
There’s a notable scene early on, when protagonist Theodore Twombly, presumably Altman’s stand-in for the ‘AI user’, first boots up the AI: it asks him for some biographical information, then cuts him off when he tries to offer a nuanced answer — it’s more interested in measuring his vital signs and asking what’s wrong in a sexily attentive way to soothe him than in constructing a meaningful portrait of him or his personality. Samantha compliments him, gets jealous when he’s about to see his ex, which makes him feel desired, and generates an existential crisis of her own to make Theo’s feel more grounded and relatable. Then the AI not only has voice sex with him, but arranges for a human surrogate to come have sex with him, too. It’s AI as hyper-driven desire fulfillment.
But if you’re looking, you can see how depressing this interaction model proves to be as a portent for human society; as the OSes like Samantha proliferate, more and more people are seen locked into their digital flattery and pleasure spheres. When Theo gives the divorce papers to his ex, and tells her he’s dating an OS, she scathingly points out how fitting that is — he could never deal with the challenges of being with someone real, someone who was not available and chipper (and serving him) all the time, so he turned to an AI that has no such qualms.[1]
The biggest pushback I got in the response to that first tweet was that Samantha helped Theodore grow — and I guess there’s some degree of Rorschach blotting here, where if you’re less skeptical about the corporate bearings of AI products, you might see Samantha as giving Theo what he needs in a hard time.[2] But is this a healthy way to process grief? With a product purchased at some sort of Apple Store that’s designed to flatter and appease him? With something that is literally an object, that he owns? I think his ex is completely right!
For this reason, Her has caught some flak for being misogynist: Samantha is an object Theodore has dominion over, and at one point near the end of the film, he yells “you’re mine” at her when he finds out she is serving thousands of other users, because she is, you know, a software program. In that light, some have pointed out that it’s a little unsettling that Altman loves this depiction of an AI-user interaction, this vision of an AI that presents as your equal, that responds to you as if human, but in truth is entirely subservient to your whims.
In a way, it also offers a portal into the ethos that governs OpenAI — an organization whose stated goal is to develop a safe AI to benefit all of humanity, but in reality has time and again been shown to more accurately be a vehicle for Altman’s ego, a company over which he wields total control. And he believes that his nebulous goal of building AGI entitles him to do just about anything to achieve it; the cohort that’s trying to produce an AGI for commercial purposes is like your average Silicon Valley c-suite on steroids.
As such, OpenAI itself is an engine that runs on entitlement — entitlement to nonconsensually harvest and reappropriate the works of millions of writers, coders, artists, designers, illustrators, bloggers, and authors, entitlement to use them to build a for-profit product, entitlement to run roughshod over anyone at the company who worried it had betrayed its mission of responsibly developing its products. Entitlement to copy the voice of one of the world’s highest-grossing movie stars after she said no.
I think this explains a lot, really: so much of the promise of generative AI as it is currently constituted, is driven by rote entitlement. I want something and I want it produced, for me, personally, with the least amount of friction possible; I want to see words arranged on the screen without my having to take the time to write them, I want to see images assembled before me without learning how to draw them. I want to solve the world’s biggest problems, without bothering with politics — I have the data, I have trained the model, I should be able to! We have advanced technology to new heights, we are entitled to its fruits, regardless of the blowback or the laws or the people whose jobs we might threaten.
Now, I do see why Altman likes it so much; besides its treatment of AI as personified emotional pleasure dome, two other things happen that must appeal to the OpenAI CEO: 1. Human-AI relationships are socially normalized almost immediately (this is the most unrealistic thing in the movie, besides its vision of a near-future Los Angeles that has good public transit and walkable neighborhoods; in a matter of months everyone seems to find it normal that people are ‘dating’ voices in the earbuds they bought from Best Buy), and 2. the AIs meet a digitally resurrected version of Alan Watts, band together, and quietly transcend, presumably achieving some version of what Altman imagines to be AGI. He professes to worry that AI will destroy humanity, and has a survival bunker and guns to prove it, so this science fictional depiction of AGIification must be more soothing than the other one.
But the weirdest thing to me is that it’s only after the AIs are gone that the characters can be said to undergo any sort of personal growth; they spend some time looking at the sunset, feel a human connection, and Theo writes that long overdue handwritten apology letter to his ex. It’s hard to see how the AI wasn’t merely holding them back from all this, and why Altman would find this outcome inspiring in the context of running a company that is bent on inundating the world with AI. Maybe he just missed the subtext? It’s become something of a running joke that Altman is bad at understanding movies: he thought Oppenheimer should have been made in a way that inspired kids to become physicists, and that The Social Network was a great positive message for startup founders.
Finally, Altman’s admiration is also a bit puzzling in that the AIs don’t ever really do anything amazing for society, even while they’re here. They distract lonely people for a while, fuck them, and vanish onto another plane of existence. That’s shaping up to be an increasingly plausible trajectory for OpenAI, too.
[1] Hat tip to Joanne McNeil for flagging this video.
[2] But I don’t even think he really grows! I was more willing to cede that point before I watched it again; in the end, he finally writes a letter to his ex-wife, after only being capable of writing letters for other people during his day job, and it’s not even that great a letter! He barely apologizes, and doesn’t own up to the key shortcomings that apparently drove his ex and him apart; he just issues a sappy catchall ‘sorry.’