In California, no AI bill is safe
The veto of a closely watched AI safety bill proves one thing above all: Silicon Valley has far too much power over California.
Greetings greetings, hope everyone’s week has started out well enough, unsafe though you may all be from AI in California or beyond. As always, thank you for reading, subscribing, sharing, and/or supporting this project. Paid subscribers are what make this all possible—if you are able, a paid subscription, at less than the cost of a beer a month, means I can keep writing here—I’m grateful for each of you. Special thanks to everyone who subscribed over the weekend and left some extra-kind words of encouragement. Onwards, and hammers up.
The big AI news over the weekend was California governor Gavin Newsom’s veto of SB 1047, a bill that would have imposed a number of safety regulations on the tech companies that train and run large AI models. The bill, which would have held AI companies liable for the major harms and catastrophic damages they caused and imposed safety testing and other oversight requirements, was, naturally, divisive.
But whatever your thoughts on the bill itself, unless you are, say, a VC hoping to maximize returns on your investment no matter the cost, there’s one takeaway from Newsom’s veto just about everyone can agree on: Silicon Valley has accumulated far too much power over even the supposedly left-leaning California. Yet again, big tech threw its weight against a bill, crushing a law meant to rein it in, and demonstrating that even the governor of California would rather weather a round of bad press than alienate Silicon Valley and its donor class.
Like a lot of observers, I was pretty ambivalent about the bill itself, which was kind of a strange beast. It was co-drafted by folks from the Center for AI Safety, who were concerned with x-risks, or catastrophic and/or existential threats posed by advancing AI systems. As such, it was primarily written to try to prevent future AI-made disasters, and did so by making tech companies liable for such disasters if they were to occur. It only applied to the largest models (those costing $100 million or more to train) and would have established a public oversight board to try to evaluate and prevent AI risks before they occurred.
None of these are necessarily bad things, but for those of us who don’t believe the real threat of AI is that it will build a killer chemical weapon, its priorities seemed skewed, and risked blowing right past the real problems AI is creating, right now, today—the ways AI programs are entrenching systemic biases and racism, degrading and hollowing out labor, and so on.
Still, there was some good stuff in the bill, and the central idea was a tantalizing one: that we might actually hold a tech company accountable for the damage it did. It might have shifted the way we approach technology development in some pretty important and fundamental ways. No wonder the VCs hated it and wanted to burn it alive. Which they did! With help from Google, OpenAI, and Nancy Pelosi, who made the rare move of voicing public opposition to a Democrat-sponsored bill.
I think this blowback surprised a lot of the bill’s early architects and supporters—after all, the CEOs of OpenAI and Google and so forth have spent the last two years going around saying that AI, if done wrong, could be an existential threat, calling for responsible regulation, saying they welcome regulation, that kind of thing.
So CAIS wrote some regulation, and wrote it expressly to address exactly what all these executives and AI prime movers were saying they were worried about—catastrophic future risks. They simply weren’t counting on the vast majority of those executives being full of shit, talking up the immense power of their systems as a publicity strategy. Or, more charitably, on them being infinitely more concerned with improving their market position, attracting more investors, wooing more clients, and scaling as quickly as possible than with anything resembling a sincere concern for “AI safety.”
That’s why you saw VC partners at Andreessen Horowitz out there mocking other tech industry folks in the AI risk community who had pushed for the bill—the folks who actually believe in the risks Sam Altman and co. have articulated to Congress and beyond. You also see this in the exodus of employees from OpenAI, where researchers and staff who are legitimately concerned about x-risk say they’re leaving the company for ethical reasons. Many have noted that they’ve lost confidence that Sam Altman and the C-suite will take AI safety seriously. So you’ve got something of a schism brewing within the tech industry itself, with the x-risk factions, surprise, losing out to big tech.1
They join journalists, truck drivers, gig workers, and so many other groups whose interests have been steamrolled by big tech and its lobbying machine in Sacramento—just this year, Google tanked a bill built to save local journalism. Word was, Newsom threatened a veto at the industry’s behest, and it died on the table. Last year, on behalf of industry, he vetoed a bill that would have required safety drivers in autonomous trucks—mere weeks before a self-driving car pinned and dragged a pedestrian down a San Francisco street. The list goes on.
There are exceptions—Newsom signed a bill to protect actors’ digital likenesses, one to confirm that deepfake child porn is still child porn in the eyes of the law, and one to combat election misinformation—but these are largely either low-hanging fruit that won’t much threaten Silicon Valley’s bottom line, or tweaks to existing laws. When push comes to shove, and a bill meaningfully challenges Silicon Valley’s power, you can now all but count on that bill to die.
This will continue to be the case until a few things happen: a grassroots movement grows enough power to challenge the Valley head-on, the current governing elite leaves office, and/or the California legislature starts mounting challenges to the governor’s vetoes. State lawmakers haven’t voted to overturn a governor’s veto in 43 years, and by declining to do so, they have allowed power to concentrate in the governor’s mansion—and in the tech giants of Silicon Valley.
Elon Musk supported the bill, but almost certainly for purely selfish reasons; he has supported AI regulation for years, but is also far behind in the game with his own AI company and would relish any wrench thrown in the gears of Google or OpenAI.
I suspect the only thing that would fix politics in California (and Texas, for that matter) is for the state to be broken up into six smaller states.
VC Tim Draper took a crack at it several years ago, presumably to shatter Democratic control over all those votes, but I suspect it would have had the opposite effect of what he wanted, instead driving the Senate and the executive branch further to the left.
AI companies: hey so this thing we’re building is so powerful it could one day kill us all
Gov: oh, so we should be regulating this thing then?
AI companies: are you crazy?