De-democratizing AI
How Silicon Valley and the GOP teamed up for a campaign to ban state AI laws
Greetings all,
Today, we dive deep into the GOP’s radical campaign to ban US states from passing any laws that govern AI. Even in a political moment as fraught as ours, this one stands out. We’ll get into:
How a proposal to ban AI lawmaking wound up in the budget reconciliation bill the same week that AI execs took a trip with Trump to Saudi Arabia
How the GOP plans to try to sell its state AI ban, according to the GOP
A look at what the implications are for AI in general
An interview at the end with California Assemblyman Isaac Bryan, author and co-sponsor of some of the AI bills Silicon Valley wants dead
I’m not going to lie, this was a dark week, and a tough one to report through—I meant to publish this on Tuesday, then Wednesday, and the new twists and revelations in how this campaign came together just kept piling up. It took many, many hours to research, investigate, and write this story. To that end, Blood in the Machine is 100% reader supported, and made possible by paid subscribers. If you find value in this work, and if you can, your support would be immensely appreciated.
On Sunday, May 11th, Republicans added a sweeping amendment to the 2025 budget reconciliation bill that would ban all US states from enacting any laws regulating AI for ten years. Reconciliation is a common way for a party to try to push through controversial or unpopular legislation that might not survive a regular Senate vote (budgets can’t be filibustered, and need just a simple majority to pass). Even so, this amendment, put forward by the Kentucky congressman and energy and commerce committee chair Brett Guthrie, managed to shock.
The amendment drew admonitory headlines, consternation among Democrats, and anger and disbelief on social media. The outcry is well deserved. The bill’s language is not ambiguous. It says that “no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”
Take a minute to absorb what’s being proposed here: No state may enforce any law or regulation of AI. A total ban of state lawmaking on what is routinely touted as the most transformative commercial technology of our generation. And because we can safely assume there will be no serious efforts to regulate AI by the GOP-controlled Congress or by a Trump administration intent on helping the US AI industry dominate, this is, in effect, an effort to ban any lawmaking around AI whatsoever, for the next two to four years, while Republicans have a stranglehold on power.
All of this is, needless to say, profoundly undemocratic. Both in approach—the act of sliding a bill with such severe repercussions into the reconciliation process, where it won’t receive a proper public hearing—and intent: to prevent the public from having any say in how pervasive Silicon Valley technologies are impacting their lives. Worse still, Guthrie’s amendment is the culmination of a multi-pronged lobbying effort from the major AI companies. That effort’s aim, as reported by Politico, was to shut down state laws that might constrain AI firms’ and investors’ ability to profit off of AI products—especially California’s.
AI industry pitchmen are fond of saying that AI is a powerful tool for “democratization.” It has instead become a force for the opposite.
On Tuesday, at about the same time that the proposed language seeking to ban state AI regulation was officially being introduced in Congress, a bevy of tech billionaires including Sam Altman, Elon Musk, Nvidia CEO Jensen Huang and Amazon CEO Andy Jassy were at lunch with President Trump in Saudi Arabia. There, the tech titans cut billion dollar deals with Gulf State royalty and the Trump Administration. Trump announced a $142 billion defense and AI services sale to Saudi Arabia. DataVolt, a Saudi Arabian company, will spend $20 billion on data centers in the US. Amazon is investing $5 billion in Humain, Mohammed bin Salman’s AI startup. Nvidia is selling billions of dollars worth of chips to Humain. Meanwhile, OpenAI is mulling a Stargate project in the United Arab Emirates; MGX, the Emirati investment firm, is already a backer of its fledgling Texas data megacenter.
And on and on it goes. I hope this fact escapes no one: While the executives of AI firms are abroad in Saudi Arabia, cutting billion dollar deals to expand their operations with nations boasting some of the worst human rights records in the world, their lobbyists and partners back home are trying to make it impossible to pass any laws governing their AI products at all.
With states’ rights to legislate AI under assault, I reached out to lawmakers to see how the move in DC was reverberating back home.
“The tech industry was incubated, cultivated, and continues to grow and innovate here in California,” says Isaac Bryan, a California assemblyman who has authored a state bill that limits the ways AI can be used for surveillance in the workplace—one of the bills that the GOP amendment would ban. (Bryan also happens to represent my district in the CA assembly.) “California deserves the right, and has the expertise, to lead. We’ve been establishing meaningful guardrails and regulations around these advancements so that we center people as we continue to innovate.”
But now there’s a gulf between who gets a say in AI policy, Bryan says, and who doesn’t. “There's the needs that everyday folks have,” he says, “and there's the needs that our tech billionaire class has—and those are the only ones being addressed.”
Samantha Gordon, a program director at TechEquity, a nonprofit group of tech workers that advocates for housing and labor issues, and that has backed a number of California AI bills, tells me that widening gulf is by design. “This amendment is the direct result of a campaign by Google, Meta, OpenAI, and venture capitalists like Andreessen Horowitz—and their dozens of trade associations—to bulldoze through the public's safety in order to continue to make risky bets on a precarious and potentially hazardous technology,” Gordon says.
Public polling shows bipartisan support for more regulation of AI, after all, not less. And yet, as Gordon puts it, “if this amendment passes, not a single state in America could protect people from AI systems that unfairly deny their medical care, keep their nursing homes understaffed, revoke their unemployment benefits, or inflate their rent.” It’s part of what she says is a “cynical campaign” the tech industry is waging “to override the will of the public.”
Now, there’s a good possibility that this aggressive language won’t survive the Byrd Rule—a law that restricts what can be included in the reconciliation process to measures that affect spending levels and revenue—but it might. And GOP leadership, which now counts Silicon Valley insiders and AI bulls like Musk, Andreessen, and David Sacks among its inner circle, may deem it worth the legal challenges. And even if the language does get stripped, we cannot afford to ignore what it tells us: Top Republicans and top players in the AI industry can now move as a united front. The time of AI industry leaders paying lip service to AI as a technology that “benefits all of humanity,” a line that has been withering on the vine for a while now, is gone. In its place is a cold calculus bent on using the technology and its logic to accumulate as much power as possible.
So let’s run down why it is the AI companies are so intent on stopping these state-level bills, why the GOP is so interested in helping them, and how this changes the very way we should think about AI as a technology.
You may have noticed that the language in the bill goes beyond “AI” and also includes “automated decision systems.” That’s likely because there are two California bills currently under consideration in the state legislature that use the term: AB 1018, the Automated Decisions Safety Act, and SB7, the No Robo Bosses Act, which would seek to prevent employers from relying on “automated decision-making systems, to make hiring, promotion, discipline, or termination decisions without human oversight.”
The GOP’s new amendment would ban both outright, along with the other 30 proposed bills that address AI in California. Three of the proposed bills are backed by the California Federation of Labor Unions, including AB 1018, which aims to eliminate algorithmic discrimination and to ensure companies are transparent about how they use AI in workplaces. It requires workers to be told if AI is used in the hiring process, allows them to opt out of AI systems, and to appeal decisions made by AI. The Labor Fed also backs Bryan’s bill, AB 1221, which seeks to prohibit discriminatory surveillance systems like facial recognition, establish worker data protections, and compel employers to notify workers when they introduce new AI surveillance tools.
It should be getting clearer why Silicon Valley is intent on halting these bills: One of the key markets—if not the key market—for AI is as enterprise and workplace software. A top promise is that companies can automate jobs and labor; restricting surveillance capabilities or carving out worker protections threatens to put a dent in the AI companies’ bottom lines. Furthermore, AI products and automation software offer managers a way to evade accountability—laws that force them to stay accountable defeat the purpose.
OpenAI already won a major victory in beating back state-level policy earlier this year. Assemblywoman Diane Papan had proposed a bill aimed at preventing nonprofits from restructuring as for-profit companies—which OpenAI was in the process of trying to do—but then gutted the language of her own bill and replaced it with essentially an entirely new one. The strange move came after pushback from OpenAI, and just three days after OpenAI closed its $40 billion deal with SoftBank, a large portion of which is contingent on the removal of that nonprofit structure. It’s almost quaint to think back to 2023, when Sam Altman made a performative show of asking Congress to regulate his company—he’s spent the two years since fighting tooth and nail against every meaningful regulation that would affect his business.
The Trump administration, meanwhile, has adopted the industry’s zeal for deregulation; in part, of course, because there’s significant overlap between the industry and the administration. One of Trump’s first actions was to dissolve Biden’s framework for governing AI, and to institute a new set of priorities aimed not at safe, equitable AI but at helping the US AI industry achieve dominance. Vice President, and former venture capitalist, JD Vance used his first speech abroad to call for an end to international AI regulations. Marc Andreessen, who’s advising the administration on tech policy, and who wields nearly as much influence as Musk, has long advocated for less regulation—his fingerprints are all over the Guthrie amendment.
In an effort to better understand the GOP’s aims here, I called up Guthrie’s office, and spoke on background with a rep on the energy and commerce committee. Evidently, their plan is to argue that because the Trump administration is modernizing agencies like the Department of Commerce and the Federal Trade Commission with AI, banning states’ ability to regulate AI is a spending-related matter. If, for instance, California passes a law requiring an AI company to meet transparency requirements, and the company’s services become more expensive as a result, then the federal government will have to spend more on AI services.
This strikes me as an enormous stretch, as such logic could be deployed to ban state lawmaking around just about anything. You could, say, ban states from making laws that seek to regulate the housing market, on the grounds that they might affect the price of maintaining federal buildings, or ban statewide labor laws because they impact the cost of paying federal employees, and so on.
It seems that the talking points around promoting this amendment will roughly be:
-It will encourage innovation and efficiency, preventing AI companies from having to deal with a patchwork of state laws
-States like Colorado and California that have passed or are preparing to pass AI regulations are not truly prepared to do so
-This effort is actually intended to benefit little tech, not big tech, because any regulations would harm little tech more
-A “light touch” is imperative so we can beat China in the AI race
A lot of these ideas can be traced back to Congressional committee hearings held by Guthrie and Ted Cruz in recent months, which were attended by Altman, former Google chief Eric Schmidt, Scale AI CEO Alexandr Wang and others. The notion that the US must “beat” China in the AI race at any cost was a frequent theme, and this was where Schmidt’s now-infamous declaration that AI needs to be given as much energy as possible (and not to worry, AI will solve the climate crisis) was made. It left an impression.
“Eric Schmidt said we need to use energy [to develop AI] because it’s going to produce the solutions to climate change,” Brett Guthrie told the Washington Post, weeks before he introduced his amendment banning state lawmaking on AI.
In the interview, Guthrie argues that “the most existential threat to America” is “losing the battle for AI” to China.1 That, Guthrie says, is why we can’t “go down the path some other continents have,” as Europe has, and adopt even modest regulations on AI or the tech sector.
It’s unclear whether Guthrie and the GOP—or Sam Altman and Eric Schmidt, for that matter—truly believe there’s an AI race with China of existential proportions, or whether it’s simply a useful line to justify calling for limitless investment and placing AI outside the democratic process. Ultimately, it doesn’t matter: even deployed halfheartedly, the line serves GOP and Silicon Valley interests by providing the imperative for unfettered AI development.
What’s clear is that the GOP, AI executives, and Gulf State princes all have a common belief in AI—as a means of accumulating capital, undercutting labor, and concentrating power. And the terms of AI development and deployment are on the cusp of being set entirely by oligarchs, billionaires, and their allies in the ruling party. And those parties are intent on removing any impediments—like the democratic process—from their pursuit of power and profit.
“Politicians are letting billionaires call the shots and all of us will be the ones who pay the price,” as TechEquity’s Samantha Gordon put it. Bryan says it might come to people taking to the streets; there’s so much at stake. “As goes California, as goes the country. And so they're trying to get ahead of us,” he says. “But I don't think they've got the expertise, and they certainly don't have the American people behind them in an effort like this.”
Over in the Senate, Ted Cruz has announced that he will introduce an amendment like Guthrie’s, pushing to make the ban on states setting their own AI rules the law of the land.
This story was edited by Mike Pearl. Eliza McCullough contributed research.
Interview with Assemblyman Isaac Bryan
In reporting the above piece, I spoke at length with California state assemblyman Isaac Bryan, the author of the state bill AB 1221: Workplace surveillance tools, and the co-sponsor of a number of other AI-focused bills. He also happens to represent my district in Los Angeles.
I thought the full conversation worth sharing, so I’m sharing a lightly edited and abridged version of it below.
BLOOD IN THE MACHINE: Thanks for taking the time to talk — so, you’ve sponsored some of the bills that are being considered in the California state legislature, to prevent employers from using AI unethically in the workplace, for one. How are you thinking about these bills now, as the GOP is taking aim at your capacity to even pass such laws?
The reality is we still have a preservation of states' rights and the ability for states to set policy guidelines and regulations, particularly on issues that impact residents of their state disproportionately. The tech industry was incubated, cultivated, and continues to grow and innovate here in California. And I think many of us, all of us, most of us, believe in that innovation, believe in that creativity, believe in that advancement. It's also why California deserves the right, and has the expertise, to lead. We’ve been establishing meaningful guardrails and regulations around these advancements so that we center people as we continue to innovate.
The challenges of this administration in Washington—they often shoot from the hip and misfire as they have rapidly on several different occasions with several different policy fronts and especially on things related to the economy. I can't think of how many executive orders and how many tariff ideas and how many other things have come from this administration only to be kind of rolled back or changed when they ran into the pragmatism of reality.
The budget reconciliation process certainly isn't done yet, and I know the GOP continues to hold an ever-narrowing majority in the House. And so I expect for California's voices to be heard. But I think all states should be deeply concerned about new levels of preemption. That should be a bipartisan kind of conversation about where the federal government can step in, should step in, and where it absolutely should not.
What we're going to do in California is continue to lead in the ways that we have. Balancing the needs of everyday people, the needs of the emerging industries, the desires of those of us in the state house, along with the goals of the governor, and strike those balances where we can. And if the federal government continues to encroach on that, we'll continue to file lawsuits as we have and have done successfully, both in the past Trump administration and currently.
There are two levels of audacity here. First, that they would even attempt such a thing — this is a party that has, in the past, quite loudly expressed their belief in states' rights; as recently as last summer. And now to try to do away with them on such a key issue altogether. But then, secondly, to do this in the budget reconciliation bill, to have such a far-reaching and potentially impactful measure put through in reconciliation. That really, to me, without giving this bill a proper hearing, it really underscores how undemocratic this maneuver is, around such an important topic.
I couldn't agree more. I mean, if there's one thing that the GOP has been consistent about, it has been their hypocrisy. They are for any type of legislative maneuvers, any type of distribution of checks and balances and powers that favor them in any given moment. There's very limited consistency, and we've seen some strict constitutionalists in the GOP absolutely flip the script to justify the actions of the current administration and leaders in Congress. This is no exception.
But, you know, this is a moment for those in the federal government who value the preservation of our civic institutions over the preservation of self and self-interest to rise and stand up. And I think you're seeing that kind of consistency from the left. You're seeing that kind of measured and steady hand from the last two Democratic presidents, and hopefully that kind of longer term view of balancing powers and making sure that the American people are heard, the people who are most impacted by these kinds of changes. And in this particular space, on AI, that is Californians.
Whether the Trump administration likes it or not, I recognize that he took a far-reaching group of billionaire CEOs and particular tech CEOs to Saudi Arabia just the other day to meet with the prince. And I think all of that has important diplomatic motivations, but it's a very strange thing to have people struggling to keep a roof over their head and watching the exorbitant wealth being generated by tech billionaires, and the preservation of that wealth by this administration, supersede the needs of everyday people.
The richest man in the world, a tech billionaire himself, was serving as a surrogate president for the last several months. I think it's a scary time for folks.
I could not ignore that irony either. The same day that the amendment to effectively wipe out AI regulation in the United States was introduced, the CEOs of these companies were in Saudi Arabia inking billion dollar deals.
It's interesting, too. It seems like the only bridges that Trump can build are between tech billionaire CEOs who don't like each other, right? It's not a well-kept secret that Musk and Altman don't like each other. Their views around OpenAI have spilled out into the public, and yet they both seem to find comfort, security, and safety in this current president.
It's a shame for the everyday folks across this country who are struggling right now, deeply afraid of how these kinds of economic decisions will impact or limit their choices as they try to provide food for their kids and keep a roof over their heads and buy new school clothes and backpacks. There's the needs that everyday folks have, and there's the needs that our tech billionaire class has—and those are the only ones being addressed.
I know it's always a tricky line in California, because the tech sector is largely based here. And it's a key constituency. Does it concern you at all that these tech companies, that this is essentially what they have been lobbying for? That Sam Altman and Google and IBM have been pushing for an exemption to state lawmaking, specifically because I think that they worry about having to comply with rules that might be put forward in places like California?
Yeah, it's deeply concerning because I think there's no place better positioned to understand the tremendous positive things, both for society, for social living, but also for wealth generation in an industry that pays taxes to the state. Nobody understands that better than California. But we also deal with the harsh and very real realities that as these kinds of innovations take place in a way that displaces workers, with an intentionality of increasing productivity through the laying off of everyday people trying to earn a living, that there's a balance that's got to be struck there.
And even as we generate new forms of state revenue, and are able to increase the state's wealth through this new industry, we will also have a disproportionate growth in liabilities, as people will need unemployment and health care and other social safety nets because their ability to earn a living has drastically changed during this spike of innovation. So we've got to be mindful of that balance—we also want to make sure that tools don't become predatory, or increase the opportunity for data and information to be leaked.
It's unconscionable to me the way that the current federal administration has treated people's private and sensitive federal data like some sort of plaything for Elon Musk and Big Balls—only because I can't remember the guy's actual name.
On the DOGE team.
Exactly. To me, how much we've allowed for our lives to be captured through these algorithmic systems, and through these innovations, and then to have that data decisively unprotected in this present moment—and so we've got to do almost all of the above in California. And I think when we lead in this sector the kind of decisions we'll make will strike the appropriate balances that allow for others to follow.
And that's the real fear: As goes California, as goes the country. And so they're trying to get ahead of us. But I don't think they've got the expertise, and they certainly don't have the American people behind them in an effort like this.
Yeah. These are proposals aimed at limiting some of the harms of AI, making sure that workers don't get steamrolled and can't be surveilled at will, and giving some power back to workers when these tools are used in their workplaces. Now, I think that when a tech company sees that, they see something that’s going to be inconvenient and costly. But can you talk a little bit about why it's important to have things like SB7 or AB 1221? I would describe them not as radical, but as common-sense proposals for the era of AI, a technology that stands to affect more workers than perhaps any other.
They're absolutely common sense proposals, and they're working through the legislative process, taking amendments through the process, learning, bringing stakeholders to the table.
It's interesting too. If I were some of these tech CEOs, I actually wouldn't want this power, the ability to regulate and make thoughtful decisions, in the hands of somebody [Trump] who's decided that thoughtfulness is not a characteristic that they want to exhibit through their leadership.
I mean, he has haphazardly taken a sledgehammer and swung from left to right on a range of issues. And even in these issues, I think he's going to wake up one day and realize that the Teamsters, the only labor union that backed him at the federal level, have a deep interest in not being pushed out by automated trucks, which is a conversation that's been going on here in California, and there's been some back and forth between legislators and the governor on how to land this correctly, even here. But that's another base of a constituency that he will eventually hear from. They probably don't get in through the door as quickly as the billionaire tech CEOs.
So I think this is California's responsibility. These are the kinds of state rights that we can and should have. We are allowed to govern in the interests that protect our economy and the people who rely on us. To have that preempted or suggested to be preempted in this way surely has got to be unconstitutional and we should do what we can to find out.
Let’s talk for a second about why you co-sponsored these bills in the first place, and what stands to be lost.
Our bill, AB 1221, we offered this bill because we don't want to lose the humanity in the workplace.
There are some AI tools now that register your emotional state for the day, your gait, your movement, and the way that you walk. They make corrective decisions, recommendations, disciplinary recommendations, all without human intervention. This has gone far beyond cameras in the store to make sure theft doesn't occur. It is very invasive. It has an increasing ability to show biased attitudes, biased behaviors that can be harmful to both protected classes and workers more broadly.
We just want to make sure that these tools are being used responsibly, that workers know which ones are being used on them and that any kind of disciplinary activity or things that impact somebody's ability to keep their job that those decisions are ultimately made by a person—which doesn't sound unreasonable to me. Like I said, it's about preserving the humanity in the workplace.
You mentioned legal challenges. Hopefully this gets struck down and doesn't survive the Byrd Rule in the Senate. But now we've also seen their colors, their intent, through this maneuver: that they're willing to even attempt this route, something that even just a couple years ago would have been considered audacious and extreme. How do you push back on this?
We stand up. We make sure our elected officials hear from us. We hold rallies, hold town halls, hold people accountable, take to the streets when necessary, and defend states that are willing to step up and buck this administration for the good of the American people.
You know, this is not a partisan issue. It's about putting people first. And that used to be a shared value. But it's not everyday people who met with Saudi oligarchs a week ago, right? You needed a certain net worth to be invited on that trip. And you can't imagine that any kind of conversations that take place in that setting are good for everyday workers trying to keep a roof over their head and food on their tables. But there are more everyday people trying to earn an honest living in this moment than there are tech billionaires, and it's time for those folks to be heard.
Well, I think that's a great place to leave it. Thanks for your time.
Absolutely. Thank you.
OTHER BLOODY STUFF
Columbia Journalism Review asked me to participate in a roundup of how journalists are using AI—Spoiler: I am not. The whole thing is worth a read, with smart takes from fellow travelers like Jason Koebler, Khari Johnson, Susie Cagle, and others.
I was on the Majority Report with Emma Vigeland to talk about my piece on the AI jobs crisis:
Friend of the blog Steve Rhodes sent over this poem, which is worth a read:
Tune into System Crash this week, where we discuss the above, as well as the new Luddite pope, and a lot more.
Tune into a live chat on Friday, May 16th, at 1 PM EST / 10 AM PST with Karen Hao, where we’ll talk about her fantastic new book, Empire of AI.
That’s it for this week. Until next time, thanks for reading — and hammers up. Way up.
Here, for fun, is the rest of his quote from that interview: “If the most existential threat to America is climate change in your view or is it losing the battle for AI? So if you choose climate change we're not going to produce the energy, we're going to lose the battle to AI, and I will tell you China's producing the energy, they're doing one coal plant every two weeks, so not only are we going to lose the battle for AI, you're also going to see the battle on climate change.”