8 Comments
Celeste Sazani

This is the book I have been waiting for! I am so excited to learn of Karen Hao and all of the work she has done. This is exactly it. This is the type of strategic analysis and perspective we need during these early empire-grabbing days. Thank you Blood in the Machine for introducing me to her work and the work you do.

Zack Arnold

I love that you’re launching a podcast!

Ralph Haygood

"Because I do think that there's also this narrative around OpenAI that like, well, it was all these well-meaning guys who were just like, you know, maybe they became corrupted by power.": Years of observation and experience have made me deeply doubtful that this ever happens. Rarely, if ever, are well-meaning people corrupted by power. Rather, power is sought by corrupt people, people who imagine they're entitled to power. Investigate their histories, and you'll usually find warning signs aplenty.

"And he's also very very good at understanding people": For "understanding", substitute "manipulating". That squares with my impression of him. He certainly isn't a scientist or engineer or skilled in any other way that I respect. He's merely a schemer with little or no conscience. It's a great folly of this society that it allows such people to become powerful.

"I believe that we cannot have large scale AI models' growth at all costs.": Keep in mind, the gargantuan scale is a manifestation of the fact that Altman et al. don't know what they're doing. They are not, in fact, anywhere near replicating human intelligence, which is why it takes them zillions of training cases and obscene amounts of energy to make something that doesn't work as well as an ordinary human brain with far less input data and lower power consumption than a washing machine. Note that many people with credible claims to know what they're talking about reject Altman et al.'s assertions that "AGI" will somehow emerge from bigger LLMs; see, for example:

https://web.archive.org/web/20250305233251/https://www.nature.com/articles/d41586-025-00649-4

(The original is paywalled, so the link is from the Wayback Machine.)

From the piece:

"Artificial intelligence (AI) systems with human-level reasoning are unlikely to be achieved through the approach and technology that have dominated the current boom in AI, according to a survey of hundreds of people working in the field. More than three-quarters of respondents said that enlarging current AI systems - an approach that has been hugely successful in enhancing their performance over the past few years - is unlikely to lead to what is known as artificial general intelligence (AGI). An even higher proportion said that neural networks, the fundamental technology behind generative AI, alone probably cannot match or surpass human intelligence."

Also worth noting:

"And the very pursuit of these capabilities also provokes scepticism: less than one-quarter of respondents said that achieving AGI should be the core mission of the AI research community."

Simon Peng

Great interview! It’s fascinating to hear how calculating a guy like Altman is about getting to where he is now (and how easy it apparently is to manipulate someone like Musk by repeating his own words back to him). I’ll second your point that Sam Altman’s jab at Hao’s book is the best endorsement she could have received. 😂 It’s on my wish list now for sure!

I know you already podcast in other places, but I love the idea of you having the occasional interview/podcast here and taking a more Luddite-specific angle. The topic of how we can resist these things seems to rarely come up in an interview like this, which I feel like contributes to that doomer mentality so many people have. You’ll iron out the technical stuff, I’m sure, so I look forward to the next one! ✊🔨🔥

Titus Levi

Notably... no mention of China.

My point is not about AI competition. It's about ignoring a substantial chunk of what is happening in the world's largest national economy (in PPP) and a highly influential society in terms of tech and geopolitics. Sure, you can't cover *everything*, but that's an awfully big elephant in the room to ignore.

Leon S

Loved this interview, Brian and Karen (read it; sorry, didn't watch). This connection to empire building is so spot on. Will definitely be reading the book. You both rock!

AMHz

Framing the worldwide industrial and technological phenomenon that is the current state of AI and its players as colonialism is cute, but not relevant. AI development is not a Silicon Valley prerogative. It's global, increasingly global, and - of course - is being developed at least 50% within China, with all the ramifications, good or bad, that this entails. You can't squeeze the AI story into this simplistic 'empire' framework; in doing so you have conveniently left behind the real civilisational shift that's going on: the absolute, guaranteed speed and scale of exponentially increasing AI power and capability.

As we move forward, this shift will leave the movers and shakers and personalities of Silicon Valley looking like mere doormen in its wake, as it expands well beyond physical, political and social borders. The tools are ALREADY here to perform the 'social good' miracles that the author envisions should be a focus.

You are missing the point: the effects of these tools - if that's what we'll call them - will be so earth-shattering in every dimension that we'll be far past wondering about pithy, outdated lines of 'empire vs.' stories and concerns; none of this argument's talking points will matter, nor make any sense. It does not matter who begets this technology, poor people or rich people; the end result will be the same: the most radical upending of the role of humans on planet Earth. We are just assistants in the maternity ward at this point, Altman and all those children included, along with the quibbling liberatzis. None of this navel-gazing is going to matter. We're WAY beyond that. Another overlord is arriving.