This is the book I have been waiting for! I am so excited to learn of Karen Hao and all of the work she has done. This is exactly it. This is the type of strategic analysis and perspective we need during these early empire-grabbing days. Thank you Blood in the Machine for introducing me to her work and the work you do.
I love that you’re launching a podcast!
I question transhumanism as well
https://open.substack.com/pub/noelkeith/p/tranquil-piece-of-mind-vol-2-no-3?r=4c7psw&utm_medium=ios
"Because I do think that there's also this narrative around Open AI that like, well, it was all these well-meaning guys who were just like, you know, maybe they became corrupted by power.": Years of observation and experience have made me deeply doubtful that this ever happens. Rarely, if ever, are well-meaning people corrupted by power. Rather, power is sought by corrupt people, people who imagine they're entitled to power. Investigate their histories, and you'll usually find warning signs aplenty.
"And he's also very very good at understanding people": For "understanding", substitute "manipulating". That squares with my impression of him. He certainly isn't a scientist or engineer or skilled in any other way that I respect. He's merely a schemer with little or no conscience. It's a great folly of this society that it allows such people to become powerful.
"I believe that we cannot have large scale AI models' growth at all costs.": Keep in mind, the gargantuan scale is a manifestation of the fact that Altman et al. don't know what they're doing. They are not, in fact, anywhere near replicating human intelligence, which is why it takes them zillions of training cases and obscene amounts of energy to make something that doesn't work as well as an ordinary human brain with far less input data and lower power consumption than a washing machine. Note that many people with credible claims to know what they're talking about reject Altman et al.'s assertions that "AGI" will somehow emerge from bigger LLMs; see, for example:
https://web.archive.org/web/20250305233251/https://www.nature.com/articles/d41586-025-00649-4
(The original is paywalled, so the link is from the Wayback Machine.)
From the piece:
"Artificial intelligence (AI) systems with human-level reasoning are unlikely to be achieved through the approach and technology that have dominated the current boom in AI, according to a survey of hundreds of people working in the field. More than three-quarters of respondents said that enlarging current AI systems - an approach that has been hugely successful in enhancing their performance over the past few years - is unlikely to lead to what is known as artificial general intelligence (AGI). An even higher proportion said that neural networks, the fundamental technology behind generative AI, alone probably cannot match or surpass human intelligence."
Also worth noting:
"And the very pursuit of these capabilities also provokes scepticism: less than one-quarter of respondents said that achieving AGI should be the core mission of the AI research community."
Great interview! It’s fascinating to hear how calculating a guy like Altman was about getting to where they are now (and how easy it seems to be to manipulate someone like Musk by repeating his own words back to him). I’ll second your point that Sam Altman’s jab at Hao’s book is the best endorsement she could have received. 😂 It’s on my wish list now for sure!
I know you already podcast in other places, but I love the idea of you having the occasional interview/podcast here and taking a more Luddite-specific angle. The topic of how we can resist these things seems to rarely come up in an interview like this, which I feel like contributes to that doomer mentality so many people have. You’ll iron out the technical stuff, I’m sure, so I look forward to the next one! ✊🔨🔥