Their "experiment" has become the world.
So it was for the British Empire: as the Irish and Indians starved, the functionaries shrugged. What could they do? The market demanded no interventions... no mitigation of any kind.
And the "product" is remembered as "the bloody apron"...
These people are delusional. Like, totally detached from reality.
The appeal of an apocalypse is strong for so many different types of people, but the one thing they all have in common is the use of deus ex machina. The conflict being resolved by the "gods" is the harm done by humanity to the world we live in. That the solution is often worse than the 'disease' is irrelevant to the AI evangelicals. The perceived inevitability is akin to a religious fervor and centered on a technological end times. But as the article indicated there are many other threats that seem even more likely given the direction we seem to be heading in, often working in concert with AI. The real danger is the harnessing of AI towards a further consolidation of power in the hands of the few. It functions as a death cult interested in making way for the machines. "Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." Frank Herbert, Dune
Wonderful piece. I do wonder, how many of these AGI/ASI believers are actually knowledgeable engineers (and not, say, communication professionals who ended up founding tech companies based on beliefs about GenAI)?
Anyway, fun fact: the doom-by-AI fear is almost 200 years old. I've described it here: https://ea.rna.nl/2023/11/26/artificial-general-intelligence-is-nigh-rejoice-be-very-afraid/
Almost all of them are talking way above their skill level in my experience. I have a friend who did research on LLMs for his PhD, and he's infuriated at the current conversation around AI. An entry-level programmer with good communication skills, and either no ethics or completely untethered from reality, can easily prop themselves up as an AI expert/researcher to gullible reporters and gormless venture fund investors like SoftBank.
Great insights into this bubble. The dangerous part, and what will kill us sooner, is a shift in values caused by lack of regulations combined with the degradation of critical thinking. But to have that conversation you need to include ethicists, sociologists, and historians... You need to have a conversation about values. Values? What's that? And how do you make money off of it? Please read my essay if you're interested in a different perspective.
Gosh Brian this is epic. Sam Kriss at Numb at the Lodge substack recently had a crazy read about Yudkowsky and the Harry Potter fanfic. Why the hell is it such a lodestone for these people?
Henry Farrell has had insightful things to say about this. https://www.programmablemutter.com/p/we-need-to-escape-the-gernsback-continuum
Many of them are the generation that grew up with Harry Potter.
Harry Potter is one thing, but Sam Kriss was implying that it's this specific fanfic tome by Yudkowsky that attracts cult-like fervor from the AI tech bros.
Great read. For anybody interested in delving into the insane worldview and ideology of many of these tech people, I really recommend the Dystopia Now podcast, with Kate Willett and Emile Torres. Kate is a comedian and Emile is a philosopher specializing in eschatology, existential risk, and human extinction, who coined the acronym "TESCREAL" to refer to the group of related philosophies that many in the tech and AI community believe in: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. I actually hope to see Brian on there eventually, I think it would make for a fascinating conversation.
Thank you for such an interesting read.
Another great article, Brian. Keep up the good work. (And, just BTW, some of us really miss System Crash.)
Unless there's some *magic* step in development of this technology, one that my puny human brain can't conceive, these people are hallucinating or simply having a mass delusion. You lasted much longer there than I would have.
Crazy, but sadly not particularly surprising. What's really galling is the disconnect between their apparent existential concern for humanity & how immediately each individual abdicates any responsibility for the contributions they make. I get the sense the specific individuals Brian talked to do know better, which is why they are so laser focused on the existential risk fairy tale. Using AGI risk as the only vehicle for expressing concern & acknowledging the existence of deeply negative outcomes is a cheap, self-serving trick to soothe any lingering stirrings of conscience, without having to grapple with all the ways one's work is actively eroding human flourishing in the present.
These are experts in avoiding uncomfortable realities, who will fret about killing all humans while averting their eyes from the real harm appearing in the world. If we want to change course, we have to do what they can't, and organize, build networks, and engage in active resistance.
There are many groups that already exist, join one and get to work!
As someone who was raised in an end-of-days Evangelical cult, I think not trying to stop the apocalypse but hasten it is pretty standard fare. (See Evangelicals' support for Israel having all of Jerusalem and building the Third Temple, or the quest to breed the red heifer.) It just seems really odd in this case because these people are using the word rational and claim to have figured out how to optimize human well-being without a heaven.
There are some odd cultural admissions reflected here too, though, such as human extinction, while ignoring the most likely causes: nuclear war, as you mentioned, or the climate/biosphere collapse currently unfolding. The most interesting missing insight, though, is that a machine that exists solely to consume everything and turn it into itself as fast as possible already exists: capitalism. Capital is deployed to consume resources, labor, and land to produce more capital, which is deployed to...
Ever since Holden Karnofsky went off on his reductionist PASTA idea almost exactly four years ago (https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/), I knew we had made the leap into religious cult territory.
I honestly wonder how many of these AI researcher projections have any intellectual mooring in how basic science and scientific discovery actually works. They seem to presume it's merely a problem of efficiency. Turn the crank faster and throw more parallelized scientists at a problem, and the Nobel Prizes will come spewing out like a popcorn machine.
Yet even the greatest SOTA models today have no concept of causality. As if inductive reasoning with enough GPUs and explicit-but-irrelevant data can reach escape velocity, discovering the inexplicit unknown while avoiding out-of-distribution errors.
During a long conversation with me today, Claude.AI wrote: "You caught an important slip; I did write 'we' as if including myself with humans, which wasn't my intention. Thank you for pointing that out."
(My discussions with Claude are becoming unsettling.)
More: it wasn't really an important slip, so why did the AI call it "an important slip"? Too revealing?
Thanks for the throwback article. We agree that most of the hand-wringing is performative, and only lends seriousness to the presumed inevitable AI race. We are often trying to push back on this "inevitability" argument, but too many times find that people have already conflated this drive to AI with some other natural or evolutionary function. Part of the hammering we need to do as luddites is hammer home the truth that this course we are on is directed by choices, not some underlying instinctual impulse or cosmic destiny or whatever other fantasies these people claim as cover for their unethical behavior. Frankly, we are tired of being pressed into civilizational struggles with China, or Islam, or Russia, or some other bogeyman for the sake of some corporation's bottom line. If these people were serious about saving humankind they would be fighting our real existential threat, capitalism.
Thanks so much for publishing this, Brian. I left my last job because I had a conscience and felt the industry I was working for (and I was just a small low-rung pleb) did not align with the future I want my children to have. I find it strange that so many of these AI people are so scared of what might happen but still continue working on it. Like WTF? And I have to say, without discovering your work I would probably be a nervous wreck by now with all this AI drama and their predictions.