"There was, after all, a time in which, if a founder walked into a VC’s office on Sand Hill Road with a pitch for a big new company, and then the VC asked 'what is your company going to do?' and the founder said 'I can’t answer any questions about my company actually,' it would be clear the person was either delusional, or was trolling, and they would be shown the door. During a particularly absurd bubble around a technology with uniquely science fictional aspirations, however, investors might say, great, here is $2 billion."
Here's why: A recent article in Fast Company revealed that Andreesen doesn't make most of its money from the companies it invests in, it makes its money, like hedge fund, on the exorbitant fees it charges the investors it gets capital from. In other words, their incentive is not to support functional companies with customer-ready products and clear future profits, things a company is supposed to do; it's to attract more and more people willing to invest and pay them fees--and that requires telling these suckers a fanciful story about why customer-ready profits and future profits. So you're absolutely on point, it seems to me, that the story told about AI is far more important than any AI itself. Basically, we're not in a bubble, we're in a fairy tale.
Isn't that how hedge fund companies work, too, albeit also really leveraging the equity to increase the beta? If they win, they get the fees and the cut of the profits. If they lose, they just get their exorbitant fees. Great racket. The spiel is to claim they have positive alphas, and can best manage to stay on the winning side. The industry is littered with eventual busts.
We seem to be back to the Gilded Age, with charlatans and grifters selling snake oil, manipulating stocks, and criminal bankers.
I swear, it must be a requirement for investors in Silly Valley to have their bullshit detectors removed surgically and replaced with LLM trolling engines.
And since I want my share of the billions*
I think I’ll start a company whose mission is to build a super intelligence to hunt down the super intelligences that are dangerous to humanity. How could investors not want to throw money at that?
* No, I really don’t. I spent some years working there, and I’m not at all willing to even work *near* those people again.
How many billions of this 'AI' start-up capital actually leaves the investors' bank accounts? I'm wondering if these absurd valuations are part of the trolling that Brian is talking about here.
Hanlon’s Razor (“never attribute to malice what can be attributed to stupidity”) probably holds. I would estimate so for Aschenbrenner (who clearly is a special kind of simpleton), Sutskever (who seriously argued in an interview with a serious outlet about a year ago that we could get to superintelligence by adding “answer like you are superintelligent” to an LLM prompt), and Murati, who clearly runs a cabal of people who think the foundational randomness of GenAI is a bug, not a feature. Even Slippery Sam knows better (and has publicly said so).
I've read that the same sentiment of Hanlon's Razor has been stated by Napoleon and others. A very (lengthy and with some analysis) famous one is from Dietrich Bonhoeffer (Lutheran pastor, murdered by the Nazis), which I have quoted in full here because it is so good: https://www.linkedin.com/pulse/stupidity-versus-malice-gerben-wierda/
Here is a fragment:
"‘Stupidity is a more dangerous enemy of the good than malice. One may protest against evil; it can be exposed and, if need be, prevented by use of force. Evil always carries within itself the germ of its own subversion in that it leaves behind in human beings at least a sense of unease. Against stupidity we are defenseless. Neither protests nor the use of force accomplish anything here; reasons fall on deaf ears; facts that contradict one’s prejudgment simply need not be believed- in such moments the stupid person even becomes critical – and when facts are irrefutable they are just pushed aside as inconsequential, as incidental. In all this the stupid person, in contrast to the malicious one, is utterly self-satisfied and, being easily irritated, becomes dangerous by going on the attack. For that reason, greater caution is called for than with a malicious one. Never again will we try to persuade the stupid person with reasons, for it is senseless and dangerous."
Our brains are optimised for efficiency (20W) and speed (reactions as fast as 0.1sec) and that this combination can only be realised through 'mental automation' such as convictions, assumptions, beliefs. You cannot start from scratch or go deep all the time, after all.
Convictions steer our observations and reasonings (far) more than the other way around, but we tend to be convinced (😀) of the reverse (which probably is a necessity too, as we need to almost blindly trust our convictions for them to work as speed and efficiency enhancers). Magicians also make use of this a lot. Our convictions even steer our memory (stunning research where they got people convinced that they had done something they had not, after which the test subjects created their own detailed memories of the event).
This holds for all of us. It is how human intelligence operates. We're all 'stupid' in this way. We're all potential flat earthers. Even the most cognitively talented among us. Bonhoeffer again: "There are human beings who are of remarkably agile intellect yet stupid, and others who are intellectually quite dull yet anything but stupid. We discover this to our surprise in particular situations." Aschenbrenner, Sutskever, Hinton are prime examples.
Our convictions come from copying more than from observations and reasoning. Relations with the source and and frequency are more important than observations and reasoning for creating them. Hence, a 'social media influencer' who is convincing because they talk to you as if they are your friend and they talk to you often. Both are direct paths to influencing our mental automation.
We humans are capable of changing our convictions, but not that easily, and that is for good reason.
Another reason our convictions are stable (hard to change, resistant to facts) has to do with being optimised for 'tribal success'. That success requires stability and predictability of the tribe members, or coordination fails or becomes too costly or slow (transaction cost). E.g., your fellow hunter must be relied upon to do their task and not do something different all the time.
Real intelligence requires not just the speed and accuracy with which a human/machine makes estimations, or the level of which it can do discrete (logical) reasoning, but that it can do so without losing the capability of doubting its own convictions, observations, and reasonings. Am I capable of that? I doubt it...
Not a tech person, but I spent 25 years on Capitol Hill and can confirm "Against stupidity we are defenseless." Stupidity is a vehicle by which one can travel a long way, cheered by other idiots. So, there is little motivation to become informed...or smart.
It’s the difference between what Kahneman and Tversky described as “System 1” and “System 2” thinking. System 1 is applicable to the vast majority of situations in which human beings have ever found themselves, and requisite for survival. System 2 is really only applicable to a tiny fraction of cases, though they map onto logic and math in such a way as to make it highly valuable within human created systems (and frameworks like research). In my estimation, the trouble begins with the application of the wrong system to the case at hand — a category error, if you will — which leads to, er, suboptimal outcomes.
There is a relation indeed. Though it is also true that people engaged in trying to do System 2 can only do that (except in pure logic/math) on a foundation of System 1. Even in science there are paradigms (which are comparable 'mental automations'). Also "System 2" is for us humans also a requisite to survival (it too has come out of evolutionary pressures).
Worth the wait--the main story is head-spinning, the bonuses are interesting and I'm glad BITM is going out in other languages...and if you're right, many of the young are catching on. This seems slightly ironic, in that I can remember when people my age had to ask their grandchildren how to use their computers, and the kids were so tech-savvy...now will it be us oldsters that still stare at phones all day? Not me, we have a cellphone but only use it when traveling or to get the phone company to fix our landline.
Thanks for this comment. Our kids - 16 and 18, respectively - and their friends give us great hope. While they “OK Boomer” us with regularity (and often not without cause) they are thoughtful, empathetic people who look upon my knee jerk techno-doomism with bemusement and pity. I ask our oldest who just started his undergrad studies in is computer engineering about his take on AI. He scoffed and told me he refused to use the term as in his estimation AI is still profoundly stupid and likely to remain so indefinitely. He chooses to the old-school term “machine learning” as he feels that’s a much better descriptor of what LLMs actually do. Of course, he declaims all this with the bedrock confidence of a precocious college student. But he does his homework, and I’m inclined to put my faith in him, his brother, and their friends vs the techno-bro knuckleheads and nihilist opportunists Brian spotlights above
"being willing to aggressively announce themselves as scapegoats for the future, or by publishing long blogs retreading tales of the terrible power of AI"
Imo this just makes the people who do this look like colossal assholes. "This thing I'm building is going to destroy life for a lot of people and make those dystopian novels look like paradise, and yet I'm so magnanimous that I'M STILL GOING TO BUILD IT!"
Brian, you are a national treasure. I hope you can get some rest and I look forward—if those are the right words—to your next installment on the absurdity of our moment.
I have to disagree on the "peak of the AI bubble". Considering the posture of VCs, that seem to be making debt to invest and the circular investments.. I have the strong feeling we will have to endure this crap a bit longer. I am soooo much hoping the bubble bursts because it is making the workplace an incredibly weird place, together with the rest, but... cannot complain, I still have a job :)
"A recent college graduate whose only work experience was a stint at an infamously fraudulent crypto exchange and a job at OpenAI from which he was fired for leaking confidential information for personal gain, has maneuvered his way into managing $1.5 billion":
In other words, fools and their money continue to be parted. These particular fools are no less foolish for having lots of money from which to be parted. Never imagine that most of the rich sociopaths who blight the world are evil geniuses - evil, yes; geniuses, no.
"So we're doing an AI company with the best AI people, but we can't answer any questions.":
My, how South Sea bubble-y of Ms. Murati: "a company for carrying out an undertaking of great advantage, but nobody to know what it is"! Plus ça change, plus c'est la même chose.
(Charles Mackay reported that episode as factual in his book "Extraordinary popular delusions and the madness of crowds", whereas Andrew Odlyzko referred to it as apocryphal in his essay "Bubbles and gullibility":
I'm not competent to assess its historicity, but in view of the nonsense we're now witnessing, it's all too plausible.
By the way, I called it in a comment on a post here about a year ago. You (Merchant) wrote, "Murati is gone now too, offering no discernible reason for her departure." I commented, "Maybe she realized she can probably get more money by setting up her own scam than by continuing to be a cog in Altman's scam." It's nothing but scams as far as the eye can see.)
Another great post. I appreciated this bit most:
"There was, after all, a time in which, if a founder walked into a VC’s office on Sand Hill Road with a pitch for a big new company, and then the VC asked 'what is your company going to do?' and the founder said 'I can’t answer any questions about my company actually,' it would be clear the person was either delusional, or was trolling, and they would be shown the door. During a particularly absurd bubble around a technology with uniquely science fictional aspirations, however, investors might say, great, here is $2 billion."
Here's why: A recent article in Fast Company revealed that Andreessen doesn't make most of its money from the companies it invests in; it makes its money, like a hedge fund, on the exorbitant fees it charges the investors it gets capital from. In other words, their incentive is not to support functional companies with customer-ready products and clear future profits, the things a company is supposed to have; it's to attract more and more people willing to invest and pay them fees--and that requires telling these suckers a fanciful story about customer-ready products and future profits. So you're absolutely on point, it seems to me, that the story told about AI is far more important than any AI itself. Basically, we're not in a bubble, we're in a fairy tale.
Isn't that how hedge funds work, too, albeit with heavy leverage on the equity to increase the beta? If they win, they get the fees and a cut of the profits. If they lose, they still get their exorbitant fees. Great racket. The spiel is to claim they have positive alpha and can best manage to stay on the winning side. The industry is littered with eventual busts.
We seem to be back to the Gilded Age, with charlatans and grifters selling snake oil, manipulating stocks, and criminal bankers.
Love the fairy tale analogy. So true!
I swear, it must be a requirement for investors in Silly Valley to have their bullshit detectors removed surgically and replaced with LLM trolling engines.
And since I want my share of the billions*
I think I’ll start a company whose mission is to build a super intelligence to hunt down the super intelligences that are dangerous to humanity. How could investors not want to throw money at that?
* No, I really don’t. I spent some years working there, and I’m not at all willing to even work *near* those people again.
It's pretty clear to me that VC investment is now less rational than sports betting. The ubiquity of sports betting is just a sign o' the times.
How many billions of this 'AI' start-up capital actually leaves the investors' bank accounts? I'm wondering if these absurd valuations are part of the trolling that Brian is talking about here.
Great issue! So glad that you do what you do. Voices of sanity and common sense: you on AI and Molly White on crypto.
Hanlon’s Razor (“never attribute to malice what can be attributed to stupidity”) probably holds. I would estimate so for Aschenbrenner (who clearly is a special kind of simpleton), Sutskever (who seriously argued in an interview with a serious outlet about a year ago that we could get to superintelligence by adding “answer like you are superintelligent” to an LLM prompt), and Murati, who clearly runs a cabal of people who think the foundational randomness of GenAI is a bug, not a feature. Even Slippery Sam knows better (and has publicly said so).
Fuck that; there's shitloads of malice everywhere.
Never knew of Hanlon’s Razor. Thank you!
I've read that the same sentiment as Hanlon's Razor has been expressed by Napoleon and others. A famous one (lengthy, and with some analysis) is from Dietrich Bonhoeffer (Lutheran pastor, murdered by the Nazis), which I have quoted in full here because it is so good: https://www.linkedin.com/pulse/stupidity-versus-malice-gerben-wierda/
Here is a fragment:
"‘Stupidity is a more dangerous enemy of the good than malice. One may protest against evil; it can be exposed and, if need be, prevented by use of force. Evil always carries within itself the germ of its own subversion in that it leaves behind in human beings at least a sense of unease. Against stupidity we are defenseless. Neither protests nor the use of force accomplish anything here; reasons fall on deaf ears; facts that contradict one’s prejudgment simply need not be believed- in such moments the stupid person even becomes critical – and when facts are irrefutable they are just pushed aside as inconsequential, as incidental. In all this the stupid person, in contrast to the malicious one, is utterly self-satisfied and, being easily irritated, becomes dangerous by going on the attack. For that reason, greater caution is called for than with a malicious one. Never again will we try to persuade the stupid person with reasons, for it is senseless and dangerous."
Our brains are optimised for efficiency (20W) and speed (reactions as fast as 0.1 sec), and this combination can only be realised through 'mental automation' such as convictions, assumptions, and beliefs. You cannot start from scratch or go deep all the time, after all.
Convictions steer our observations and reasonings (far) more than the other way around, but we tend to be convinced (😀) of the reverse (which probably is a necessity too, as we need to almost blindly trust our convictions for them to work as speed and efficiency enhancers). Magicians make heavy use of this as well. Our convictions even steer our memory: there is stunning research in which people were convinced that they had done something they had not, after which the test subjects created their own detailed memories of the event.
This holds for all of us. It is how human intelligence operates. We're all 'stupid' in this way. We're all potential flat earthers. Even the most cognitively talented among us. Bonhoeffer again: "There are human beings who are of remarkably agile intellect yet stupid, and others who are intellectually quite dull yet anything but stupid. We discover this to our surprise in particular situations." Aschenbrenner, Sutskever, Hinton are prime examples.
Our convictions come from copying more than from observation and reasoning. Our relationship with the source, and the frequency of exposure, matter more than observation and reasoning in creating them. Hence the 'social media influencer', who is convincing because they talk to you as if they were your friend, and they talk to you often. Both are direct paths to influencing our mental automation.
We humans are capable of changing our convictions, but not that easily, and that is for good reason.
Another reason our convictions are stable (hard to change, resistant to facts) has to do with being optimised for 'tribal success'. That success requires stability and predictability of the tribe members, or coordination fails or becomes too costly or slow (transaction cost). E.g., your fellow hunter must be relied upon to do their task and not do something different all the time.
Real intelligence requires not just the speed and accuracy with which a human/machine makes estimations, or the level at which it can do discrete (logical) reasoning, but that it can do so without losing the capability of doubting its own convictions, observations, and reasonings. Am I capable of that? I doubt it...
Not a tech person, but I spent 25 years on Capitol Hill and can confirm "Against stupidity we are defenseless." Stupidity is a vehicle by which one can travel a long way, cheered by other idiots. So, there is little motivation to become informed...or smart.
It’s the difference between what Kahneman and Tversky described as “System 1” and “System 2” thinking. System 1 is applicable to the vast majority of situations in which human beings have ever found themselves, and is requisite for survival. System 2 is really only applicable to a tiny fraction of cases, though it maps onto logic and math in such a way as to make it highly valuable within human-created systems (and frameworks like research). In my estimation, the trouble begins with the application of the wrong system to the case at hand — a category error, if you will — which leads to, er, suboptimal outcomes.
There is a relation indeed. Though it is also true that people trying to do System 2 can only do that (except in pure logic/math) on a foundation of System 1. Even in science there are paradigms (which are comparable 'mental automations'). And "System 2" is, for us humans, also a requisite for survival (it too has come out of evolutionary pressures).
Thank you! Fascinating.
I'm pretty sure that the first thing the developer of a true AGI model would do is have it break the encryption of bitcoin and steal it all.
Worth the wait--the main story is head-spinning, the bonuses are interesting and I'm glad BITM is going out in other languages...and if you're right, many of the young are catching on. This seems slightly ironic, in that I can remember when people my age had to ask their grandchildren how to use their computers, and the kids were so tech-savvy...now will it be us oldsters that still stare at phones all day? Not me, we have a cellphone but only use it when traveling or to get the phone company to fix our landline.
Thanks for this comment. Our kids - 16 and 18, respectively - and their friends give us great hope. While they “OK Boomer” us with regularity (and often not without cause), they are thoughtful, empathetic people who look upon my knee-jerk techno-doomism with bemusement and pity. I asked our oldest, who just started his undergrad studies in computer engineering, about his take on AI. He scoffed and told me he refuses to use the term, as in his estimation AI is still profoundly stupid and likely to remain so indefinitely. He chooses to use the old-school term “machine learning,” as he feels that’s a much better descriptor of what LLMs actually do. Of course, he declaims all this with the bedrock confidence of a precocious college student. But he does his homework, and I’m inclined to put my faith in him, his brother, and their friends vs the techno-bro knuckleheads and nihilist opportunists Brian spotlights above.
"being willing to aggressively announce themselves as scapegoats for the future, or by publishing long blogs retreading tales of the terrible power of AI"
Imo this just makes the people who do this look like colossal assholes. "This thing I'm building is going to destroy life for a lot of people and make those dystopian novels look like paradise, and yet I'm so magnanimous that I'M STILL GOING TO BUILD IT!"
Excellent Spanish title! I do love to see a bit of good anti-capitalist graffiti too, makes me nostalgic for the 90s
Brian, you are a national treasure. I hope you can get some rest and I look forward—if those are the right words—to your next installment on the absurdity of our moment.
I have to disagree on the "peak of the AI bubble". Considering the posture of VCs, who seem to be taking on debt to invest, and the circular investments... I have the strong feeling we will have to endure this crap a bit longer. I am soooo much hoping the bubble bursts, because it is making the workplace an incredibly weird place, together with the rest, but... cannot complain, I still have a job :)
Now, about that Butlerian Jihad...
Could you please contact Bernie Sanders and teach him about AI? He seems to have bought the Amodei-Musk-Gates hype about AI:
https://www.youtube.com/watch?v=dthbi4lzO58
thanks
"Yes, they deserve to die, and I hope they burn in hell!"
"A recent college graduate whose only work experience was a stint at an infamously fraudulent crypto exchange and a job at OpenAI from which he was fired for leaking confidential information for personal gain, has maneuvered his way into managing $1.5 billion":
In other words, fools and their money continue to be parted. These particular fools are no less foolish for having lots of money from which to be parted. Never imagine that most of the rich sociopaths who blight the world are evil geniuses - evil, yes; geniuses, no.
"So we're doing an AI company with the best AI people, but we can't answer any questions.":
My, how South Sea bubble-y of Ms. Murati: "a company for carrying out an undertaking of great advantage, but nobody to know what it is"! Plus ça change, plus c'est la même chose.
(Charles Mackay reported that episode as factual in his book "Extraordinary Popular Delusions and the Madness of Crowds", whereas Andrew Odlyzko referred to it as apocryphal in his essay "Bubbles and gullibility":
https://www-users.cse.umn.edu/~odlyzko/doc/mania17.pdf
I'm not competent to assess its historicity, but in view of the nonsense we're now witnessing, it's all too plausible.
By the way, I called it in a comment on a post here about a year ago. You (Merchant) wrote, "Murati is gone now too, offering no discernible reason for her departure." I commented, "Maybe she realized she can probably get more money by setting up her own scam than by continuing to be a cog in Altman's scam." It's nothing but scams as far as the eye can see.)
They're dragging everyone else down with them, is the problem.