6 Comments
Stephen S. Power

Another great post. I appreciated this bit most:

"There was, after all, a time in which, if a founder walked into a VC’s office on Sand Hill Road with a pitch for a big new company, and then the VC asked 'what is your company going to do?' and the founder said 'I can’t answer any questions about my company actually,' it would be clear the person was either delusional, or was trolling, and they would be shown the door. During a particularly absurd bubble around a technology with uniquely science fictional aspirations, however, investors might say, great, here is $2 billion."

Here's why: A recent article in Fast Company revealed that Andreessen Horowitz doesn't make most of its money from the companies it invests in; it makes its money, like a hedge fund, on the exorbitant fees it charges the investors it raises capital from. In other words, their incentive is not to support functional companies with customer-ready products and clear future profits, the things a company is supposed to deliver; it's to attract more and more people willing to invest and pay them fees--and that requires telling these suckers a fanciful story about why customer-ready products and future profits are just around the corner. So you're absolutely on point, it seems to me, that the story told about AI is far more important than any AI itself. Basically, we're not in a bubble, we're in a fairy tale.

Alex Tolley

Isn't that how hedge funds work, too, albeit also heavily leveraging the equity to increase the beta? If they win, they get the fees and a cut of the profits. If they lose, they still get their exorbitant fees. Great racket. The spiel is to claim they generate positive alpha and can manage to stay on the winning side. The industry is littered with eventual busts.

We seem to be back to the Gilded Age, with charlatans and grifters selling snake oil, manipulating stocks, and criminal bankers.

Bruce Cohen

I swear, it must be a requirement for investors in Silly Valley to have their bullshit detectors removed surgically and replaced with LLM trolling engines.

And since I want my share of the billions*

I think I’ll start a company whose mission is to build a super intelligence to hunt down the super intelligences that are dangerous to humanity. How could investors not want to throw money at that?

* No, I really don’t. I spent some years working there, and I’m not at all willing to even work *near* those people again.

Mary Wildfire

Worth the wait--the main story is head-spinning, the bonuses are interesting, and I'm glad BITM is going out in other languages...and if you're right, many of the young are catching on. This seems slightly ironic, in that I can remember when people my age had to ask their grandchildren how to use their computers, and the kids were so tech-savvy...now will it be us oldsters who still stare at phones all day? Not me; we have a cellphone but only use it when traveling or to get the phone company to fix our landline.

Jon Rynn

Could you please contact Bernie Sanders and teach him about AI? He seems to have bought the Amodei-Musk-Gates hype about AI:

https://www.youtube.com/watch?v=dthbi4lzO58

thanks

Gerben Wierda

Hanlon’s Razor (“never attribute to malice what can be attributed to stupidity”) probably holds. I would estimate so for Aschenbrenner (who clearly is a special kind of simpleton), Sutskever (who seriously argued in an interview with a serious outlet about a year ago that we could get to superintelligence by adding “answer like you are superintelligent” to an LLM prompt), and Murati, who clearly runs a cabal of people who think the foundational randomness of GenAI is a bug, not a feature. Even Slippery Sam knows better (and has publicly said so).
