6 Comments
Mordechai Rorvig

Agreed that narratives are important, and that they are often spun to be used against the common people. Agreed that AGI is probably being used to some degree as a marketing tactic, or at least that it is being oversold and overhyped, especially when its risks and problems go unacknowledged.

On the other hand, I think we run the risk of oversimplifying when we try to put forward any one single narrative. We don't want to replace one false narrative (that AI will help us all) with another false, oversimplified narrative: that AI is just a gimmick, or that AGI is not a buildable technology somewhere on the horizon, and possibly somewhat close at that.

Sarah Smith

AGI is not a buildable technology. And it is not close. AGI, as defined in the article, is a term that defines a machine's usefulness by comparing it to human capabilities. It has always been nonsense: comparing computers to humans is idiotic, like comparing a human farmer to a combine harvester. Machines, especially LLMs, are fast, can process in bulk, and are dumb. They crunch floating-point operations at massive scale. But the "P" in GPT means pre-trained: they're out of date by the time they're deployed. They average, and they predict next tokens. That's it. There's no path to "Data" from Star Trek here.
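To be concrete about what "predict next tokens" means, here is a minimal sketch; the vocabulary, logits, and greedy decoding below are invented for illustration and come from no real model:

```python
import numpy as np

# Minimal sketch of next-token prediction. The vocabulary and logits are
# made up; a real LLM has ~100k tokens and billions of fixed, pre-trained
# weights producing these scores.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([0.5, 1.0, 2.2, 0.1, 1.4])   # model's raw scores

def softmax(x):
    e = np.exp(x - x.max())                    # subtract max for stability
    return e / e.sum()

probs = softmax(logits)                        # distribution over next token
next_token = vocab[int(np.argmax(probs))]      # greedy decoding: take the mode
print(dict(zip(vocab, probs.round(3))), "->", next_token)
```

That is the whole inference loop, applied one token at a time.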

Mordechai Rorvig

I'm not sure if there would be a very productive discussion to be had here, as it sounds like we have pretty fundamental disagreements that would be hard to unpack in a Substack comment section. But what is a human? A body driven by a brain and a nervous system. What is a brain? It is a biological machine that came about through the very, very long processes of evolution. Brains are machines.

Are brains comparable to AI programs? Well, functionally, of course they are. Just as we can compare a calculator to a human mathematician, even though their similarities are mostly superficial, or algorithmic.

But I think you're questioning if they're comparable (or if they ever would be) on a more meaningful level. Here, again, you need to look at neuroscience research. There, you find that the current leading models of the most important large scale brain regions, like the visual cortex, are deep neural network programs, closely related to commercial AI programs.

Now, most neuroscientists do not agree that deep neural network models of the visual cortex or the language network are comparable to cortical simulations in their realism. But many do, and there is a very substantial body of evidence to support such an interpretation. I wrote a book about it.
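To give a sense of how those correspondences are measured: the standard method fits a linear map from a network layer's activations to recorded neural responses and scores it on held-out stimuli. The sketch below uses synthetic stand-in data, not a real experiment; in actual studies the activations come from trained networks and the responses from recordings in, say, visual cortex:

```python
import numpy as np

# Hedged sketch of the standard model-brain comparison: fit a linear map
# from a network layer's activations to neural responses, then score
# predictions on held-out stimuli. All data here is synthetic stand-in.
rng = np.random.default_rng(0)
n_images, n_units, n_neurons = 200, 50, 10
layer_acts = rng.normal(size=(n_images, n_units))   # stand-in network layer
true_map = rng.normal(size=(n_units, n_neurons))
neural = layer_acts @ true_map + rng.normal(scale=0.5, size=(n_images, n_neurons))

train, test = slice(0, 150), slice(150, 200)
# Least-squares fit of the linear map on the training images
W, *_ = np.linalg.lstsq(layer_acts[train], neural[train], rcond=None)
pred = layer_acts[test] @ W
# Per-neuron correlation between predicted and held-out responses
r = [np.corrcoef(pred[:, i], neural[test][:, i])[0, 1] for i in range(n_neurons)]
print("mean held-out correlation:", np.round(np.mean(r), 3))
```

High held-out correlations, layer by layer and region by region, are the kind of evidence I mean.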

Sarah Smith

Visual cortex models: computer vision is done by convolutional neural networks (CNNs), and they are classifiers. They're not LLMs, and CNNs are not proposed as a path to AGI the way LLMs are. My award-winning, funded AI startup used CNNs.
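For anyone who hasn't seen one, a CNN classifier in sketch form; the layer sizes and ten-class output below are illustrative, not from any particular system:

```python
import torch
import torch.nn as nn

# Minimal sketch of a CNN image classifier: convolution layers extract
# features, a linear head assigns class scores. Shapes are illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3-channel image in
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 10),                           # scores for 10 classes
)
scores = model(torch.randn(1, 3, 64, 64))        # one fake 64x64 image
print(scores.argmax(dim=1))                      # predicted class index
```

Features in, class label out. Useful and narrow.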

Mordechai Rorvig

CNNs are artificial neural networks, as are language models (usually built on transformer architectures). Among computational and cognitive neuroscientists, language models are the leading models of the language cortex. That is just the research, not a personal opinion. I encourage you to take a look; it's very interesting, and it has indeed been groundbreaking to discover that these models wind up having extremely deep correspondences with large-scale brain regions.

Sarah Smith

AI as a field set out decades ago to mimic some aspects of brains. The extent to which its models now mimic those aspects successfully is not proof of anything except AI researchers fulfilling their own prophecies and smuggling their own assumptions into their findings. Correlation is not causation.

My statement is that AGI won't be reached soon, and not via LLMs.

I say it makes no sense to compare human capabilities (part of the AGI definition) to machine capabilities. I'm not interested in reductive arguments from composition; I'm talking about capability: what they can do.

Researchers can mess about with fMRI, as they have done since I worked as an engineer alongside neuroscientists in a Human Factors lab 20 years ago, and go "ooo" and "ahh" at patterns of activation. Broca's area is relatively very small. I smell confirmation bias by the truckload.

A scarecrow looks a bit like a human, but it doesn't have the capabilities of a human, and the similarities are not instructive for understanding either one.

Gen AI vendors won't get closer to AGI using machines powered by nothing but multi-layer perceptrons and attention, no matter how closely they imagine those impoverished mathematical equations (which is all they are) compare to a human neurone. A tensor passing through a neural network is mathematics, not a useful model of the brain. The "neural" here is branding, marketing.
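To make that literal, here is scaled dot-product attention, the core equation of a transformer, written out; the shapes below are arbitrary:

```python
import numpy as np

# Scaled dot-product attention is a few lines of linear algebra.
# Q, K, V here are random stand-ins; no claim about brains is made.
def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # pairwise token similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # weighted average of values

rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)                      # (4, 8): one vector per token
```

A softmax-weighted average of value vectors; that is the whole mechanism.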

But that is irrelevant.

A world model is required. Humans have them, and we are great at keeping them up to date. We have no idea what an AGI world model is, or how to build one, and it won't be done in my lifetime or yours.

The current crop of pre-trained (non-updateable) LLMs are, to an order-of-magnitude approximation, lossy compressed storage with a chat API on top.
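The compression framing can be made concrete: a model that assigns probability p to the next token that actually occurs can encode it in about -log2(p) bits (the idea behind arithmetic coding), so better prediction is literally better compression. The probabilities below are invented:

```python
import numpy as np

# Code length for one token under a probabilistic model: -log2(p) bits.
# Better next-token prediction means shorter codes, i.e. better compression.
p_good_model = 0.50   # model that predicts the actual next token well
p_bad_model  = 0.01   # model that predicts it poorly
print("bits needed, good model:", -np.log2(p_good_model))  # 1.0 bit
print("bits needed, bad model: ", -np.log2(p_bad_model))   # ~6.64 bits
```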

They predict next tokens using a set of weights computed from stolen data, amusing the gullible, and enriching oligarchs at massive scale.

No brains here.