4 Comments

Never has a race to the bottom of oblivion been more apparent.

Looks like a conflict of interest to me

Look ma, no scientists!

Okay, not quite. I see they included Andrew Yao, although he's more an academic politician than a practicing scientist.

So, none of the faculty members, postdoctoral fellows, or graduate students who could credibly claim to be making progress toward genuine artificial intelligence* are on this ridiculous list. (E.g., one of many who could reasonably be there is Pentti Kanerva, whom I mention by name because he happened to be a colleague of mine at the Swedish Institute of Computer Science and I like his work.)

Granted, I suppose few actual scientists are "influential" with respect to what passes for AI at present, because the latter is mostly a hollow fraud touted by hollow frauds like Sam Altman; hence, few actual scientists are very interested in it.

*In a piece published yesterday at The Markup, Tomas Apodaca noted that LLMs like ChatGPT routinely "produce text that is untethered from reality":

https://themarkup.org/hello-world/2024/09/07/how-im-trying-to-use-generative-ai-as-a-journalism-engineer-ethically

Of course, that's because such models practically *are* untethered from reality. They're models of documents or images, which are themselves models of reality only in some very loose, vague sense.

By "genuine artificial intelligence", I mean systems with far deeper, richer models of the world (or appreciable subsets of it) and capacities for learning from experiences in the world, as humans, dogs, cats, fish, and even fruitflies and nematodes do (without anything resembling human language, by the way). Calling systems that lack such capacities "intelligent" is preposterous.

AI is a pyramid scheme
