22 Comments

What might happen is what happened with the internet. What was hoped for (and touted incessantly) was a world in which everyone would have free access to good information (and could even participate in decision-making based on it). What we got was social media, where we are not in the driving seat but are the product to be sold, and the 'good' that comes from it (and there is 'good') is a side effect. The question then becomes: what evil outcomes (follow the money is a good start) might we get? And will the net effect (insofar as that phrase means anything here) be a positive one?

Note: I assume you know snake oil originally came from Chinese water snakes and probably had some actual effect. It was *fake* snake oil that was the problem. There may be a usable analogy here too: GenAI will provide us with usable stuff, but it is peddled as a cure-all, just as *fake* snake oil (made from rattlesnakes) was.


"(Sometimes you stumble onto something so ripe with resonance you wish you could go back and add it retroactively to your book — would love to be able to add in a bit about phantasmagoria and magic lanterns to Blood in the Machine, but alas)."

And sometimes you stumble onto something that could be the seed of your next book?


Indeed!


The AI that Israel is using to indiscriminately bomb Palestine, Lavender, is the true threat of AI. No one is talking about it. 🧐🤔


Totally. IN FACT: I had a graf on this I wanted to include, but the system said the piece was so long it would cut off the end, so I had to chop the last three grafs. Here it is:

In a more horrifying example, further details broke this month about the Israeli Defense Forces' use of AI to select targets. We knew the IDF was using AI systems, but we did not know the extent to which it was using them to justify target selection, or how terrifyingly frequently it was doing so. The AI was less powerful for the technological capabilities it proffered the military than for the leeway it granted the army to carry out its objectives. As with the phantasmagorias, the technology is necessary, yet arguably the least important element present; it helps create a useful illusion, but how that illusion is used is up to the operators of the machinery. Again the AI was smoke and mirrors: an analogical demonstration for the rank and file of how the IDF leadership believed its war should be carried out. For undemonstrable philosophies.


Thank you for adding them. The MSM isn't covering what the true threat of AI actually is. Don't think for a second it won't be used here against POC and the LGBTQ community.


I just don't get that. It's like saying "This school shooter used an AR-15 with a bump stock to shoot up a school. Clearly that means we need to ban the bump stock!" Like, sure, the AI definitely helps Israel commit war crimes. But... you know what *really* helps Israel commit war crimes? Israel itself.


Police used an AI that was trained on data provided by police, and that AI produces racist results. How is that surprising? (A toy sketch of that loop is below.)

You know the common copypasta of "despite making up only 13% of the population..." is fundamentally flawed, right?
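
A toy sketch of the loop described in this comment, with entirely invented numbers (the district names, rates, and patrol shares are hypothetical and model no real system): when allocations are retrained on records that already reflect where enforcement was concentrated, the recorded data reproduces that concentration year after year, even though the underlying rates are set equal.

```python
# Toy illustration of a data feedback loop. All numbers are invented;
# this models no real department or dataset.

TRUE_INCIDENT_RATE = {"district_a": 0.05, "district_b": 0.05}  # equal by construction
patrol_share = {"district_a": 0.70, "district_b": 0.30}        # historically skewed deployment

for year in range(1, 6):
    # Recorded incidents depend on where patrols are, not just on the true rate.
    recorded = {d: TRUE_INCIDENT_RATE[d] * patrol_share[d] for d in patrol_share}
    total = sum(recorded.values())
    # "Retraining" on the records sends next year's patrols where the records point.
    patrol_share = {d: recorded[d] / total for d in recorded}
    a_share = recorded["district_a"] / total
    print(f"year {year}: district_a has {a_share:.0%} of recorded incidents "
          f"despite an identical true rate")
```

In this toy version the skew never grows, but it never corrects either: the model keeps rediscovering its own deployment history and presenting it as a fact about the districts.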


You... genuinely believe that copypasta? I'm not sure what to say...


There was an article last week about the success of the USAF in deploying an AI-controlled fighter (the X-62, a heavily modded F-16) to participate in mock dogfights with a human-flown F-16. No word on who won, but this is a huge step beyond using AIs for targeting.


And not a good thing for humanity.


Maybe our problem is that so much of our culture's orientation toward technology has been rooted in war for the last couple of hundred years or more. At least since 1940, American civilization has been almost entirely enfolded in militarized technology and its offshoots. We kid ourselves if we think we are not part of that; everything we do is in some way related to perpetuating it.


There is one venue where the new Phantasmagoria has let loose actual demons and monsters --

the modern classroom!

AI in the classroom has turned our colleges into robo-colleges and diploma mills, and worse. Student assessment is worthless, as argued in the recent piece linked below.

What's happened is that students in asymmetrical power relations have been given access to the equivalent of tactical atomic weapons. No one grasped the significance of this reversal of power relations because they never appreciated the extent to which the entire education system was premised on a necessary imbalance.

My worry is the destabilization of credential markets themselves, and threats to nation-states that are closely tied to their credentialing systems. Credential inflation has already created $1.7 trillion in student loan debt, with recent graduate underemployment running at more than 50%.

Digital "magic lanterns" have certainly created a new will-o-wisp for investors to chase after, but asset bubbles have always driven fluctuations in financial markets, and greater-fool musical chairs eventually stabilizes.

But ChatGPT in the classroom goes beyond this in ways little understood. See below.

-----------------------------------------------------------------------------

"The data we are collecting right now are literally worthless. These same trends implicate all data gathered from December 2022 through the present. So, for instance, if you are conducting a five-year program review for institutional accreditation you should separate the data from before the fall 2022 term and evaluate it independently. Whether you are evaluating writing, STEM outputs, coding, or anything else, you are now looking at some combination of student/AI work."

https://www.insidehighered.com/opinion/views/2024/03/28/assessment-student-learning-broken-opinion

Generative AI, like ChatGPT, has broken assessment of student learning in an assignment like this. ChatGPT can meet or exceed students’ outcomes in mere seconds. Before fall 2022 and the release of ChatGPT, students struggled to define the sociological imagination, so a key response was to copy and paste boilerplate feedback to a majority of the students with further discussion in class. This spring, in a section of 27 students, 26 nailed the definition perfectly. There is no way to know whether students used ChatGPT, but the outcomes were strikingly different between the pre- and post-AI era.
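
To make the accreditation advice above concrete, here is a minimal sketch in Python of the recommended split, assuming hypothetical record fields (`term_start`, `score`) and entirely made-up numbers: pre- and post-December 2022 assessment data are summarized separately, so a five-year average never blends the two eras.

```python
from dataclasses import dataclass
from datetime import date
from statistics import mean

# ChatGPT's public release; the article treats December 2022 onward as a separate era.
AI_ERA_START = date(2022, 12, 1)

@dataclass
class AssessmentRecord:
    term_start: date   # hypothetical field: first day of the term
    score: float       # hypothetical field: rubric score, 0-100

def split_cohorts(records):
    """Separate pre-AI and post-AI assessment data so they are never averaged together."""
    pre = [r for r in records if r.term_start < AI_ERA_START]
    post = [r for r in records if r.term_start >= AI_ERA_START]
    return pre, post

def summarize(label, cohort):
    if not cohort:
        print(f"{label}: no records")
        return
    print(f"{label}: n={len(cohort)}, mean score={mean(r.score for r in cohort):.1f}")

if __name__ == "__main__":
    # Entirely made-up numbers, only to show the shape of the comparison.
    records = [
        AssessmentRecord(date(2021, 9, 1), 62.0),
        AssessmentRecord(date(2022, 1, 18), 58.5),
        AssessmentRecord(date(2023, 1, 17), 96.0),
        AssessmentRecord(date(2024, 1, 16), 98.0),
    ]
    pre, post = split_cohorts(records)
    summarize("Pre-AI cohort (before Dec 2022)", pre)
    summarize("Post-AI cohort (Dec 2022 onward)", post)
```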


I see 4 levels of corruption:

0- art for art's sake: you're going for the best, with no regard for power/money/sex/fame

1- search for profit while still more or less trying to perform an honest service (the "old media" model)

2- search for profit, disregarding anything else. If lies and madness sell, you sell lies and madness. (the "religion" model)

3- actively using tools to sabotage a company/country/social model (the "5th column" model).

The issue is, we'll get all 4 levels, and may lack the education and ethical compass to make the right choices.


Quite a good take and not one commonly expressed on the subject. I’ve shared it with my colleagues.


If it’s a computer routine, it will always be true: GARBAGE IN, GARBAGE OUT.

If it’s the internet, it will always be true: bring down the system, BYE BYE BITCOIN!


Good article. Are we heading for the "trough of disillusionment" once again?


Such a sobering read, thank you!


This is excellent. I’m in!


I talked to someone about my small-business finance class. This person, who I'd assumed had already taken such a class since they're a successful small-business owner, shared that they'd recently downloaded ChatGPT and used the natural-language interface to ask it to generate spending plans based on their last three months of records. And ChatGPT indeed generated *a* spending plan. And the person did not check whether it was accurate. I can't blame them for not knowing better, given how hardcore the hype has been and given that ChatGPT ships with no disclaimer that "this is not a calculator and is prone to mathematical mistakes."
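
For contrast, the baseline arithmetic such a plan should be checked against is simple and deterministic. Here is a minimal sketch in Python, assuming hypothetical categories and amounts (none of the figures below come from the comment): it just averages three months of records per category, the kind of calculator-style check an AI-generated plan still needs.

```python
from collections import defaultdict

def spending_plan(transactions, months=3):
    """Average the last `months` of spending per category and propose a monthly budget.
    A plain calculator-style baseline for checking an AI-generated plan."""
    by_category = defaultdict(float)
    for t in transactions:
        by_category[t["category"]] += t["amount"]
    return {cat: round(total / months, 2) for cat, total in by_category.items()}

if __name__ == "__main__":
    # Hypothetical three months of records, purely illustrative.
    records = [
        {"category": "rent", "amount": 1200.00},      # month 1
        {"category": "rent", "amount": 1200.00},      # month 2
        {"category": "rent", "amount": 1200.00},      # month 3
        {"category": "supplies", "amount": 310.25},
        {"category": "supplies", "amount": 502.15},
        {"category": "payroll", "amount": 3100.00},
        {"category": "payroll", "amount": 3100.00},
        {"category": "payroll", "amount": 3100.00},
    ]
    plan = spending_plan(records, months=3)
    for category, monthly in sorted(plan.items()):
        print(f"{category:<10} ${monthly:,.2f}/month")
    print(f"{'total':<10} ${sum(plan.values()):,.2f}/month")
```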
