It's knives out for AI: Engineers, artists, educators and other workers are raising the alarm more loudly than ever before. Here are 8 of their must-read broadsides against the tech.
For a while now, I have thought (and suggested) we need to have a sort of 'required Hippocratic oath' for people working in IT. I proposed this in a talk at the BIL-T conference in 2020 (and DADD in 2021). My draft (better versions are of course possible) was:
As no society can exist without shared convictions, and as the most beneficial convictions are factually truthful ones, and as the convictions of members of society are strongly influenced by the information they consume, I declare:
• I will not work on systems that have the effect of damaging society by weakening the flow of factually truthful information or by amplifying the flow of factually untruthful or misleading information
• I will not work on systems that damage people’s security of mind
• I will not remain silent if I know of such systems being created or used
As Brian has so clearly pointed out, the evil of GenAI is more economic. Taking an ethical stance against that is in fact a political stance. And 'capital' will fight hard, as it did in the late 19th century and the first half of the 20th, and I suspect that fight was a major factor in the world wars that followed.
The honour system has not worked out for us regarding the medical mafia, but it is worth a try.
So... we can write songs, record and perform them ... and be better off selling original recordings offline... 🤔👋... what was the point of the internet again?
Surveillance and manipulation were the points of the internet.
That was my concluding thought exactly!
When I was a kid in the 20th century, we learnt about Luddites. There was not much context given, and my impression was that they just hated technology. It turns out Luddism was a political movement fighting against craftsmen losing their jobs. The machines could produce fabric and textiles much faster, but the quality was shit. Those damn Luddites didn't want to starve to death!
Also, the Amish are onto something there.
Good point!
I really enjoyed Alkhatib's piece, refreshing. For the illustrators, as with so many others, I sometimes worry that the focus on AI as a new, special thing people need to fight against keeps us from talking about the overarching, long-in-the-making issues that undergird it. The visual logic embedded in genAI models is based on an objectifying and commodifying understanding of what images are and what they're for, one that has long been contentious in the creative industry, particularly in advertising and branding. I think any analysis that tries to criticize the human-like prowess of "AI" needs to include an analysis of how people's lives and work have been mechanized and objectified for a long time. Unfortunately, in a field like commercial art, the lack of unions and organizing has made it really hard to develop this kind of reflexive, big-picture thinking about labor.
Anyway, looking forward to reading more of these! Thanks for the list!
See Ed Zitron: https://www.wheresyoured.at/sam-altman-is-full-of-shit/
That one is a great article!
As a teacher, "Keep ChatGPT out of the classroom" resonates with me. This technology is a solution in search of a problem. Advocates of LLMs in class will say "use it to create an outline", but the first step in writing is thinking about what you will write! Thinking is fundamental to all writing, and the less of it we do, the lousier the results will be.
Or even better: "use it to brainstorm"! Should we pretend to forget what the first syllable of that word refers to? Reading someone else's list of "ideas" is not a form of brainstorming.
The awful lesson being implied is: when you're stumped or confused or having a hard time thinking about anything, don't persevere. Give up and let a robot think for you.
Agreed. I have also heard it pushed as a "time saver" for us teachers. It's frustrating. We are already so overworked, and this is their only solution? At that point, AI just represents the collective intellectual work of humanity tortured into its most abstracted form.
A.I. is going to lower I.Q., and quickly, if people do not get out of this stupid, childlike loop. Your mind is a muscle: use it or lose it.
We might hope for a comeback via a form of 'Verelendungstheorie' (immiseration theory). When getting information becomes like wading through a sea of slop, real editorial services may become very valuable again. But in the meantime, the damage being done is disheartening.
https://en.wikipedia.org/wiki/Immiseration_thesis
Do read 'The Library of Babel' by Borges!
"It has become standard to describe A.I. as a tool. I argue that this framing is incorrect. It does not aid in the completion of a task. It completes the task for you."
Exactly.
It drives me nuts when people compare Midjourney to graphics tablets or Photoshop.
It shows not only a failure to understand anything about GenAI, but also keeps alive the old cliché that "the computer draws for you" when it comes to digital illustrators.
No: if I use a Wacom or CSP, it is still me drawing, just by other means; if I use Dall-E or something similar, I am using a replacement. That's the whole point.
You might appreciate this podcast on A.I.:
https://soberchristiangentlemanpodcast.substack.com/p/ai-deception-s1-ep-1-of-3
GenAI delenda est. The only question it answers is "How can power and money be further concentrated at the top?"
The large-model market has already been divided up. Currently, there are only three independent companies: OpenAI, Anthropic, and xAI. OpenAI is valued at hundreds of billions of dollars, while Anthropic and xAI each have valuations of around $20 billion.
The remaining opportunities lie in the development of AI applications on the commercial side.
The first part of Cal Newport's latest podcast has a great explanation of what AI can and cannot do. It brings some light to a discussion with far too much heat. https://www.thedeeplife.com/podcasts/episodes/ep-306-defusing-ai-panic/
Great read! I hate AI. It's already been massively harmful to way too many people. Israel is using Lavender; it's all bad. Cops are using it. Revenge porn just became streamlined, and so did kiddie porn. Journalists and artists are losing their jobs and having their work stolen.