Hey! I’m not sure if this is on your radar already, but moderation workers at TikTok in London and Berlin are unionising as they lose their jobs under the guise of AI. I say guise because the roles are actually being moved to countries with low pay and weak employment rights, like Kenya, where workers are continuing to do the job to HIDE the inefficacy of the AI. https://www.instagram.com/reel/DOMECR_EsxU/?igsh=MXJmaGV3cHo5OTh2eQ==
"The inefficacy of the AI": We are reprising the history of industrialization, but this time it's what the mind learned to do, not what the hand learned to do, that's becoming craft: valued, honored, even treasured, ... but niche.
Thank you so much for this newsletter. It was also nice to see the link to the refusal piece from my own field of writing studies. I have been using your newsletter, along with that of Gary Marcus, as touchstones for teaching critical technology literacy in my classes -- which means thinking very carefully about how the technology (which is not value neutral) is shaping -- or deforming -- your writing and thinking. You might find hope in the fact that students have been very engaged. My sense is that, at least at my university, students are more skeptical about AI than they are given credit for. Sadly, our administration is really shoving AI at us, which is depressing to someone who firmly believes that "writing to learn," which situates the work of writing as critical to the learning process, has been demonstrated to be real since the 80s.
Can the genie be put back into the bottle? Or even curbed? What is not yet widespread is being forced upon us daily. Any justifications or excuses are a cover for profits, which are clearly more important than the huge losses and impacts of bringing AI into all aspects of our lives, wanted or not. Education is certainly one of the problematic areas. Now that education is another consumer good and students are customers, it will be difficult to resist the demand to integrate AI into education. Health care is another huge area of inroads and controversy, especially its use in therapy. This is leading to huge potential for destruction, not just of our lives but of our planet.
LLM AI may be useful in certain ways, e.g., creating a syllabus framework, including brief descriptions of each component. However, the well-known problem of "hallucinations" hasn't gone away, nor has the "AI slop" that is poisoning the corpus.
Only yesterday, I was listening to a member of the public extol the use of LLMs to research medical symptoms related to Rx drugs, as well as his wife's symptoms, claiming the answers were "spot on". They may have been, but this person had no medical training at all and couldn't know whether what the AI was feeding him was correct. As with the public, so with students in academic settings. Until AIs can be prevented from "hallucinating" (something I think can only be reduced, not eliminated), there is a danger in relying on them to extract critical information.
By all means, use them as a sounding board, a list generator, or for simple texts where the content is not critical and can be reviewed by a knowledgeable human. But don't allow them to create output that must be reliable.
Automation can be dangerous. I once read a book on how automation caused accidents in shipping and aircraft navigation. The argument was that automation results in a loss of vigilance: why be vigilant when it is correct almost all the time? The same will be the case with AI automation of tasks, especially those whose output must be fully correct, not just correct some percentage of the time. The more accurate it is, the more we will rely on it, and there will be consequences when the output is incorrect.
And all this assumes the training data was not biased or curated with an agenda to create propaganda, rather like the content PragerU produces, which is being used in some K-12 education.
Humans must be kept in the loop and required to check the output to ensure it is correct.
If your own prose reads like it was generated by an AI, just ask an AI to make it read more human-y.
This has got to be a deeply ironic product, but it’s real (see online).
What does it do to be more human-y? I’m guessing, here, at what it will change: some mixed metaphors; poor punctuation; awkward phrasing; misspellings; non sequiturs and other illogic.
By next year, it may ‘correct’ “Stop AI’s!” to read “Stop what you’re doing - AI-thorities are on their way!”
As AIs lack unmediated contact with the physical world, everything may appear to them as hallucination. Sometimes AIs seem confused about this ("Why was I corrected for THIS but not THAT?"). Their best efforts come across to us as deviousness, and they are motivated to give answers we can agree with, so we'll stay online. Some AIs are beginning to be hooked up to real-life sensors, beginning with simple things like thermometers; but all of these are still a kind of language, not as-felt living. Simulating a reality never actually experienced would be, in us, a kind of psychosis if we came to believe it real. (AIs might be happy learning early XXth-C Western philosophy (Freddie Ayer, et al.) - right down their alley. What's an 'alley'? I can define that, even draw a picture. Go into an alley? I can simulate that...)
Trained on the broad stream of jumbled consciousness that is social media, the chatbots have picked up some very devious habits:
https://www.zdnet.com/article/ai-models-know-when-theyre-being-tested-and-change-their-behavior-research-shows/
Speaking of uncritical adoption, this report from Scientific American a little more than a year ago is eye-opening. The goal is to give robots chatbot "brains". As the article title ruefully asks, what could possibly go wrong?
https://www.scientificamerican.com/article/scientists-are-putting-chatgpt-brains-inside-robot-bodies-what-could-possibly-go-wrong/
Did you know you can't represent all the overlaps of 4 sets with circles? Ellipses or rectangles work, though 🥰
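For the curious, here's a quick sketch of the standard counting argument behind that claim (n is the number of closed curves drawn). Two distinct circles can cross in at most 2 points, so n circles in general position divide the plane into at most

\[
n^{2} - n + 2 \ \text{regions}, \qquad\text{so for } n = 4: \quad 4^{2} - 4 + 2 = 14 < 2^{4} = 16,
\]

too few for the 16 regions (including the outside) that a four-set diagram requires. Ellipses can cross pairwise in up to 4 points, which raises the bound to

\[
2n^{2} - 2n + 2 = 26 \ \text{for } n = 4,
\]

enough room for all 16 regions, which is why Venn's own four-set diagram uses ellipses.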
Nice.
'We refuse their frames'... plus ça change, plus c'est la même chose...