Hey! I’m not sure if this is on your radar already but moderation workers at TikTok in London and Berlin are unionising as they’re losing their jobs under the guise of AI. I say guise because the roles are actually being moved to low paid, low employment rights countries like Kenya, where these workers are continuing to work to HIDE the inefficacy of the AI. https://www.instagram.com/reel/DOMECR_EsxU/?igsh=MXJmaGV3cHo5OTh2eQ==
Can the genie be put back into the bottle? Or even curbed? What is not yet widespread is being forced upon us daily. Any justifications or excuses are a cover for profits, which are clearly more important than the huge losses and impacts of bringing AI into all aspects of our lives, wanted or not. Education is certainly one of the problematic areas. Now that education is another consumer good and students are customers, it will be difficult to resist the demand to integrate AI into it. Health care is another huge area of inroads and controversy, especially the use in therapy. This is leading to huge potential for destruction, not just of our lives, but of our planet.
LLM AI may be useful in certain ways, e.g., creating a syllabus framework, including brief descriptions of each component. However, the well-known problem of "hallucinations" hasn't gone away, nor has the "AI slop" that is poisoning the corpus.
Only yesterday, I was listening to a member of the public extol the use of LLMs to research medical symptoms related to Rx drugs and his wife's symptoms, claiming the answers were "spot on". They may have been, but this person had no medical training at all and couldn't know whether what the AI was feeding him was correct. As with the public, so with students in academic settings. Until AIs can be prevented from "hallucinating" (something I think can only be reduced, not eliminated), there is a danger in relying on them to extract critical information.
By all means, use them as a sounding board, a list generator, or for simple texts where the content is not critical and can be reviewed by a knowledgeable human. But don't allow them to create output that must be reliable.
Automation can be dangerous. I once read a book on how automation caused accidents in shipping and aircraft navigation. The argument was that automation results in loss of vigilance: why be vigilant when it is correct almost all the time? The same will be true of AI automation of tasks, especially those whose output must be correct, not merely correct some percentage of the time. The more accurate it is, the more we will rely on it, and there will be consequences when the output is incorrect.
And all this assumes the training data was not biased or shaped by an agenda to create propaganda, rather like the content PragerU produces, which is being used in some K-12 education.
Humans must be kept in the loop and required to check the output to ensure it is correct.
Nice.
'We refuse their frames'... plus ça change, plus c'est la même chose...