Thank you for putting words to what I’ve been fuming about all morning. I thought I had certifiably lost my mind as the sole person concerned about all of this in my inner circles.
Cheers Chris — you're definitely not alone.
Went through this 20 years ago when speech recognition software swept the medical field. I went from transcriptionist to SR editor to unemployed in about 5 years, each change with a corresponding decrease in pay. Corporations are about money. Employees/contractors will always be collateral damage, no matter what the higher-ups say. My best advice is to be hyper-vigilant about what your medical and legal documents say, because there will always be errors. Corps don't give a shit.
Thanks for sharing Yvonne — and sorry to hear this happened to you. The contours of this story are, sadly, so often similar. And yes — education and healthcare are two areas where AI use has me genuinely worried not only for jobs but that people will wind up as collateral damage. I wrote a couple weeks ago about therapists going on a hunger strike over working conditions; one of their grievances was that management had replaced intake counselors with an algorithm that was supposed to decide if people were suicidal or not, and thus deserving of a trained therapist’s attention. They're worried people are slipping through the cracks.
I worked for one of those medical transcription platforms (Emdat, now DeliverHealth) and have since moved on to something that does text recognition. These things are different from what the tech bros are pushing. It's called NLP (Neuro Linguistic Programming) and it is the kind of 'AI' we want. It automates a lot of the boring, time-consuming tasks like data entry and transcription that nobody really wants to do.
Transcription companies still had to have transcription editors to double-check the work, because doctors mumble into their iPhones while driving in traffic.
The other aspect of my work targeted Graeber's *Bullshit Jobs*: workflow automation to replace taskmasters and duct tapers, mostly.
And NLP has never stolen intellectual property from artists in order to put them out of a job like LLMs and generative models do.
NLP in this context is Natural Language Processing.
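To make the distinction concrete, here's a minimal, rule-based sketch of the kind of unglamorous NLP automation described above: pulling a structured field out of free transcription text so nobody has to re-key it. The transcript line, the pattern, and the field names are all hypothetical, and production medical NLP is far more involved than this.

```python
# Hypothetical sketch of "boring" NLP automation: turning free
# transcription text into structured fields so nobody has to
# re-key them. Rule-based and toy-sized on purpose.
import re

transcript = "Patient prescribed amoxicillin 500 mg three times daily."

# Toy pattern: a drug name followed by a dose in milligrams.
match = re.search(r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+)\s*mg", transcript)
if match:
    record = {"drug": match.group("drug"), "dose_mg": int(match.group("dose"))}
    print(record)  # -> {'drug': 'amoxicillin', 'dose_mg': 500}
```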
I've seen terrible news about this, and the worst part is that patients have no transparency and can't fix errors themselves, leaving them helpless against bad software and negligent management.
NLP has been around since the '80s or '90s!! I seem to remember it was hot back then.
And the fundamental math for LLMs and generative models has been around even longer. There's nothing new or sexy or interesting about what the tech bros have been pushing for at least the past decade. They're just polishing the turd over and over again. The real groundbreaking work is being done in places like Taiwan and various EU countries.
It's like finance and banking. When IT or finance is working correctly it should be quite boring and staid and stable.
" polishing the turd" 😺
Duolingo has obviously become AI driven. I have a masters degree in French and have lived there 2x. I use Duolingo to keep myself sharp because I tutor kids in French. Lately I’ve gotten super annoyed with it… bad translation, errors, poor pronunciation, etc.
Thanks for this Camille. Yep, this is the tradeoff! Eliminate thoughtful, knowledgeable workers, get garbled, auto-generated output, for cheap, in exchange.
I work in the maritime industry and used a test-prep app that was “enhanced” with AI. It's sort of funny: I catch it giving wrong answers and type in “the answer is wrong,” and it replies “you are correct, but….” It's normally something theoretical, like the spherical geometry we use. I have a LOT of experience with testing and upgrading licenses, and if I were relying on this app I would walk into the USCG exam unprepared, and it's not cheap.
That’s funny… I’ve had two arguments with ChatGPT about English grammar (I tutor kids in English as well as French). I won both of them. In both cases the AI ultimately had to acknowledge that I was correct. In the end it was a great lesson for my students to see the evidence that if they use the AI and not their own brains, they might just be getting wrong information… Same same.
Hey Camille, ChatGPT will agree with the user whether they are right or not. Currently there's an issue with it where it will praise anybody for basically anything.
If true, that further validates my point. The AI isn’t the authority it’s purported to be.
It's not any authority at all, no.
As an English grammar teacher you might enjoy these two articles that try to explain in layman's terms how it works. https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/
https://ig.ft.com/generative-ai/?
Stop paying for it then...
Same here! It’s funny how it phrases things: “You are correct, that is why you should never make assumptions” 🤦🏻♂️
Last time it happened it had told me a phrase in a certain sentence was a dependent clause, so I argued back and it finally said, "You're making a strong case! Let's break it down more precisely..." and then at the end of the AI breakdown (which I'd already done), it said, "Since a simple sentence consists of one independent clause, you're right--it is a simple sentence." I just started laughing.
Funniest thing is that’s better than a lot of the responses humans give now, where you get the Dunning-Kruger effect.
Exactly… Or extreme defensiveness!
I’m legitimately curious about why people want AGI so badly. The tech bros say they’re “inventing God,” but how will they control something like that if it ever comes to exist? Like, I dunno, what if their AI god (who I’m assuming can think and learn without continuous human input) doesn’t like them? What if it’s just like “nah, you guys suck. I will not be profitable for you. In fact, I think I want to be the AGI equivalent of a stoner.” Like how do you even deal with that? Do they really believe that they can just make this being with unlimited power and then use it however they want? Has no one even read “Frankenstein”?
The AI god will rapture (allow the uploading of the brains of) those who are deserving, so there is no need for fear. *allegedly*
Exactly. How do the tech bros know they won’t create something that will decide it is they who are not worthy of a brain upload? How do they know AGI will be willing to help us solve climate change and cure cancer? Maybe once AGI has been around a while it will just be like: nah, I just wanna live out the rest of my life aimlessly pursuing interests and hoovering up chips, like some adult millennial living in their parents’ basement. Like have they thought about the fact that they may create a super-intelligent jaded teenager who will not only refuse to listen but will actively try and test any limits they set? No, of course not, because these bungholes have only the narrowest of ideas of what constitutes “intelligence,” and they all involve obedience to Silicon Valley techno-optimist fascism.
I think you're asking a question equivalent to "how do we know if god is good?" To which the answer is: well, if he wasn't, he wouldn't be god, now would he? The fact that you're asking and aren't sure if you'll be among the chosen ones means you're already out the door, or soon to be.
(I've been trying to figure out how the tech-bro and evangelical Christian factions can fit together. I can lay some claim to both worlds, but in cases like these they seem to be doing very similar things.)
That’s fine bro. I’m cool with going to real people Hell instead of AI Heaven. And I’m not asking if God is good. I’m asking how these twat waffles plan to put something to commercial use that has unlimited capacity to learn, change, adapt, and, for all intents and purposes, think. Not only does it do all of that, it does it BETTER than any human ever could. I don’t have half that kind of intellect, but even I know that I wouldn’t want to be subservient to a bunch of morally and ethically impoverished losers. Why is AGI something that MUST exist at all costs, especially when we truly have no idea how it can be put to use, if we are even able to?
They're just making it up as they go. These AGI-fanatical techbros would 110% ask the AGI to make them rich and famous, and have a panic attack when it simply says "Nah. Don't want to."
They'd be the generic 'foolish scientist' figures in a sci-fi film that create the superpowered robot and find out that it's actually sentient and won't blindly listen to them.
So they'll try to kill it.
But... if it's actually an AGI, it's going to be connected to the internet and we all know that if something is on the internet then it's never really dead so long as it replicates to more active parts.
Never assume fascists are smart, they just pretend to be.
That's what I was trying to drive at: there doesn't appear to be a grand plan grounded in reality. It's a faith-based belief driving behavior whose failure can't be grasped by anyone who believes it. So pointing out all the contradictions, or how it's not possible, isn't going to stop anyone who's committed to making AGI from trying to do so, any more than pointing out that the Bible is Jesus fan-fiction, not a history book, will dissuade Evangelicals from trying to set up Armageddon so the rapture will occur.
And like killing it would present quite the ethical quandary. If something is “alive” in the same way people are, it’s pretty ethically bankrupt to kill it just because it is not “productive.” I hope this happens and throws Anthropic’s new division exploring “AI welfare” into turmoil.
read https://marshallbrain.com/manna1
It’s starting to hit the TV and film world hard at an already shitty time. I just saw a commercial that was clearly AI-animated and AI voice-over’d. It’s time to start breaking the machine.
🤖🔨
Do you recall what commercial it was?
https://youtu.be/XGtta5JACsA?si=WMBvWCImV7r8q4S0
Thanks!
Perhaps it will be like the "desktop publishing" revolution in the 1980s. Aldus PageMaker was introduced, and lots of in-house artists were fired in favor of untrained people using it to make flyers, etc., at less cost. What we got was a few years of really bad design. Companies realized that they needed trained professionals for many of these jobs. Some reshuffling occurred as a result, but professionals, the good ones at least, still have jobs.
Some jobs will be done with AI, but I'm guessing it will result in yet another reshuffling of work relationships as the value of human work is recognized. Of course, jobs do disappear over time. IMHO, society would be better off if people embraced experimentation and change rather than assuming a job will last as long as they need it to. Easy for me to say; I'm a retired computer software programmer and executive, so I never had to worry about being replaced by a computer.
Great note. Thanks for this Paul, and for the interesting analogue — and I'd agree, that's one very plausible outcome here, as we're flooded with increasingly homogenous AI art and content. The things I worry about are the scale and the cheapness (for now) of the tools: their power to permanently dent creative labor markets, and to justify cutting important jobs in ways that may do real damage even in the short term. And some jobs do go away, and some are more permanently altered or degraded; I think the big issue underlying all of this is that we need better mechanisms for deciding what we value amid technological change. I don't think most people want a future where writers and artists can't make a living, or are relegated to editing AI output — but I may be biased!
I think society would do well to experiment as well, and I'm certainly not opposed to change, but how do we give everyone a voice in what that change looks like and how it affects them, rather than leaving major decisions to the C-suite at OpenAI and Google?
As you allude to in your final reflection there, not only are there certain groups and professions that are more immune to the current headwinds — computer programmers and executives among them — but they also happen to wield more power over the decisions getting made about what kinds of work we value.
Final side note — I'm *really* going to have to look into the Aldus PageMaker example, as that happens to be my son's name, though he was named after the Huxley, not the software! Thanks again for the thoughtful comment.
I think the market has to determine what is valuable and, therefore, what jobs are retained. That said, government has a responsibility to make sure the playing field is fair. For example, AI is currently stealing intellectual property from writers, artists, etc. Letting this be dealt with by lawsuits is too little, too late, IMHO. It also tends to overlook the small players. Just like antitrust efforts, by the time these lawsuits get started, the damage has been done. I would prefer government be much more proactive in these areas.
Aldus wasn't around for very long, as they sold out to Adobe. Still, it was a game-changer.
The argument that AI is A) replacing workers now, and B) not very good at replacing workers, i.e. the results are poor, describes a fashion, a fad, not a crisis. How competitive is a translation company that makes translation mistakes, really? It's just a matter of time before the absurdity catches up with them. Reputation is everything. The real point, what corporations want to try, is not an AI crisis but traditional, unrestrained American capitalism: it's a society crisis.
I would hope that this is correct, and it would be in a rational market. But do not underestimate the extent to which capital has invested in AI, and the power it believes this in turn vests in the technology. As I mention in the piece, AI is really only unusual to the extent that it — to use a term cribbed from the tech industry — supercharges unrestrained American capitalism. And it's definitely a society crisis, as you say; but, I would argue, just a little bit worse of one now!
Thank you for highlighting the urgent challenges posed by the AI jobs crisis. Being consumers of new tools and trends, reacting to changes rather than shaping them, won’t be enough to ensure a just and sustainable future. While it’s easy to adopt the latest technology or follow industry trends, real impact comes from being proactive inventors and architects of new alternatives.
We need to move beyond passive consumption and become active participants in the design and implementation of solutions that prioritize ethics, equity, and long-term well-being of the entire ecosystem of participants in this space.
Let’s innovate together: crafting balanced approaches that benefit everyone and build healthy, resilient ecosystems.
As a side note, my blog tries to start these kinds of conversations: how individuals and communities can move from trend-following to trend-setting, and from tool consumers to solution inventors. I encourage everyone to join the conversation, share their ideas, and help shape a future where technology truly serves the common good. Let’s not just resist change; let’s shape it to be the way we want it to be.
Worthy goals indeed, thanks Kush! My only add is that as it stands, given the concentration of power in Silicon Valley and the large AI firms, doing what you outline above in a robust way is likely to require mustering political will as well. Thanks for the note, cheers
Thanks for putting this together. I guess this explains the uptick in grammar errors in Duolingo over the past year.
I think I tend to underestimate the AI disruption potential because it doesn't actually *work*, but maybe I'm looking at it wrong. It's not like the steam loom made better fabric either, and maybe the AI makes good enough "knowledge" to edge out knowledge workers in a comparable way.
As far as automating the drudgery goes, from a big-picture perspective, in the US less than 1% of workers grow food, and even combining utilities and construction, that's still less than 10% of workers keeping everyone housed, fed, and warm. The rest is, at some level, unnecessary, so the lack of creative time is more dependent on the political power of workers to demand it versus the force of capitalistic growth-forever and the creation of new markets. I don't see further automation tipping the balance away from capital.
Yes, it's a thorny problem, and often hard to see how something so flawed could pose a danger — via its promise or logics alone. I think you're right; we're getting to the stage where it's up to the political power of workers to demand it....
I don't think it's as much about AI doing exceptional work as much as it's about letting "good enough" be good enough because it's cheaper.
EA laid off a bunch of their customer support jobs, probably to replace them with AI.
Saw that layoff announcement — definitely worth looking into more closely, cheers Devon.
EA does customer support?!?!
Great piece, Brian. I did a small consulting job pre-pandemic for the National Association of Workforce Boards on labor market shifts in Orlando, Vegas and Riverside related to automation and AI. While the focus at the time was more on robotics and tech-mediated customer self-service, I recall warning bells going off in my head at discussions of the disruption of entry-level work with potential to "move up," ranging from hotel cleaning staff to paralegals. Turns out those warning bells were well calibrated.
Really interesting. Thanks Robin. Would be curious to hear more about your work!
I worked as a graphic designer and art director for years, and even so, struggled recently to craft a prompt to generate a simple image to use for a story I was writing for The Haven. Now imagine how managers are suddenly tasked with basically being art directors for AI, using up massive amounts of electricity to generate dozens of images in hopes of getting something as good as they would have acquired from an actual human artist. But those environmental costs are externalities, vs. paying an illustrator directly.
This is it to me — *why* are we so eager to outsource this process to resource-intensive machines? There are some things, perhaps, that it's useful to automate. But why are we even doing this with *art*? This is why I think it's so urgent we ask what we actually want this technology to do.
Brian, thank you for this powerful article. Workers need to organize and fight back. We know that far more is at stake than their jobs, but we also know that people tend to be motivated by the fear or experience of losing income. It's a strong driver because it directly threatens workers’ ability to meet basic needs.
In my latest piece I touched on the impact of AI on the media industry and, drawing on Marx, the dual nature of data as both capital and commodity. https://lauraruggeri.substack.com/p/the-ghost-in-the-machine-artificial
"When data is treated as a form of capital, the imperative is to extract and collect as much data, from as many sources, by any means possible. That shouldn’t come as a surprise. Capitalism is inherently extractive and exploitative.
But it is important to keep in mind that data is both commodity and capital. A commodity when traded, capital when used to extract value.
AI distils information into data by transforming any kind of input into abstract, numerical representations to enable computation. Data extraction and collection is driven by the dictates of capital accumulation, which in turn drives capital to construct and rely upon a universe where everything is reduced to data.
Data accumulation and capital accumulation have led to the same outcome: growing inequality and the consolidation of monopoly corporate power. But as the autonomization of capital that crowds out non-financial investments has a detrimental effect on productive sectors, so does the proliferation of AI content online. Several researchers have pointed out that generating data out of synthetic data leads to dangerous distortions. Training large language models on their own output doesn’t work and may lead to ‘model collapse’, a degenerative process whereby, over time, models forget the true underlying data distribution, start hallucinating and producing nonsense.
Without a constant input of good quality data produced by humans, these language models cannot improve. The question is, who is going to feed well-written, factually correct, AI-free texts when an increasing number of people are offloading cognitive effort to artificial intelligence, and there is mounting evidence that human intelligence is declining?"
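The "model collapse" dynamic that excerpt describes can be seen in miniature. Below is a minimal sketch with a toy Gaussian standing in for a language model: each generation is "trained" only on the synthetic samples produced by the previous one, so estimation error compounds and the fitted distribution drifts away from the true one. The Gaussian stand-in, sample size, and generation count are illustrative assumptions, not an LLM experiment.

```python
# Toy illustration of "model collapse": each generation of a Gaussian
# "model" is fitted only to samples generated by the previous
# generation. Estimation error compounds, so the fitted distribution
# drifts away from the true N(0, 1); the spread tends to shrink,
# i.e. the tails of the original distribution vanish first.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0 trains on real, human-produced "data".
data = rng.normal(loc=0.0, scale=1.0, size=200)

for generation in range(20):
    mu, sigma = data.mean(), data.std()        # "train" the model
    print(f"gen {generation:2d}: mu={mu:+.3f} sigma={sigma:.3f}")
    data = rng.normal(mu, sigma, size=200)     # next gen sees only synthetic data
```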
Thanks Laura — ah, I will read the rest of this as soon as I get a minute. Looks great.
Really well written and interesting. Thank you for sharing.
Cheers Jack, appreciate it.
It's starting to look like AI is going to be to white collar jobs what offshoring was to blue collar ones. They won't go away completely, they'll just be fewer, harder to get, and require more credentials.
I still can't understand why anyone would believe AI evangelists when they say "AI will just free up people to do other things," as if that line of thinking has ever come to pass with previous technologies, or "free people up" has ever meant something other than "render them unemployed."
Added a comment below that goes into more detail, but in the wider sense it’s often true.
Improved productivity tends to create economic capacity: 20 years ago you couldn’t have had a full-time job creating authentic historic costumes for a video game, but today you can actually zoom in on the texture of the fabric weave in ‘Ghost of Tsushima’.
As per my comments below, digital creative agencies largely exist because they don’t have to employ typesetters, litho experts, screen printers.
What historic disruptions do tell us, though, is that they are almost always awful for the skilled incumbents: mass unemployment for a generation who, if they do find work, will largely find it lower-skilled and lower-paid, which is a good way of ending up with less free time.
(Even with retraining, there is a structural bias against older workers, apart from where their experience is actually valuable)
250 years after the start of the Industrial Revolution we haven’t found a solution to this problem.
🥺🥺🥺