As a retired software engineer, I got interested in the claims Microsoft was making about the ability of Copilot to write code, so I tested it. It failed, rather spectacularly, in tests on writing well-known algorithms in Swift, C++, and even pseudocode. In one case it left out half the algorithm; in another it emitted random gibberish instead of correct syntax at one point. Certainly not ready for serious use.
As a father, I’ve had to watch my younger son be turfed out of his career as a freelance graphic designer. The jobs just dried up and disappeared, as generative AI took over cheaply but poorly.
But as a longtime believer in fairness, and a believer that we now have the ability to create a post-scarcity economy for every member of the human race, just not the will to do it, I think that even as we protect the lives and livelihoods of workers against encroachment by AI replacements, we need to be very concerned about the large number of people, especially in the Global South, who work for a few dollars a day to make the AI datasets work and to put guardrails on them. Without them the mechanical Turk won't work, but the AI companies, mostly working through third-party entities to avoid accountability, want to squeeze them even harder than the workers they employ directly.
And as someone who wants there to be a human-habitable planet for my granddaughters to inherit, I think we need to find ways to limit the use of energy and water to power the training of the AI engines that seem to be so profitable for just a few people.
How do we do all these things at once? Damned if I know, but I’ll keep thinking about it, and I’m definitely open to suggestions.
Hear, hear. Thanks for all this, Bruce, and this is absolutely the challenge — or interlocking set of challenges — before us. And 1,000%, we need to do a lot more to lift up the workers around the world who create and clean the datasets, and those who test the outputs. A hell of a project, indeed, but I do think there are ways forward here...
I don't know about you, but I'm not demanding a right to get paid to work for someone else; I'm demanding the right to be able to live comfortably even if nobody cares to hire me for enough $$$ to live comfortably on, or even if I am, for whatever reason, unable to work, or even if I'm busy doing something nobody wants to pay me to do, like raising my own kids: https://diy.rootsaction.org/petitions/end-poverty-demand-a-ubi-equal-to-what-congress-pays-itself
I also agree that our ability to live with dignity should not be tethered to these factors! I am all for transformative reforms, and I'm not opposed to UBI, especially if it's not a pittance that risks further stratifying an underclass.
It seems to me that Congress has a clear idea of what the cost of living really is.
The 'teacherless classroom' reference reminds me of the book 'MEGAMISTAKES: Forecasting and the Myth of Rapid Technological Change' by Steven P. Schnaars. He illustrates quite convincingly that our expectations of what the 'solution' will be for certain issues (like education) reflect the Zeitgeist more than what these technologies actually can do. Poster-child example from the book: when the 'technologies of the day' were TV and jet engines, a 'solution' for education was proposed that would put TV transmitters on jet planes so 'all students could profit from having the best teachers'. Now, this obviously wasn't a solution for better education, but it was how the Zeitgeist manifested itself in the education domain.
Around 2013, a rather naive public figure in The Netherlands launched the concept of 'iPad schools' (formally labeled 'Steve Jobs Schools', but the idea was that iPads would radically transform education and improve it). This failed rather spectacularly when it turned out that the children were actually worse off.
Schnaars mentions that IT is actually an exception to the many overhyped trends (thus 'megamistakes') in that most technologies were overhyped, but IT was actually underestimated ("the world needs maybe three computers").
I should add that I suspect that GenAI will disrupt (with 'cheap') and that Brian's observation is a key insight. It is also not 'copying the art', it is more like 'cloning the (collective) artisans' (which is new). See https://ea.rna.nl/2024/07/27/generative-ai-doesnt-copy-art-it-clones-the-artisans-cheaply/
Yeah, more even than the debate over art, the case for generative AI in the classrooms seems clearer cut to me — outside of, say, computer classes, or specialized classes in higher learning, it is an obvious nonstarter to me, and should be banned from classrooms where it can seriously inhibit the development of critical thinking.
Maybe we will be somewhat saved by the fact that human brains generally 'look for' stimulation/surprise while GenAI produces cheap 'average blandness' and GenAI may soon be seen as very 'uncool'. One can hope. (Probably not though).
It’s interesting, I have been head down in a blog post on UBI, so when I read “tech works for us” I interpreted it from the perspective of technology literally working — assuming our job roles — for us so we can focus on different, more creative and fulfilling things.
I would love that! That is a totally reasonable way to look at this — still, the question remains: how do we get there and ensure that's the case?
I'm old enough to remember when the computer was going to do what AI says it will do...and the work week got longer. Remember when the paperless office was right around the corner? Uh-huh, now we've got more of it than ever.
AI is a tool, but it's a dumb (as in not very smart) tool, ultimately, that has to be fed and oriented, and that takes far more work than the advocates (who don't have to do the scut work) claim. It only "knows" what someone shows it, only "understands" in ways that the humans tell it to. We haven't made "intelligence" quite as much as we've made a very fast, very clever program that only "knows" what it's told. Sure, it can "learn" what it is told to learn, but no more.
Everyone knows that if you really want to screw something up, you need a computer. AI is that in spades. AI will take more work, ultimately, than any other computer program, not just to train or create but to control, because of the faith the great unwashed put in its efficacy. It's been way oversold already.
Pie-in-the-sky fantasies about paid health care, 4-day workweeks, and 48-week work years are completely unrelated to AI. There is no connection. Indeed, expect the 7-day work week very soon.
Haha, yes indeed! I agree *if we allow things to continue on the current course* as dictated by AI companies. Every supposed gain—days off, 8-hour workdays, weekends, etc.—has been the result of the labor movement pushing back against industrialization, and arguing, in tandem with new technologically abetted productivity gains, that workers should get a share. I'm just saying we should turn the logic espoused by Silicon Valley on its head and say, 'Well, if you say this will make everyone so productive, then here's a measure to ensure everyone benefits,' precisely because we *know* that if we don't organize and speak out for such ends, SV and the tech companies will be all too happy to hoard any of the gains (if they come at all!).
Great piece. A while back I read the book Omon Ra, a dark Russian satire of the Soviet Union that I think applies equally well to the technocratic Silicon Valley control of so much of US society.
I keep thinking about the book every few weeks, and about how its main critique, of Soviet Communism treating people as literal cogs in a machine, applies just as well to much of capitalist technological change, as owners seek to cover up the human cogs they use. Even AI is built on millions of hours of underpaid labor, with human beings basically checking every possible output and combination from LLMs and rating them, rather than being some sort of massive advance in technology that is making life easier for workers.
My review of Omon Ra for anyone interested: https://open.substack.com/pub/bathruminations/p/review-of-omon-ra
Thanks Sean — fascinating, will check that out for sure.
I think the protections need to start before AI is deployed. Start at the beginning: how is AI being trained? Early deployed versions could not distinguish the faces of POC or women as readily as those of white males. Bias is being built in.
Agreed — there should be much more transparency here, and just letting Congress see it is insufficient. There are bias and copyright concerns here alike. Yet artists are fighting for this, as I wrote about in an earlier newsletter, and at least some of these companies' models are now potentially about to be opened up in discovery. So yeah, some hope for change there... need to keep pushing on this for sure.
A lot of companies are wising up to this. Google itself has a guide sheet on responsible AI practices that talks about understanding the limitations of your training model:
https://ai.google/responsibility/responsible-ai-practices/
Some excerpts:
- Machine learning models today are largely a reflection of the patterns of their training data. It is therefore important to communicate the scope and coverage of the training, hence clarifying the capability and limitations of the models.
- A model trained to detect correlations should not be used to make causal inferences, or imply that it can.
The OECD also has language around AI transparency:
https://oecd.ai/en/dashboards/ai-principles/P7
Specifically, to your point: "where feasible and useful, to provide plain and easy-to-understand information on the sources of data/input, factors, processes and/or logic that led to the prediction, content, recommendation or decision, to enable those affected by an AI system to understand the output"
Allegedly, the US adheres to these principles (https://www.state.gov/artificial-intelligence/) but I don't know that we've codified much of it into law.
Art is not about the end-product, but the experience one goes through to develop it - building character and one's soul along the way.
Brian, you inspired us to create a podcast around the Luddites as well - would love to have you join us for part 2!
https://romanshapoval.substack.com/p/luddites
Yeah, the Luddites are fascinating. This is an interesting read, if you haven't come across it yet:
https://libcom.org/article/machine-breakers-eric-hobsbawm
The contracts that determine pay, worker protections, rights to bargain, etc. are often written in intentionally dense legalese to keep people confused about the rights they actually have vs. what they're signing away. AI summaries could help close that gap.
Writing emails to lawmakers about policy is tedious work. Giving ChatGPT your zip code, asking it for all the contact information on your local elected officials, and having it draft letters on important issues could make that work a lot easier.
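To make that second idea concrete, here's a minimal sketch in Python. The function name and the prompt wording are purely hypothetical, my own illustration; the resulting text would go to whatever chat model you use (ChatGPT, a local model, etc.).

```python
# Sketch: assemble a letter-drafting prompt for an LLM from a zip code,
# an issue, and your position. Hypothetical helper, not a real API; paste
# or pipe the resulting prompt into whatever chat model you use.

def draft_letter_prompt(zip_code: str, issue: str, stance: str) -> str:
    """Build one prompt asking an LLM to find officials and draft letters."""
    return (
        f"I live in ZIP code {zip_code}. "
        f"List my local, state, and federal elected officials with their "
        f"office mailing addresses, then draft a short, polite letter to "
        f"each about {issue}. My position: {stance}. "
        f"Keep each letter under 250 words, in a plain, personal voice."
    )

prompt = draft_letter_prompt(
    "12345",
    "AI labor protections",
    "workers should share in AI-driven productivity gains",
)
print(prompt)
```

One caveat: models do get officials' names and contact details wrong, so verify those against an official source, and edit the draft into your own words before sending.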
As for specific policies, it's more common for courts to weigh in favor of consumer protections than labor protections. Requiring companies who use AI to prove that they lowered costs to the end consumer would be more likely to succeed than asking for an increase in worker pay, but would still have the benefit of de-incentivizing fully non-human workforces.
It would also be important to demand transparency around which aspects of production are AI-generated instead of human generated.
I don't know, I've been a member of three unions at this point, and have seen multiple contracts — they're really not that complex, and union reps already hold information sessions and make themselves available to anyone who wants further explanation. You could have an AI summarize a contract, but I'm not a lawyer and they don't confuse me, nor have I heard of anyone being particularly confused by their contract.
And sure, writing AI-generated letters to lawmakers may be faster, but on the other hand, you risk a deluge of identical-ish letters that reps don't really read, kind of like a Change.org petition. When the input cost is so low, so is the corresponding weight of the response; a phone call or a written letter is going to do much more than an AI-generated email to your rep.
You're right, but a) this is changing, as the NLRB awakens from its slumber, and b) *both* labor and consumer protections are important to fight for with zeal. The FTC is interested in consumer-side cases with regard to the AI companies, and I actually just spoke with them about the final point; the issue of watermarking AI posts etc. I could see a push there proving successful.
I often look at AI through the lens of creative labor, which deals with notoriously complex contracts and rarely has a union rep to ask for clarity. A 19-year-old artist getting signed by a record label isn't going to understand the ins and outs of recoupment, breakage, or the royalties on different types of licenses, and might not have the wherewithal (or financial ability) to run it past a lawyer first.
I'm glad the push for transparency is making headway. I think it's the prerequisite for everything else we want to accomplish, and I think we're helped immensely by the fact that people who depend on LLMs have a pressing need to keep generative AI from polluting their models.
"as the NLRB awakens from its slumber"
Praise be!
Yeah this is a good point — for artists and freelancers, protections are basically nonexistent and looking to copyright law won't be an option at all unless the lawsuits against the AI companies bear fruit.
Someone wiser than me has noted that we need technology that is aligned with nature-sourced systems of order and understanding (already developed in the East.) We need to be co-creating with Natural Law in an alchemizing way. In particular, we need to respect and protect the electromagnetic energies in ourselves, all living beings and systems, and the planet herself, and to replace exploitative approaches and entitlements.