AI can't fix what automation already broke
Generative AI is the latest in a long line of technologies that promise innovation and fixes but grind away at public life
Hello, and welcome to Blood in the Machine: The Newsletter; not to be confused with Blood in the Machine: The Book. It’s a newsletter about Silicon Valley, AI, labor, and power, written by me, journalist and former LA Times tech columnist Brian Merchant. It’s free, but if you find this sort of independent tech journalism and criticism valuable, and you’re able, I’d be thrilled if you’d help back the project. Enough about all that, though; grab your hammers, onwards, and thanks for reading.
Yes, there’s a constant influx of ‘snapshot of our ever-exacerbating dystopia’ type stories to endure these days, but this one, from the American Banker trade magazine, manages to stand out. The piece reveals a new, cutting-edge use case for enterprise AI: trying to prevent call center workers from “losing it” by showing them video montages of their family set to their favorite pop music after they have been barraged with angry callers and the system has assessed they are on the brink.
Pretty bleak! But it’s a telling example of how AI and automation get used in the workplace—in more ways than initially meets the eye.
First, the details:1
The AI bringing zen to First Horizon's call centers
Call center agents who have to deal with angry or perplexed customers all day tend to have through-the-roof stress levels and a high turnover rate as a result. About 53% of U.S. contact center agents who describe their stress level at work as high say they will probably leave their organization within the next six months, according to CMP Research's 2023-2024 Customer Contact Executive Benchmarking Report.
Some think this is a problem artificial intelligence can fix. A well-designed algorithm could detect the signs that a call center rep is losing it and do something about it, such as send the rep a relaxing video montage of photos of their family set to music.
First Horizon is using artificial intelligence and such video "resets" to bring a state of calm and well-being to the people who talk to customers on the phone all day.
If this showed up in the b-plot of a Black Mirror episode, we’d consider it a bit much. But it’s not just the deeply insipid nature of the AI “solution” being touted here that gnaws at me, though it does, or even the fact that it’s a comically cynical effort to paper over a problem that could be solved by, you know, giving workers a little actual time off when they are stressed to the point of “losing it”, though that does too. It’s the fact that this high tech cost-saving solution is being used to try to fix a whole raft of problems created by automation in the first place.
Consider: Why, exactly, are these workers so stressed out? Why are they dealing with so many “angry” and “perplexed” customers—a consequential number of whom yell at them every single day—that they are, according to their own employers, on the brink of breaking down?
Later in the article, a clue emerges: “Today, about 85% to 95% of customer calls that First Horizon fields are handled in a self-service manner within the interactive voice response,” according to one of the bank’s executives. Aha! This tells us that however many years ago, the management at First Horizon Bank was sold on another automation technology, interactive voice response (IVR), that is now used to field the vast majority of its incoming calls. So, like most of its peers, First Horizon Bank was able to replace many of its call center workers with an IVR system, or to allow its call volume to balloon without hiring more of them, until most calls were not being answered by people at all, but IVR. The problem is, everybody absolutely hates IVR.
It’s one of the most reviled forms of automation in existence, and this is why so many customers are livid by the time they reach the beleaguered call center workers who remain on hand. These callers have been navigating an automated system designed to save a bank some money on labor costs (and to encourage exasperated callers to hang up and drop their issue, which would likely require more resources to address) for many minutes or even hours, and are now understandably angry that so much of their time has been wasted.
To better illustrate: Let’s say you’re a First Horizon customer; the bank has put a hold on your credit card as a fraud prevention measure (the result of an automated warning system) and you’re standing in the grocery store after finally managing to get the self-checkout kiosk to work (the result of labor-saving automation technology) and your kids are screaming and the card is declined. You call the bank, and you’re routed to an interactive voice menu, and you can’t get the thing to go to the right option and now your kid has spilled one of the grocery bags and you’re scrambling as you wait and punch the number again on your phone and the people behind you are shaking their heads and nope, wrong department and the prerecorded voice drones on so you try again, and it’s ringing now, and you FINALLY get through to someone. You don’t mean to sound mad, you know it’s not this poor worker’s fault that such an impenetrable and spirit-crushing system was installed, but this worker is there, on the other end of the phone, and you can’t help but be aggravated when it’s time to explain your issue.
Any of this sound familiar? Maybe you have the saintly patience necessary to endure a world overstuffed with the broken promises of automation without “losing it”. Not everyone does! The point is, these are the kinds of calls the First Horizon call center workers are picking up every day, from callers evincing polite but thinly concealed agitation to callers thrown into a full-blown rage. Anxious customers, yellers, insult-hurlers, the gamut. I’ve worked in a call center; I get it.
That’s why this story broke me a little bit. First Horizon and the company that sold them on this AI solution are telling low-wage workers—whose jobs are to absorb customers’ wrath at the fact that First Horizon has messed something up, and then used automation to make it all but impossible to answer for it—that “We get that you’re stressed thanks to decades of our cost-cutting and bad automation, but you can now listen to a pop song and look at AI-curated vacation slides as a treat, before returning to the call mines.” It’s deeply insulting.
I used that grocery store incident to illustrate how pervasive the effects of all these little instances of automation are. Taken alone, none of them are the end of the world—except, perhaps, for workers who once relied on a job that was automated away—and some are well-intentioned (fraud prevention). But collectively they’re a corrosive force that erodes social bonds and spoils personal interactions and generally makes it less pleasant to go about our days as human beings.
Next to no one’s lives are improved, except maybe the company that saves on labor costs—and even then not always. I’ve termed this shitty automation in the past, because no one wins. (An art historian once wrote an academic paper about it, even!) Customers dislike it, workers hate it, and all around, it causes our experience to suffer. The world would be a better place if most IVR systems simply vanished, and we once again could speak with real humans about our problems, and real humans could try to solve them—just imagine it!
But instead of, say, making room in the budget for more intelligent human staff members, a bank like First Horizon decides to shell out for the latest technological trend that promises ever-improved efficiency—this time, the AI ‘reset’ button that impresses management but condescends to workers. Much of the history of workplace technologies is thus: high-tech programs designed to squeeze workers, handed down by management to graft onto a problem created by an earlier one.
This is my great worry with generative AI. I have not lost a single wink of sleep over the notion that ChatGPT will become SkyNet, but I do worry that it, along with Copilot, Gemini, Cohere, and Anthropic, is being used by millions of managers around the world to cut the same sort of corners that the call center companies have been cutting for decades. That the result will be lost and degraded jobs, worse customer service, hollowed-out institutions, and all kinds of poor simulacra of what once stood in their place—all so a handful of Silicon Valley giants and their client companies might one day profit from the saved labor costs. The result will be the latest wave of shitty automation, spread all over the internet, our phones, and our lives.
On that note, I might mention that I had a piece in the Atlantic last week about the growing trend of companies, creators, and brands promising “AI-free” products and services—a reaction to ethical issues and safety concerns, and the reputation AI-produced content has for being cheap and lower-quality.
I dug the (human-made) art and the piece has been spurring a lot of good discussion online. Also some bad discussion: The CEO of Medium showed up in the Atlantic’s mentions on Threads saying the platform should get credit for being the first to be “pro-human.” I was one of 70 journalists that Medium laid off after deciding it was cheaper to run the platform on poorly compensated user-generated content, so I shot back that that wasn’t very pro-human. A debate ensued, and it’s still going on as I write this…
One reader reached out and asked how I felt about the Atlantic’s licensing deal with OpenAI, which a) surprised many Atlantic staffers themselves, and b) was announced after this piece had been commissioned. The answer is: I vehemently oppose such licensing deals! I think they’re bad for the industry, and are the latest capitulation to tech companies, who have spent the last 20 years steamrolling journalism in myriad ways. Will I write for the Atlantic again as long as they have this policy in place, if I or other freelancers can’t opt out? Probably not! I’ll have to learn more—writers can usually carve out exceptions in the contracts, which are typically awful in all sorts of ways, but that can be onerous and time consuming. At the very least, contributors should be asked for consent to add their work to a training data set, and compensated for it.
I also headed up to South Lake Tahoe last week to speak to the Teamsters about luddites, AI, and organizing around big tech—it was a great chat, and I was very glad to see Blood in the Machine added to a number of locals’ libraries. Good people, good times; I’m always happy to do this stuff. I’d love to do more of it, in fact—and your support can help there. I recently turned payments on, and plan to be doing more of this writing—look for a wider announcement in coming weeks. A million thanks to those of you who’ve already pledged support and signed up with next to no prompting; you’re the very best, and this General Ludd salutes you. Honestly, it means so much.
Until next time — hammers up.
bcm
Hat tip to the great Alex Press for flagging today’s tale of woe.
Another flavor: “AI chatbots and friends can help people feel less lonely!”
Gee, wonder if the fact that Silicon Valley has done its best to ensure every social interaction is mediated by a data-harvesting app on a tiny screen has anything to do with why people feel so lonely in the first place.