It is such good news that so many of my fellow humans are saying a hard "no" to this hazardous technology. How quickly it came--and how quickly it seems to be leaving, or being forced out. I lost a good chunk of my academic editing business when the chatbots showed up--why pay a human $50 an hour when ChatGPT will rewrite your paper or even your dissertation for nothing? It will be interesting to see how long before grad students start showing up at my door again!
i love when your writing gives me hope
Might I also suggest that we go a bit lower on the food chain to nip this whole thing in the bud: oppose and ban chip factories, like Micron's, from our communities. The chips are the building blocks of the data centers that run the AI.
And processing those chips requires as much electricity, water, and land as the data centers do, if not more--not to mention the enormous amounts of chemical wastewater that must be "treated" before being returned to our rivers and lakes. That makes chip fabrication as environmentally destructive as data centers, if not more so, yet it seems to fly below the radar in this debate ...
The chess example is more complicated than what Barkan describes in that note. If you're playing in a chess tournament, using a chess engine like Stockfish during a game is cheating, and the punishments are severe. But it's not cheating to use Stockfish to study and map out an opening variation to great depth, which you can then memorize and spring on an unsuspecting opponent. Given the huge number of possible chess positions, this is something only the pros do effectively, but it shows how computer use in chess is governed by a highly contextual set of rules. If a writer used AI the way a top chess player uses Stockfish, they'd have it plan and organize the piece of writing, memorize as much of it as possible, and then reproduce said writing by hand on paper with a judge watching them to make sure they don't consult their phone.
The more direct parallel to AI use in chess and in writing is that AI has made it much easier to cheat. You end up with a lot of suspicions that something isn't legit - a series of chess moves, a paragraph - but it's hard to prove anything conclusively, and it's easy to get paranoid.
The latest gossip I have is that Pangram is very good at detecting whether stories are 100% or 0% AI-written, but has trouble with percentages if it’s a mix.
Have you heard of any AI detectors that are doing better than that? I'm not totally sure how this problem gets solved on an institutional level if it's too hard to detect.
It’s great for writing documents that have been done a million times before, like business cases, that take forever and barely get read, but never for anything creative like opinions, reviews, fiction. I’ve been using it to customise cover letters and CVs for job applications and it does a good job of doing 80% of the work of matching skills to job ads. I then read every word and make it sound like me, and chop out some waffle. Sometimes it flat-out makes up stuff, which is a problem.
Hard agree with the thrust of this article, and I love that it's happening. As a professional audiobook narrator and producer, I can confirm that (at least anecdotally) readers do NOT want AI writing, especially in romance, which means moves like Hachette's are welcome and choices like Harlequin's recent decision to create animated AI slop shorts from its titles are bewildering. Who do they think their audience is??
However, even within this context, I found the following (excellent) reporting by the Drey Dossier Substack to be really important for us all to consider.
https://thedreydossier.substack.com/p/the-shy-girl-ai-scandal-is-way-worse
Tl;dr: the journalism process that led to Shy Girl's contract cancelation by Hachette is suspicious AF; the encoded racial and gender biases of AI products (of which Pangram is one!) cannot be ignored in the discussion of the cancelation of a first-time Black female author; and why on earth are we trusting AI to tell us when something is written by AI (as Pangram does)? Who actually trusts that??
In a different domain, the medical research community has integrated AI trained on medical sources. Scripps researcher Eric Topol, MD, of groundtruths.substack.com, is a prominent example, and he interviews top researchers who say the same. Personally I refuse to use AI in my work, but if you listen to these top scientists, you have to consider that generalized AI is the problem and that a curated AI can be very useful. I'm not talking about insurance or other bureaucratic uses, but radiology and cardiovascular testing, for instance.
Until it hallucinates. I read a story where an AI reading a chest X-ray was being demonstrated for a group; when asked about the X-ray, the AI said, "the hip prosthesis is in place."