
I wanna talk about Cameron’s The Terminator and Carpenter’s The Thing, but first, let’s get it out of the way —
If you know anything at all about me in this Current Era, it is that I am vehemently opposed to generative AI. I do not use it. I will not use it. It does not exist for me in any form — the only “use” I had of it recently was in writing my Vital Cat Update, which quoted the AI summary off Google’s main search page. Otherwise, I don’t touch the stuff. I don’t even know how to access it. I couldn’t tell you how to use ChatGPT or Claude or any of that. My copy of Word is one without Copilot inside it, and I had to change my subscription to get there. I turn off Apple Intelligence in every instance I can. I am against AI because it steals our work, which it then uses to steal our jobs, which it further uses to steal our water and our electricity.
Which is to say, it is here to steal our future.
So, I’m against it! It sucks moist open ass.
But there’s a delightful (read: not at all delightful!!) new perniciousness afoot, and that requires us to talk a little about the novel Shy Girl, by an author who I won’t even name because whatever she did or did not do, I do not think directing theoretical harassment toward said author is really valuable, nor is it the point. The problem isn’t one book. The problem is the whole system.
To keep it as brief as I can, what happened was, to my understanding:
Shy Girl was a self-published novel. A horror novel. It came out a year or so ago, on its own, I think? It did well enough, I guess, though I don’t know that it set the world on fire — but somehow a publisher, Hachette, picked it up for traditional publication, and it was to come out soon. Ten months ago, accusations arose that the book read like it was written by generative AI in whole or in part. Those conversations continued and appeared to boil over right around now-ish, and the current narrative is that the author did not herself use generative AI, but employed an editor who made changes to the book using generative AI, changes that the author did not — review? Did not catch? I don’t know for sure.
Certainly some aspect of this may be wrong, or new details may come out, and if you have corrective details, please sling ’em in the comments below.
That is the situation currently.
To switch tracks a bit, though you’ll soon see (or already can predict) where this is going: I’ve in the last several months seen an uncomfortable number of instances, usually on Threads, where someone will look at a photograph or a video or a piece of art or graphic design and they will assert, with dogmatic certainty, that it is AI.
And sometimes, it is, or appears to be.
And other times, it definitely isn’t.
I’ve seen people look at a beautiful, very real but also very-processed photo, and say with their whole chest, that shit is AI, and sometimes that’s started a small little avalanche of people asserting similarly. And in more than one instance, I’ve seen the creator come back and post how that photo predates the current generation of gen-AI — it’s just a photo that looks either really good because of Lightroom or really overprocessed because someone wanted a slick HDR effect, or whatever.
This has also happened with writing.
It started with the emdash.
It was asserted, with Great Authority, that emdash use was a strong signifier of a piece of writing being AI.
The artbarf robots, they said, love that little emdash sumbitch so much, so so much, that they just can’t help themselves.
Needless to say, that made my bowels go to ice water because —
Holy shit, I love the emdash, too.
In fact, most Current Era writers I know love love love a fucking emdash.
But instead of making me sympathetic toward the artbarf robots — “Aww, it loves the same things I do!” — it only made me hate the artbarf robots more, because the reason the piece-of-shit AI loves an emdash is because it stole all our work, and all our work features a lot of goddamn emdashes.
It doesn’t use emdashes.
We use emdashes, and it stole our work and then mimics us.
Emdashes and all.
So now, with Shy Girl, what do I see?
I see some folks putting forth the “signs” that told them that Shy Girl was very obviously AI-written, and those signs include a number of stylistic choices.
And when I say stylistic choices, they are not choices that generative AI made, because generative AI doesn’t make choices. It just eats and regurgitates.
We make choices, as authors. Narrative ones, stylistic ones, and so forth.
But this list of signs and symptoms and AI portents included stylistic choices that I myself absolutely one hundred percent make. Same as the emdash. I’ve seen people say that AI loves metaphors, AI loves certain kinds of repetition, it loves adjectives no wait it loves adverbs no wait it loves alliteration no wait —
Of course, again, as with choices, AI doesn’t love a fucking thing, because AI isn’t alive, it isn’t intelligent, it isn’t aware. The key word is always artificial. It fakes it. It fakes choices. It fakes preferences. It fakes love. And it is able to fake it because it stole those choices and preferences from us.
I saw The Terminator last night on the big screen. I’ve seen it before, obviously — seen it many, many times. Seen all of them! Even the stinky ones. But I think this was my first time seeing that one on the big screen. (It’s of course excellent, if occasionally a little corny and showing its age.)
But one place where it isn’t showing its age is how it still issues a sharp warning about AI — it’s long been held as a kind of bellwether for that particular threat, right? It’s an early iteration of the Torment Nexus meme. That warning has told us, hey, AI is going to get smart, get mean, it’s going to inhabit robots who want to kill us, it’s going to tangle itself up in our systems and decide that we’re a threat and drop a batch of nukes on our heads.
But I think one of the warnings in the movie(s) didn’t really register for me back then, but it damn sure registers now —
What happens in the movie? The AI is going to pretend to be us, and it’s going to get harder and harder to tell the difference. It’s going to wear our faces. Only dogs will be able to sniff it out. It can steal our voices — so when we call home to talk to Mom, maybe the Mom we think we’re talking to is actually dead, and it’s a soulless Cyberdyne drone on the other end there.
That makes me think of John Carpenter’s The Thing, because it, too, understands that same threat, but worse — it understands the fear of being amongst your people except one of those people isn’t your people. Ohhh, no. It’s an Impostor, an alien being clothed in the raiment of your friend’s flesh, and soon you’ll be paranoid about who is alien and who is human, and you’ll have to work very hard to find a way to figure out just who is who — all that without accidentally killing a friend, or failing to kill the thing that wants to eat your face and then wear it.
Sound familiar?
The AI — artistically! — is us.
It steals our artistic skin.
It wears it, pretends to be us.
And it gets harder and harder to tell what’s us, and what’s it.
I’ve long said that one of the threats of AI is that it damages the fidelity of our information. Of truth and reality itself! It’s not just that it pumps out misinformation and disinformation — digital illusion and virtual legerdemain! — but rather that its mere existence makes it harder and harder to tell what is truth and what is fiction.
And we’re seeing that now with Shy Girl.
We’re seeing it with photos and videos and artwork.
People are right to hate AI — and the pernicious, insidious presence of AI has made them like the men trapped in that Antarctic base.
They are paranoid that it’s everywhere.
Because, ostensibly, it is. Or they (they being the techbros who are really the man behind the wizard curtain) want it to be. And it has a deleterious, corrosive effect on all that we do and all that we see. It’s like Paramount taking over CBS, or Musk taking over Twitter — it doesn’t matter whether it becomes successful, it just matters that they ruin the ability to disseminate good information. To ruin truth.
So, what the fuck do we do about all this?
I have no idea. I mean, the obvious thing on the face of it is to keep your own garden free of it. Pledge to use no AI. In all the ways you can avoid it? Avoid it. But that won’t stop someone in the future from telling you you’re using it. It won’t stop an AI detector — which is itself AI! — from “detecting” it. And it won’t stop others from assuring you that this photo or that video or this logo is AI, even when it’s not. That certainty has been ruined.
More to the point, I don’t know what this means for writers, for readers, and for publishing at large. Ideally, publishing gets ahead of this problem and tries to get commitments from writers to not use AI — but therein lies a rub, too, wherein a “no AI” contract looks like a “morality clause.” Without clear definitions, if enough people were to accuse you and your book of being AI — whether at the authorial level, the editorial level, or in some aspect of publishing — they could get it tanked whether or not AI has ever even chastely kissed the work in question. And it doesn’t inspire confidence that a publisher like Hachette published Shy Girl when the accusations of AI were already afoot. Did they do their due diligence? I don’t know. Maybe! But given the lack of editorial oversight… ennnh, maybe not.
Do I think AI should be published? I do not. I think using AI at any of those levels is not only problematic for the reasons listed above, it also takes opportunity from an Actual Human doing the Actual Work of Being Human. A contract given to some slopwrangler is a contract not given to an actual writer. A fake book will take the place of a real one. It’s stupid fucking robots all the way down when it should be humans.
So, this is a snarled nightmare tangle — one where the existence of AI en masse is becoming its own problem, regardless of its presence in any single instance of art or writing. We’re just going to have to do our best going forward. We must pledge not to use it — but also try to be very, very cautious kicking other people under the tires of this bus without knowing for absolute sure what we’re accusing someone of doing. As AI gets better, the environment in which it exists is only going to get noisier and more confusing. And we can’t just stick a copper wire into the blood of the book to make it transform into the monster, revealing its True Self.
We just gotta do our best. Be vigilant, be cautious.
And don’t use the AI slop-shitting artbarf techbro bullshit.
SIGH.
I do not care for this era of writing and publishing, lemme tell you.
The faster we pop this bubble, the better off we will all be.
Good luck, friends!
And fuck off, robots.
Buy my books or I die in the abyss.

terribleminds says:
(I should add here as a follow-up that Hachette, the publisher, appears to have dropped the author and the book from its publishing roster. What that means going forward, I dunno, and I’m honestly surprised it got as far as it did given the earlier accusations and editorial weirdness, but again, I’m not on the inside of that particular situation, so who knows?)
March 20, 2026 — 4:09 PM
debigliori says:
Yes. A thousand yesses. I won’t use it and moreover, I can’t take part in the settlement with Anthropic ( Bartz v Anthropic) because any settlement dollars are fucking tainted with the blood of children in Iran and Gaza where Claude has teamed up with Palantir and the Dept of War to do very bad things. What’re your thoughts on the settlement? Obvs your books will have been chewed up and ground into grey slop to train the LLM, but are you part of the class action?
March 20, 2026 — 4:20 PM
Alex Grecian says:
100%! And now I know I can change my Word subscription to get rid of copilot, so thanks for that!
March 20, 2026 — 4:27 PM
Dave says:
Going forward, for legal reasons, all authors shall retain a Git repository of their revisions starting from draft zero along with a safe containing all their handwritten notes.
Overkill? Yes. I expect to see it come up sooner or later anyway.
March 20, 2026 — 4:33 PM
Samuel Johnston says:
As a fellow em dash enjoyer, I feel your pain. Our greatest punctuation has been weaponized against us.
I think one of the reasons AI can be so insidious is that this artificial system is very good at activating our natural human compulsion to anthropomorphize. The same hind-brain process that sees a face in an oddly-shaped rock or moth’s wings, the process that recognizes emotional responses in the subtle expressions of other people and animals—the engine of our ability to empathize—is hijacked by a machine that places one word after another.
The great failing of the Turing test was not recognizing how good we are at projecting our own inner worlds onto everything around us. We automatically imagine feeling, thought, and intent behind those artificially generated words, even though it is no more meaningful than the angry expression on that rock. There is magic in that human ability, but now it works against us.
I worry that ubiquitous AI will force us to suppress this part of ourselves as a defense mechanism; force us to harden our hearts and carefully analyze every image, every word, in a vain attempt to recognize whether there is anything human in it, or if we’re just playing Tom Hanks to AI’s Wilson.
March 20, 2026 — 4:54 PM
Adam says:
I thought it was en dashes. Supposedly.
Which nobody better come for mine. Just sayin.
March 20, 2026 — 6:13 PM