Apple-Obsessed Author Fella

My Open Letter To That Open Letter About AI In Writing And Publishing

The tl;dr before you get into this post is this: the SFWA came out, said that some AI usage was okay enough in books for the authors of those books not to be disqualified from winning a Nebula award, people got (correctly) pissed, the SFWA swiftly threw that fish back into the water and was like, “Just kidding, AI is bad,” and then launched this survey to get community input on AI usage in writing and publishing.

As a result, a few folks have kinda popped up their heads to be like, “But is all AI bad?” and some of this is reasonable and necessary discussion, because sure, what if your word processor accidentally injects some kind of AI process into the work, or what if your publisher against your wishes uses AI in, say, the marketing of the book? What does that mean for you? Do you have recourse? Are you still able to win awards? I guess it would suck to be shut out of awards for that — though, at the same time, awards aren’t even the frosting on top of the cake but the sprinkles on top of the frosting? Whatever.

Of course, in typical fashion, these sorts of reasonable questions are usually a Trojan horse to allow a lot of other exceptions in through the city gates. To continue to mix metaphors, if you give a mouse an AI cookie, well, he’s gonna want the AI milk, the AI straw, until eventually you’ve given him an AI nuclear bomb where he kills all the human beings and can feast on our smoldering corpses at his rodenty leisure.

One of the people who popped up was Erin Underwood, who wrote an open letter about all this. It is a letter that purports to be reasonable, common sense, but in my mind is a goalpost-shifting mouse-cookie-giving very hungry caterpillar of a post, where it just wants more and more — and so, it summons in me the urge to point out a number of its flaws. And this, on my part, is probably already a sucker move, because Underwood more or less suggests that AI has written her open letter, at least in part:

“For transparency, I used speech-to-text to capture my words and generative AI to clean up grammar and structure. I needed an efficient way to get my thoughts down quickly so I could move into the work of manually editing and refining this text. I went through it multiple times, revising language, examples, and arguments until the final version fully matched my vision. This was done intentionally to demonstrate how AI can function as a communication tool for business purposes. This letter isn’t a work of art or artistic creation.”

So already, we’re off on a broken foot. I’ve no idea how much of a human letter I’m responding to. (And for full transparency on my part — all of this post is 100% human-written, human-edited, human-derived. I am not Soylent Greening this shit. This is all me, flaws and all.)

Before I get into her bullet points, up front she is essentially saying that we can’t be hostile to the conversation, to these difficult questions, and that:

At the same time, refusing to adapt in ways that protect our own communities would create new harm. Writers, artists, musicians, publishers, and the industries that support them must remain viable and competitive in a modern world that is becoming deeply dependent on AI tools and AI-driven infrastructure. If we are going to protect the future of creative work, we need award rules that are practical and that also allow us to use ordinary business tools.

My first thought here is: yeah, no, that’s not really true.

There’s little evidence at hand, first and foremost, that AI is a value-add to any of this. Writing, making music, publishing, whatever. Industries not using it are perfectly viable. Writers not using AI remain perfectly viable. (I’d argue: more than viable! Actually, you’re better off not using it! AI is routinely shown to decrease efficiency and require more human intervention, often at cut-rate pay.) The trick to this paragraph is that it is a false appeal to reason: a quietly fear-based approach that you don’t want to (gasp) be left behind because you aren’t using the reasonable business tools. Except, again, nothing about this is reasonable. AI is a random middle-man created by shitty techlords, forced into systems so that they get paid and the Magic Number Lines go up instead of flattening or descending.

We are only as “dependent” on AI tools and infrastructure as we choose to be — this isn’t an automatic. But therein lies one of the tricksy bits about this letter, like so many of the AI boosters: it presupposes an automagic AI future, a destiny for AI in and above us. It assumes it’s already here to stay, already embedded in us like a tick, so we might as well make friends with the parasite and use its Lyme Disease Tools and its Rocky Mountain Spotted Infrastructure. Why cure it? It’s already in us! No reason to ask who will rid us of this meddlesome infection!

Having a yes/no switch that governs the use of AI and generative AI isn’t viable because this technology is now embedded throughout the core infrastructure that supports businesses today. However, the fundamentally human act of creation must remain in human hands. At the same time, there are AI use cases that touch creative work directly and indirectly, often without the creator’s knowledge or consent. Those realities must be acknowledged. Creators should not be penalized for incidental, accidental, or third-party use of AI in business processes surrounding their original work.

This is probably one of the only reasonable bits in the letter. Yes, there are tough realities of gen-AI intrusion, in part because so many tech services are foisting it upon us — and we aren’t always aware of how deeply that splinter is stuck.

But, again, give a mouse a cookie…

The creative arts community is experiencing a deep sense of disruption and vulnerability in response to the rapid rise of generative AI. These concerns are legitimate and, for many, unsettling. When tech companies began developing large language models, original creative works were used without permission to train the very systems that are now threatening creators’ livelihoods, authorship, and ownership. That breach of trust is real and unresolved. It also can’t be undone, which means creatives and the industries that support them must think strategically about how this technology shapes both risk and opportunity going forward while also continuing to fight for fair compensation for their work (which, again, was used without permission).

Ahh. Starts reasonable, but ends with: “It also can’t be undone.” Look, sorry, the demon is out! We can’t contain the demon, so now we just gotta figure out how to live with the demon — sure, we can feed it, but we also have to make sure it isn’t eating us! Otherwise, it’s fine!

Except, it’s not fine. It did steal from us, and that’s not just past-tense shit. It is now and will continue to do so.

AI is not inevitable.

Say it again:

AI is not inevitable.

AI IS NOT INEVITABLE.

The only strategy here is the sum total pushback against its uncanny horrors and its non-consensual intrusion into every corner of our world — it steals our content, guzzles our water, increases our power bills, is crammed into services we didn’t ask for it to be crammed into while also charging us more money for the “privilege.” There is no strategy here except to find the fields where the AI grows and metaphorically set them aflame.

And shame and anger against corporate overreach is a powerful fire.

The evolution of AI use cases is fundamentally reshaping how modern business and industry operate, from book publishers to sales and marketing firms, retailers, and fan communities. AI isn’t niche any longer. It’s everywhere, including in our everyday digital tools and the infrastructure that makes business operate effectively. It shapes marketing and advertising, powers internet browsers and discovery systems, feeds social media platforms, and supports strategic planning, workflow design, internal communications, and day-to-day operations.

Worth seeing the conflation here — generative AI and LLMs are not necessarily the same AI that powers everything else.

Publishers can’t realistically avoid using these tools if they intend to remain competitive and continue selling books, art, and music created by their authors and artists. At the same time, these tools are enabling smaller and independent publishers to compete more effectively with large companies such as Tor, Penguin Random House, and Gollancz by improving efficiency, reach, and sustainability.

Publishers can and must avoid using generative AI and LLM AI. Publishers remain competitive by hiring and training real people to do real people jobs that support real people authors and real people readers. AI remains a broken foot. Bad for the environment, bad for writers, and also, generally doesn’t work well — it certainly doesn’t work as well, or as creatively, as actual humans! Remember, the AI is fed with the work of actual humans. Why do you think that is, exactly?

If you use it, it means you’re replacing people.

People who could’ve done the job better.

People who actually did the job, and now their work is pilfered and duped.

And just to remind people now — if you really do believe that AI is just so great at what it does, please go talk to my cat, Boomba. Or is it Franken?

Most creators are not attempting to replace their own creative labor with AI. They are acting in good faith and want clear, ethical boundaries around authorship, originality, and creative ownership. The real challenge is that avoiding AI entirely is becoming increasingly impractical, even for those who are committed to producing fully human-authored work, as AI is now embedded in systems creators can’t control or realistically avoid.

Avoiding AI is easy. I do it all the time! Literally, all the time.

Let’s get into what Erin sees as use cases — though you’ll note that, throughout, these use cases are theoretical, with zero examples of where they have been used successfully.

Voice-to-Text Dictation: Voice-to-text is one of the most common and accessible digital tools in use today, and most modern systems rely on generative AI to transcribe, normalize, and correct spoken language. Dictation is used for verbally jotting down ideas, sending text messages, and drafting emails.

I guess? To be fair, dictation has… been around for many, many years and predates generative AI. AI has not been essential to this — which of course is the running theme of AI, far as I can see. “Did you want this thing you do to be better? No? Too bad, here’s AI! Also, P.S. now it’s actually sort of worse.”

(There’s a great Marc Maron bit about turmeric. Watch it and replace “turmeric” with “generative AI” and you’ll see what I’m seeing.)

Meeting Transcription: Meetings often happen over Zoom, Teams, or other video platforms that allow for meeting transcripts, which can also generate summaries and lists. Those transcripts can also be dropped into a generative AI system to pull out to-do lists, ideas, and themes from the call.

Again, I guess, though meeting transcription also existed before AI — and you should also be very, very cautious about letting AI transcribe important meetings, because remember that part where AI steals stuff? Yeah. That’s a thing. Also, remember when it turned out ChatGPT was recording all your conversations with it and people were able to access those chats? Riiiiiight. Maybe don’t do this.

Writing Tools and Applications: Microsoft Word, Gmail, and many other organizational tools have AI embedded in their code and use programs like Grammarly and CoPilot to help people proof, edit, and write. Often the very words you were going to write appear as suggested text if you don’t turn off these functions. It’s not just the author who is using these tools but also the editor, the assistants, and any number of other staff who work on the original file.

I mean, you can usually turn those off — and often it makes for a better writing experience because it’s not trying to auto-suggest boring or incorrect messages, but hey, okay, yeah, this exists. Worth noting though that “embedded in their code” is a dubious sentiment. Also, I was able to downgrade to the version of Word without AI. And I turn it off on my phone too wherever I find it. It’s insidious!

Now, for publishers —

AI for Screening and Triage: Some publishers are either considering or have already started using AI to some degree to manage incoming submissions and to move through the digital slush pile to weed out submissions that did not follow the guidelines or other rules … as well as identifying AI-generated writing. This may also help them to look for submissions that meet a specific publishing need quickly and efficiently and elevate them for human editorial review.

Well, I hate that, and publishers should absolutely not be using AI to weed through submissions for a few reasons:

a) AI is often wrong, even at identifying AI, which is why it’s often false-flagging things that students wrote as “AI” (see, f’rex, people’s insistence that em dashes mean AI use, even though AI got the em dash habit from people)

b) AI is biased, often invisibly, by those who created it, and you cannot see or adjust those biases meaningfully

c) It’s just gross? Letting a bad, environment-destroying machine do the human job of finding cool human stories to publish is gross, and fuck you if you do it

(edit)

And d) it feeds YOUR WORK into THE THIEVING MAGPIE OF AI, what the fuck, you’re just bloating the beast further, goddamnit

Initial Research and Accessibility Tool: AI can help authors parse complex scientific concepts, historical material, or technical subjects, translate sources from other languages, or gain an initial understanding of unfamiliar topics. When used as a starting point rather than a substitute for research, this can expand access to knowledge for authors without institutional resources.

AI INVENTED A SHITLOAD OF CATS I DON’T OWN

If it does that, it definitely can’t explain high-concept shit reliably.

Please.

Continuity and Reference Tools: For authors, publishers, and studios managing shared worlds or long-running series, private, domain-specific language models can be used as internal reference systems to track character details, timelines, world-building facts, and continuity. Using AI in this constrained, reference-oriented way supports consistency and accuracy without generating new creative content or replacing human authorship.

Okay, you know what, I’ll concede that there is some reasonableness here — I wouldn’t do it, because I am a person who likes to have his person-shaped hands all over his person-shaped creations. But! Sure, if someone has a local AI model that they train on just their own material, hey, go nuts. (Though if you use it beyond organization and instead use it to, say, create new ideas — well, you’ve again sold yourself down the river and done nothing good for your brain or for the audience who will one day read your work.)

Data Analytics, Market Research, and Strategy: Publishers may use AI to analyze large volumes of data to identify catalog gaps, assess risks, understand readership trends, optimize release timing, and inform strategic decisions. This directly impacts publishing choices for which original works they accept and which ones they reject.

Given biases and data-gorging AI, this seems fraught to me — but, again, maybe we’re talking AI in the non-generative sense, and if that’s examining raw data and doing something with that, hey, whatever. Though even here, I’ll note that the most successful model of writing and publishing remains the simplest one: write and publish the best things you can that speak to your heart and your soul and then work the marketing ropes as best as you can (with real money) to help the audience see this thing that you made exist.

Ultimately, I flinch pretty hard at the idea of letting Skynet decide what original work should exist and what should be rejected, and here’s why:

The best thing you ever read was an original idea. It was novel in the truest sense — novel like COVID was novel! Not novel like a novel is novel.

But AI can only examine the past.

It can only see the trends that happened, not the trends that go forward.

Think of AI like prequel material — it is forever bound by what has already come before it and can only build upon the ground that has already been laid. It understands things that exist, not things that don’t, and therefore, in a job based considerably on people’s imagination producing original material, it will shit the bed. Meaning, it will reject cool new things because it cannot understand deviation from the cool old things.

(To be fair, companies fall into this trap without AI, too! But AI codifies it and removes from the equation human instincts and interests.)

AI in Marketing, Promotion, and Discoverability: Even when a story itself is entirely human-written, publishers may use AI to generate cover copy, promotional blurbs, SEO optimization, CTR analysis, or marketing insights.

SEO, okay, whatever, but if you let AI fuck with my cover copy, I’ll kick someone in the dick. Or blurbs! What the shit? Is she suggesting AI write… my blurbs? The ones I provide because I thought a book was cool? At a certain point you just have to wonder what the end vision is, here — is it that you use AI to generate ideas and then the AI writes a book off those ideas and then edits it and then an AI publisher submits it to other AI so that the other AI can provide AI blurbs for it? Books by AI, for AI, marketed to AI by AI? Just this digital ouroboros eating its own tail, shitting in its own mouth? What a glorious future! Who needs people at all?

Audience Engagement and Community Management: Publishers and creators may use AI to manage newsletters, reader outreach, community moderation, and customer support across social and digital platforms. These tools shape audience relationships without affecting the creative work itself.

Listen I’m starting to get tired. I mostly just want to smear the word NO across the blog in some kind of bodily fluid, but I persevere —

God, just write your own newsletters, just reach out to readers like a person, moderate your community as you see fit, be a person dealing with people and if that’s too much, don’t do it. Okay? Okay.

Workflow Automation and Internal Operations: AI is increasingly used to automate scheduling, task management, internal documentation, production tracking, and coordination across editorial, design, and marketing teams. These operational uses support the publishing process without influencing creative authorship.

If this is non-LLM non-gen-AI shit, er, okay, but also, this stuff kinda happens organically as it is? This workflow is well-known and well-wrought. Every book is not a unicorn — there is a process and people are the stations along the chain.

Legal, Contractual, and Financial Processes: Agents and publishers increasingly use AI tools to review contracts, analyze royalty statements, or flag legal issues. These business uses are unrelated to the act of writing and should not affect award eligibility. However, it is worth noting that authors can also drop their contracts into a generative AI system to ask it questions about the contract related to their original work to ensure they understand their rights, what they might be missing, and what they should explore more fully with legal counsel.

Ha ha, what, holy fuck, do not let AI deal with legal, contractual, or financial shit. Jesus Fucking Christ, this is deeply irresponsible. It is not good at it. Lawyers show up to court with this AI shit and they get their asses handed to them. This is not an okay place for AI. This is a dangerous place for AI.

If anything disqualifies the “open letter,” it is this.

Just have an agent or a lawyer.

One that won’t use AI.

Rights Management and IP Protection: AI tools are being used to track copyright infringement, detect unauthorized distribution, manage licensing, and monitor derivative uses of creative works online. These systems help protect authors’ rights and income without contributing to creative content.

This sounds fine, until you realize that…

AI just makes stuff up.

All the time.

Not just my quantum cats, either. I have a search set up for my name and some other topics in Google and every day it yields results that are patently just not there — a headline and a subhead will offer text and description that simply aren’t present when I click through. The entire subject matter isn’t even right. It’s wholly fabricated. It gets worse! So. There was a kid who died in my area, recently? (Well, he was in his 20s, I think. I say kid because I am increasingly AN OLD.) And the web was full of auto-generated AI barf about it — just weird fake news about a poor kid who died.

AI is a plagiaristic lie machine. You really can’t rely on it to find licensing info, derivative works, and so forth.

Accessibility, Localization, and Format Adaptation: Publishers and platforms increasingly use AI to generate captions, transcripts, audiobooks, large-print formats, and translations for global or disabled audiences. These tools expand access to creative works without altering authorship or creative intent yet still involve generative AI touching the work after creation.

Another profoundly disqualifying bit. No! No. NO. Do not let AI translate or transcribe our books.

HUMANS ONLY.

I mean, what the fuck. This letter seems to try to lean toward “AI can help you in ways where it doesn’t do the creative work,” but audio books? Translations? It’s very much part of that work. And we want that done right, and by people.

(In part because of accountability! You know who’s accountable when a person fails? That person! You know who is accountable when AI fails? Ennnh! Nobody! Diffused responsibility! Oops the poor widdle small guy machine made a boo-boo. Want something done right? People are great! We love people! People are why we do this thing! Stop kicking them out of the process!)

Production and Technical Preparation: AI is increasingly used in formatting, layout checks, quality assurance, audio cleanup, and technical preparation for print, e-book, and audio releases. These uses support distribution rather than authorship.

Something like audio cleanup would be, I imagine, not about gen-AI/LLM. But other stuff, yeah, no, people are good. Let the people do it. Thanks.

Generative and Agentic Internet Platforms: The internet itself is shifting from a search-based environment to a generative and agent-driven one. As generative search engines, AI agents, and platform-level AI models become embedded across the internet, users are operating inside ecosystems where AI mediates discovery, visibility, and engagement by default. This means that information gathered in these environments increasingly comes through generative AI systems.

AI

MADE

UP

CATS

I

DO

NOT

OWN

It told me I have cancer!

That I’m a Christian and also Jewish!

It makes up stuff all the time and we’re supposed to just… give everything over to these agentic dipshits? The amazing thing was, we had this very nice Internet — messy, sure, but made of people and all the stuff they said and that they made and that came out of their heads, and then we let robots scoop it all up and start remaking “new” versions endlessly and it’s been downhill since. Let’s not accelerate our descent, yeah? This is silly and bad and I hate it. And you can tell I’m petering out here because my logic is, admittedly, “ew I hate it,” but seriously, it sucks and you know it sucks and down deep in that space between your heart and your stomach it makes you feel icky as shit, like you ate some bad shrimp. AI is bad shrimp. Stop trying to convince us to eat more of the bad shrimp.

Disproportionate Impact on Small and Independent Presses: Small and indie publishers often rely on generative AI for marketing, planning, and analysis because they lack the staffing and budgets of large publishers. Blanket AI restrictions force these presses into an impossible choice: either avoid modern tools that allow them to publish more work and sell more books, or use them and disqualify all their authors from awards.

Small and indie presses provide the crucial value of being small and indie, and indie, by the way, is indicative of human influence — right? You go to a small press, you want hands-on, you want people you know, a small flexible team, and not a giant corporation. Well, bad news: AI is giant corpo shit. It’s techlord billionaire shit. If a small press can’t exist without that, then maybe they should reconsider whether or not they should exist at all.

Operational Strain on Fan Organizations and Conventions: Fan organizations and conventions are overwhelmingly volunteer-run and chronically understaffed. These groups operate on extremely limited time and resources, often relying on a small number of overextended volunteers to handle writing, editing, scheduling, marketing, and email communications as part of basic business operations. AI tools can reduce the burden of these time-consuming tasks and help volunteers work more efficiently. Without such support, many conventions may be forced to scale back or shut down entirely due to burnout and lack of operational capacity. The loss of these community spaces would be a significant blow to the science fiction, fantasy, and horror community as a whole.

Uhh I think I’d rather go to a convention run by people, not deranged robots.

You want Fyre Fest? This is how you get Fyre Fest. You want a YA convention with a creepy ball pit? Yeah, this is that? Let AI do this and you’ll end up with 1000 empty tables and no bathrooms.

Again, the theme persists of: “I’m pretty sure conventions existed before AI, and were run pretty well, so what is the AI doing again?”

Anyway, I’ve gotta tap out here.

There’s more to the letter but ultimately it seems to rely on the false premise that creatives better not SNOOZE, lest they LOSE, and we either get involved in the conversation and control AI or it runs over us. Except it already ran over us and now we’re figuring out how to get back up and string caltrops across the road to blow the fucking tires on this thing before it tries to hit us again. Also, nobody’s inviting us to the table. Nobody’s asking for our input. All this does is obey a fascistic system in advance — AI isn’t trying to make nice with writers, we’re not being asked to join the team. We’re just being told to get on board or get fucked.

And I don’t agree with that framing.

I know. I’m bullish on this. Belligerent. But I really do hate it. I hate AI, and I hate all the framing that it’s somehow essential — it’s like being told you have to use a garlic press in the kitchen, and it’s inevitable, so use it, use it for everything, use it for cutting bananas and chopping nuts and peeling potatoes and cleaning your oven and teaching your kids and it can do all those things so well (spoiler: no it cannot) but SHUT UP AND USE THE GARLIC PRESS BECAUSE WE INVESTED A TRILLIONTY DOLLARS IN IT and if we can’t convince you to subscribe to the garlic press for literally everything all the time — did we not mention it’s a subscription service? — then we’re fucked, uhh, I mean, you’re fucked for not using our miracle product.

Anyway.

I think AI is only inevitable when we believe the lie of its inevitability.

I think people actually hate it. I think they naturally resist it because we can smell the existential threat coming off it like the stench of the aforementioned bad shrimp.

I think we intuitively can detect how it was made by rich fucks who want to be richer fucks, and how we’re just chum in the bucket for their digital sharks.

And I think it sucks.

It fucks the planet. It fucks our information fidelity. It steals our shit, our resources, our time. It’s mostly just a ruse, a threat, a lever: they can say oh take a pay cut or we’re going to use the godlike AI to replace you, and then they replace you anyway, and invite you back at an even sharper cut so you can herd the AI slop barf into shape like you’re Richard Dreyfuss with the fucking mashed potatoes in Close Encounters.

As Ash from Army of Darkness says:

“It’s a trick. Get an axe.”

I’m tired and I emerged from HIBERNATION WEEK to write this and now I need a nap or maybe I just need to lick a couple batteries or something.

Anyway. That’s my open letter. Feel free to respond below, but if you’re a chode, I drop you into the spam oubliette.

Destroy AI.

Buy my books — a human wrote them.

Okay bye.