One of the occasional defenses of generative AI is that it quote-unquote ‘democratizes’ art and writing — and then, as with the NaNoWriMo statement yesterday, it becomes somehow problematic to condemn generative AI, because what, do you hate DEMOCRACY? Do you not want everyone to have access to art and writing? Oh! Oh! Somebody doesn’t want the competition, doesn’t want the masses to rise up with the FREEDOM of their RENEWED ACCESS to ART and STORY, you PRIVILEGED ELITE BASTARD.
But I think it’s important to take the air out of these things (often by kicking the absolute shit out of them).
Generative AI is not democracy.
Generative AI is not free.
Because that’s the cornerstone of the idea, right? It’s a freely accessible tool that evens the playing field.
But generative AI has considerable costs.
Let’s go through them.
1. Money, Cash, Ducats, Coin
Access to much of generative AI will cost you actual money, though certainly it’s also becoming freely accessible at some levels — and more and more services are forcibly cramming it into their existing platforms, which, I’d like to note, is seriously fucking annoying. I’m waiting for the day when my microwave tries to write and sell its “slam poetry.”
Still, free now isn’t free forever. I mean, the “first taste is free” drug deal rule applies here, c’mon. They get you interested, you use it, and suddenly it costs more, and more, and then more again. They have to do this. The development of this fucking nonsense horseshit has been a billions-of-dollars investment. They want that money back, and if that means they have to put it on a chip and have Elon Musk fire it into your skull with a modified .22 rifle, then that’s how they’ll do it. If it remains free to use, then that means it’ll come with advertising jackhammered into it. (“Every time I ask it a question, it answers ‘Taco Bell Crunchwrap Supreme,’ wtaf.”)
2. Future Money
Generative AI is meant as a disruptor. And classically, disruption is not always a good thing. (One might argue it’s rarely a good thing.) Big shiny new tech company shows up, reinvents a thing by offering it cheaply and loopholing its way around regulations, you get hooked, the older industry withers on the vine, the shiny new tech company nests inside the chest cavity of the older industry until it’s dead and it can erupt out from the carcass in a spray of blood and bone, and then it just charges you even more than the older industry did for what may well be a lesser product.
As such, the way one can currently earn money from art and writing is at risk thanks to the rise of generative AI. The ways this might happen are myriad — Amazon getting flooded with AI books makes it harder to find any book; companies learn they can generate “content” with the push of a button and either choose to do so or use the threat of doing so as leverage to reduce the money they will pay for art and for writing; generative AI’s implementation damages enough outlets for art and writing and sends them packing, which means fewer outlets for artists and writers, which lowers opportunity and, by proxy, money; generative AI acts as a labor scab during union disputes for creators; writers and artists are no longer hired to iterate and create but rather to “edit” and “fix” the work “created” by generative AI, which is to say, generative AI artbarf robots puke up a bunch of barely digested material and a company pays a cut rate to once-notable writers and artists to push that slurry into some kind of shape, like they’re Richard Dreyfuss with the mashed potatoes in Close Encounters of the Third Kind.
And that’s just a sampling.
Ultimately, it puts power in the hands of corporations and tech-bros, and removes the power from artists and writers. And will try to eat away at copyright laws to do so.
That’s not democracy. And it certainly doesn’t come free.
3. Future Artists, Future Writers
There is a literal human cost. There will be people going forward — and, I’m betting, there are people right now — who are going to turn away from the art-and-writing path because of this. I know kids who already look at those career paths with the question of, “What’s even the point?” There will be a bona fide brain drain from the bank of artists and writers. (Not to mention teachers, or any other career currently being targeted and poached by generative AI on behalf of awful corporations.)
(And here, my conspiratorial eye-twitch red-thread-on-a-bulletin-board personality comes out and says, well, that’s awfully convenient — we’ve already seen such a heavy lean into STEM and away from the Humanities, because artists and writers tend to be thinkers, philosophers, they tend to have empathy, they tend to be less interested in the hustle culture churn of corporate life, and this only drives that nail in deeper, doesn’t it?)
Again, doesn’t sound like it’s democratizing shit. Anything that makes it harder and less likely for people to do a thing isn’t democratizing that thing.
4. The Costs of Actual Theft
Uh yeah, it steals shit. That’s how it works. It can’t do it without stealing shit. They’ve admitted it. Out loud. I don’t know how to explain to you the very real cost of having your work yanked out of the ether and thrown into the threshing maw of generative AI so your creations can become hunks of fake meat in their artbarf stew. But the cost isn’t metaphorical. It’s literal.
Once again, that’s not democratizing anything. It’d be like saying, “Ahh, Google has stolen your vote, and will vote on your behalf. How wonderful! You don’t even need to do it, now. We’ll handle it for you, for free. See? We’ve democratized democracy!”
God, even as I typed that out it feels alarmingly possible.
*shudder*
5. Environmental Cost
You don’t need to look far to learn about the environmental costs of generative AI. We didn’t ask for it, but it’s here, and even casual use can increase the burden on our environment.
A sampling of things to read:
How AI’s Insatiable Energy Demands Jeopardize Big Tech’s Climate Goals
Generative AI’s environmental costs are soaring — and mostly secret
AI brings soaring emissions for Google and Microsoft, a major contributor to climate change
We’re in danger of turning away from our already too lax environmental goals. We need coal and other fossil fuels gone, we need to protect water usage, and here comes AI to gobble up the water and our power and force us onto our back foot, all because some dickheads want a robot to lie to them about how many giraffes they see in Starry Night or because they need the magic computer to draw for them a picture of a 13-fingered Donald Trump freeing White Jesus from the cross with a couple of M-16s.
The only thing it’s democratizing is the death of our natural environment. Wow, nice work, Tech Bros. Guess that’s why Google dropped DON’T BE EVIL from their code of conduct.
6. The Damage to Informational Fidelity
It is increasingly hard to tell truth from fiction. Visually, textually, it’s getting easier and easier to just… lie, and to do so with effective facsimiles made from generative AI. Trump posting that Taylor Swift endorsed him, or creepy videos from Twitter’s AI showing Kamala Harris covered in blood and taking hostages, or newer abilities on a phone to just take an image and edit in whatever you want with the touch of a button — a giraffe, a bloody hammer, a hypodermic needle, a child’s toy, a sex toy, a loaded gun, whatever. The laws are far, far too slow to catch this. This will be propaganda, given a nuclear-grade steroid injection. This will be revenge porn, god-tier level.
To sum up?
AI isn’t free.
It isn’t sustainable.
It isn’t democratizing a damn thing.
The tools and skills to create are already available. No, not perfectly, and no, the industries surrounding art and storytelling are certainly imperfect. But AI doesn’t tilt the existing imbalance in favor of artists and writers; rather, the opposite. And as it does so, it burns the world and fucks with our ability to tell truth from fiction, even right from wrong.
It’s weird. It’s horrible. I kinda hate it. I hope we all realize how absolutely shitty it is, and we can eventually shove its head in the toilet, same as we did with NFTs and crypto. Shove it in, give a good couple flushes.
Anyway. Buy my books or I die. Thanks!
Brian says:
I hope you’ve seen the response to AI from the CEO of Procreate, one of the biggest iOS drawing programs. https://www.cnet.com/tech/services-and-software/why-procreates-anti-ai-pledge-is-resonating-with-its-creators/
September 3, 2024 — 11:30 AM
conniejjasperson says:
Amen. AI doesn’t create anything. It takes human creations, chews them up, and barfs them out.
September 3, 2024 — 11:43 AM
Curt says:
Chuck. Been reading your rants, er, expressions of opinion for years. 🙂
You are a sharp tack, a smart kiddo, a prolific and astute writer and critic. Methinks you should tailor your recent comments on AI toward a more mainstream reader. NYT Op Ed page? WashPo? Other? To broadcast your thoughts beyond the choir. I get it. Your devoted readers get it. Now, spread the word!! Cheers, Chuck.
September 3, 2024 — 1:09 PM
Glen says:
There’s more than just that.
7. Human Cost: These algorithms are not created by just sucking up content from the internet with automated web crawlers. The images and text fed into the algorithms, ESPECIALLY images and video, have to be screened and cataloged by humans. There are thousands of Kenyan people who are forced to wade through mountains of the worst the internet has to offer at insultingly low pay.
https://www.medianama.com/2023/07/223-kenyan-workers-call-for-investigation-into-exploitation-by-openai-3/
8. Cost to Integrity of Workers: Technical staff are forced to compromise their ethics or lose their jobs, because these algorithms are pretty crap. Many if not most of the tech demos are reputed to be faked to one degree or another. For an example, look at the Amazon auto-checkout stores.
https://arstechnica.com/gadgets/2024/04/amazon-ends-ai-powered-store-checkout-which-needed-1000-video-reviewers/
9. Cost to Integrity of the Economy: The whole Venture Capital system is a legalized scam in my opinion. The ‘AI’ bubble is the most flagrant example to date. VCs, startups, and a large ecosystem of grifters have done this over and over again. They announce some ground-breaking ‘new’ tech (usually a rehashed version of a very old tech like LLMs) and collect money from investors and take a skim off the top. Then they work between and among themselves pumping up hype to balloon the value and get lower level investors (suckers) to buy in and keep inflating the bubble. The whole time they provide ‘consulting support’ that funnels money into a vast ecosystem of startup money pits.
To keep it going they get a bunch of credulous executive dimwits with zero tech skills to spend millions of their institution’s funds (often funds created with layoffs) until it becomes obvious crypto, NFTs, blockchain, self-driving, VR, AI or whatever the current flavor of the year is doesn’t work.
No matter what happens the scammers get a nice mega-million skim off the top. Eventually the bubble pops leaving small investors and taxpayers holding the bag.
The whole VC tech bro system has caused incredible amounts of destruction and heartache over and over and over, and ‘AI’ is just the newest flavor. They need to be heavily taxed and regulated and have the SEC so far up their collective butts they can’t take a shit without someone auditing it. Else this is going to keep happening over and over again.
September 3, 2024 — 1:31 PM
Michelle says:
Great points. You’d think we would have learned after the dot com crash, but no. No one fucking learns and AI is here to make that problem far worse.
September 3, 2024 — 6:10 PM
Fatman says:
Maybe I’m just dense, but I don’t see how it’s “democratic” to enable people who can’t write to plagiarize the work of people who can.
Not everyone can write well, and among the minuscule population of folks with a modicum of talent for fiction writing, the number of those who can write compelling, stylish stories is infinitesimal. There’s a good reason why not everyone can be a published author.
Fiction publishing as a whole is already inundated with poorly written stories. Anyone with any amount (or in fact with zero amount) of talent can write and publish anything they want, whenever they want. The “democratization” of writing happened years ago, with the advent of simple-to-use self-publishing venues. Nothing about this change was positive (YMMV). Talented writers are now harder to spot, bobbing like driftwood on an ocean of brown slurry wastewater.
AI can only further accelerate the ongoing enshittification of fiction. Who in their right mind would consider this “freedom”?
September 3, 2024 — 1:52 PM
Sonjia Starling says:
From your keyboard to the world’s ears! I am a graphic designer. I see the cost of it every day. I constantly see these fly-by-night operations talking up the use of generative AI in their “world leading designs.” It is all generic and uninspired; taking the human out of the art loses so much.
September 3, 2024 — 1:59 PM
Anastasia says:
I’ve thought AI would destroy art and writing and the humanities for a while now. It makes me excited to make things because making actual imperfect human art is an open act of rebellion once more.
We should be extra careful because the current crop of oligarchs and plutocrats seem set on undoing this whole life thing by transforming biology via technology. Why, among other things, do you think that Elon Musk is trying so hard to get chips into human brains and bodies? Immortality via technology and biotechnology seems to be the current form of the techno utopia. Hey, if we’re only partially biological, this whole worry about destroying bio systems is lessened, at least in theory. These transformations are being sold hard to us along many vectors. Even if the science isn’t yet up to this dark vision, I’m sure that we can destroy a lot pursuing this, including the good things about being human, such as art.
PS – Loved the phrase Art barf stew.
September 3, 2024 — 2:05 PM
eftheflash says:
Excellent, informative rant, as per your usual standard. Until your post, I was unaware of the environmental costs of AI. #ArtificialIntelligenceIsEvil
September 3, 2024 — 2:12 PM
Lauralynn Elliott says:
I got an email from Draft2Digital basically asking how I felt about letting AI use my books to “learn stuff”. That was the gist of it, anyway. I’m sure everyone publishing with them got this letter. I let them know exactly how I felt about that. AI terrifies me. Especially for people like my granddaughter, who is a budding artist and is VERY good at it.
September 3, 2024 — 2:28 PM
Max Vos says:
AI…. Yup. You nailed that one square on the head. Forwarding this.
Max Vos
September 3, 2024 — 5:38 PM
Michelle says:
Thank. You.
September 3, 2024 — 6:05 PM
Bex says:
Thank you for keeping us informed. This scares the crap outta me and I can’t imagine any of this is going to get better.
September 3, 2024 — 6:54 PM
Von says:
I know our main goal is to stop generative A.I. But I’ve got to wonder — would it at least help to pass laws requiring that anything created via A.I. be TAGGED as being created by A.I.? Or is that pie-in-the-sky?
Thanks for pointing out the environmental issues — I had not heard of those either.
September 3, 2024 — 7:43 PM
Kat says:
Thank you Chuck, for putting so simply and succinctly what needs to be said. I read that statement (even the revised one) by NaNoWriMo and rubbed my eyes with the horrors. Where do they get off, seriously. That whole drivel was like… CONTRARY to their actual mission of encouraging writing. Like… For real. What is the thought process. Enquiring minds (that are actually thinking for themselves) would like to know.
September 4, 2024 — 2:09 AM
Debby Hanoka says:
So much for Google being “carbon neutral” (their words, not mine).
September 4, 2024 — 11:08 AM
James says:
I can tell you that my freelance copywriting business has basically withered away to nothing. Last year, most of my clients said some variation of, “We tried to write this with ChatGPT, but it wasn’t quite right, so now we’re coming to you!” This year, only a couple of clients are still asking me for stuff.
Fortunately, my copywriting isn’t my main source of income. But not having the extra money means my family and I have had to tighten our belts, and the lack of respect for my craft stings like hell. It’s definitely put a damper on my personal creative projects.
September 4, 2024 — 11:19 AM
David says:
This is a small counterpoint to your discussion, and considering the Grenfell Tower tragedy’s findings are in the news today, it feels appropriate. I want to stress the dangers of turning a complex and dynamic discussion into a fallacious good vs. evil dichotomy.
I appreciate you may not immediately see the relevancy or the connection between the 2017 Grenfell Tower fire and the panic over generative AI and ML, but bear with me.
It’s essential to recognise that the lessons we glean from past tragedies, like the Grenfell Tower fire, can profoundly inform our approach to policy-making in the realm of AI and machine learning (ML). The Grenfell disaster starkly highlighted the risks associated with sidelining marginalised voices; it serves as a poignant reminder that including diverse perspectives is not just a matter of fairness but a crucial component of effective governance.
The exclusion of marginalised communities, particularly in tech policy, can lead to unintended and harmful consequences. For instance, AI and ML have the potential to enhance accessibility and inclusivity for disabled individuals significantly. When these technologies are developed without the input of those they affect, we risk exacerbating existing inequalities rather than addressing them.
To foster a more inclusive technological future, we must prioritise the voices of those who are often left out of the conversation.
Engaging disabled and marginalised communities in decision-making processes will ensure their needs are front and centre, leading to policies that uphold their rights and autonomy.
Furthermore, thorough impact assessments are vital to evaluate how proposed policies affect diverse groups (yes, that includes abled writers, artists, etc.), ensuring accessibility. Encouraging innovation that directly responds to the needs of marginalised communities can also drive positive change, promoting inclusivity and economic independence.
By learning from past mistakes, we can work towards creating a society where technology serves everyone equitably, paving the way for a future that embraces diversity and fosters a sense of belonging for all. Engaging in ongoing evaluation and feedback mechanisms will further ensure these policies remain responsive to the evolving needs of our communities.
If you care for a more nuanced discussion, you can find it here:
https://medium.com/@dwtutoringeducation/empowering-inclusion-the-transformative-role-of-ai-in-accessibility-5e77e2974677
September 4, 2024 — 11:21 AM
Melissa Clare says:
This argument conflates AI tools of all kinds with GENERATIVE AI, which is specifically what Chuck is discussing. Humans have always had tools. But a paintbrush (whether held in the hand or the mouth) does not create the art, the artist does. For that matter, graphic art software and other computer-assisted technologies do not create the art for the artist, and to suggest that disabled artists require GENERATIVE AI to create their art (i.e., that they need “tools” that will create art on their behalf, rather than letting them create art of their own in the way that they are able to do so) is offensive. To suggest that a curb cutout is in any way analogous to generative AI is being (deliberately) obtuse.
September 5, 2024 — 4:20 AM
David says:
Hi Melissa, thank you for trying to discuss this openly and honestly. Thank you for clearing up some terms for me. After spending so many years doing bioinformatics for my PhD, those terms, well thank you. It wasn’t that I was trying to avoid technical terms and jargon.
Of course, as I must now believe you actually read my article and noted I showed my receipts to back my comments, can I see yours?
Or maybe you have vast lived experience as a multiply disabled person, or experience in assisting, teaching or mentoring them?
I have on both counts, all on the public record. I suggest you reread the article and counter with something other than demonstrably incorrect accusations and failing of basic comprehension.
I do understand though if you are unwilling to give up your power and privilege and presume to talk about marginalised communities you have never cared about.
September 5, 2024 — 8:47 AM
Melissa Clare says:
You are welcome to your opinions, as am I. You are welcome to believe whatever you want to about me. I’m not getting into a personal fight with you, nor justifying to you who I am or what my lived experiences might be.
September 5, 2024 — 9:41 AM
David says:
Melissa I will accept the most likely reason is that you only skimmed the post, or if you did read in full, your cognitive dissonance betrayed your understanding. Obviously as I’ve said nothing is black and white, there are so many other possibilities too.
So if you *had* read it you probably would have realised there are many things I agree with you both on.
Of course I do admit sometimes my comorbidities make communication a little difficult. Maybe I should have used goblin tools to ensure my tone was accurate. But doing that would again leave me with all the emotional labour in trying to advocate for our needs but also substantiate the validity of my disability status and making sure I don’t upset the abled NT types.
My point was really quite simple. Essentially, if people insist on having this debate (which they should), do not whitewash it, or have it in closed echo-chambers. Do not discount or invalidate the experiences of the disabled, most of whom are just trying to be involved as fully as they can or simply trying to exist. Do not gaslight them with the idea there are only good or bad groups. Invite them in and *actively* listen to them.
I provided many examples of what happens to us, when you don’t listen and chastise us instead. I also showed you tools and technologies once thought to threaten society but ended up being a universal benefit to all.
I don’t want a fight. I want a voice, I want to speak, I want to be heard and I want to be respected. This, right here, is far worse for me and my health than it is for you.
I have been fighting this fight for far too long. I am very passionate about it, not for myself, but for the clients I have, for the people I teach and mentor. I do this for them.
September 5, 2024 — 12:15 PM
Tammi says:
A piece of software stealing the work of artists and writers to mix it into a slurry and then reform it into new art or writing at the push of a button has NOTHING to do with “marginalized communities.”
September 5, 2024 — 9:46 AM
Debby Hanoka says:
Conflating the issues AND grandstanding. I asked for a definition of ‘marginalized’.
That said, if generative AI — using Open AI as an example — is *that* dependent on using people’s copyrighted material, without first securing permission and terms of use, to train their LLM, then their business model was faulty in the first place.
September 5, 2024 — 10:02 AM
Fatman says:
Yep – strawman argument, followed up by furious sealioning.
Coupled with the delivery style, I’m inclined to think someone is trolling using LLM-generated text. You know, to prove a point. There was a similar comment a few months ago – also in response to an anti-generative-AI post.
September 5, 2024 — 11:13 AM
Debby Hanoka says:
If someone is explicitly asked to define their issue — such as me asking David to define “marginalized” — and continues their grandstanding and tangents, one can only assume they are a troll.
I’d also like to know what these ‘marginalized’ people are trying to do for themselves and what they need help with.
September 5, 2024 — 12:06 PM
Fatman says:
Two things about “David’s” posts. One, the robotic, machinelike expressions and writing style. Two, the vague “counterpoints” that superficially appear to reference your arguments, but do not engage with them at all.
In other words, suspiciously similar to how an LLM would respond if fed a bunch of text (e.g. my comment) and asked to write a response. It even seems to pick up on a couple of irrelevant prompts I seeded through my comment – and responds to them as if they’re part of the discussion.
We’re being trolled by an LLM, or rather a human training an LLM on our responses. I don’t know whether I should be upset, or applaud.
September 6, 2024 — 11:44 AM
David Wakeham says:
It is almost laughable that Steve comments about meaningful conversation, when I originally posted for that very reason. Of course, as is so often the case, the immediate and collective knee-jerk defensive reaction.
One could ask is it due to fear and shame, a threat to your own self image, amygdala hijack, misunderstanding of the intersectionality of power and privilege, or a low emotional intelligence? Any one is possible.
In your case Steve it is just you fearing a potential change in the dynamics of power. Again, hilarious considering your recent post about the [checks notes] “not the opinion itself, but the audacity of wanting to think for themselves”. Your other misogynist posts and rage against renewable energy does paint a familiar picture of someone upset that society has started to question the power you gained inherently without any effort due to centuries of institutionalised racism, sexism, ableism etc.
I predict that if you do reply it will again be in backlash to dismiss, deny, belittle and minimise my humanity and the assault and harm done by you and your herd. You will then attempt to force me to do all the heavy lifting and emotional labor, as you essentially have no skin in the game. Perhaps finish with another case of “whataboutism”.
Alternatively, you could show some cultural humility, reread my original post after taking a few deep breaths, and comprehend what was said and what was asked.
I agreed with many of y’all’s points. I championed further serious discussion concerning the ethical use of these tools. Like any tool, it is how people use them, and often, like all the examples I provided, they ended up being of universal design. The only thing I asked was, in this discussion, include others, particularly those whose voices are most often ignored or simply spoken over.
A simple and extremely polite request you have all managed to obfuscate, purposefully misrepresent and just act like common school yard bullies regurgitating accusations in attempts to defame and belittle me.
Maybe next time use that tired and lazy trope of using logical paradoxes to neutralise me? Maybe that way you can cause my “artificial intelligence” to malfunction.
September 7, 2024 — 9:58 PM
Fatman says:
… “Steve”?
September 9, 2024 — 4:16 PM
David Wakeham says:
So imagine my surprise that you and Debby, after demanding responses and explanations from me, now refuse to do the same. We all see who the trolls are here.
September 12, 2024 — 9:20 AM
Debby Hanoka says:
Define marginalized. It can mean different things to different people.
September 5, 2024 — 9:59 AM
David says:
Sorry, Debby, I do not know why I am not getting notifications for your or any other response.
I have genuinely run out of spoons. I suggest you read the original link to the article.
Because at the moment, it appears all you and “Fatman” are doing is shouting out ad hominins. Happy to discuss your counterpoints another day.
September 5, 2024 — 12:23 PM
Debby Hanoka says:
David,
Perhaps you are not receiving those notifications because you did not think to turn them on?
Now let me repeat my question: Please define “marginalized.” It can mean different things to different people.
That either you cannot or will not define what you mean by marginalized makes me think you are a troll. I welcome the opportunity to be proven wrong.
By “running out of spoons” do you mean that the questions at hand are too challenging for your STEM PhD self to consider answering?
I await your answers.
Debby
September 5, 2024 — 12:50 PM
David says:
Debby, I would define a marginalised community as a collective of individuals who face systemic disadvantages due to socio-political, economic, or cultural barriers. These groups often experience inequitable access to resources, opportunities, and representation within societal structures.
For example, individuals with disabilities may encounter societal stigmas and institutional frameworks that inadequately accommodate their needs, resulting in exclusion from various areas such as employment, education, and social participation. First Nations populations frequently grapple with the lasting impacts of colonialism, cultural dislocation, and ongoing disparities in health, education, and land rights, which contribute to their marginal status.
Individuals from low socioeconomic backgrounds face persistent inequalities that hinder social mobility—a result of both structural and cyclical poverty that limits access to quality healthcare, education, and job opportunities. Asylum seekers confront legal and bureaucratic challenges, often living in precarious situations and facing discrimination, which complicates their efforts toward integration and stability.
Those categorised as English as a Second Language (ESL) learners may experience linguistic barriers restricting their ability to engage fully with broader societal systems, including the labour market and civic participation, reinforcing their marginalisation.
These communities illustrate the intersectionality of identity and systemic disadvantage, where overlapping factors intensify their exclusion from mainstream socio-economic and political frameworks. Each group deserves a thorough examination of the intersectional dimensions that sustain their marginalisation, highlighting the need for targeted interventions to promote equity and inclusion.
TL;DR: I suggest that individuals may be marginalised depending on their status within the following (non-exhaustive) categories: Sexuality, Gender, Wealth, Language, Religion, Body Size, Education, Mental Health, Disability, Neurodiversity, and Skin Colour.
Thank you both for so perfectly demonstrating my points above. Masterfully done.
“I await your responses.”
September 5, 2024 — 8:33 PM
David says:
Sorry, Debby, your last comment did not have a reply button.
Even though I find it hard to believe you are asking those questions in goodwill, I will answer them.
The first question you ask is potentially disingenuous. It is dynamic, changing between individuals over time and obfuscated by traditional and cultural beliefs, expectations, philosophical worldviews, etc.
I would ask you, if you saw someone in need, would you help them?
Alternatively, I was pretty naive when I started tutoring/teaching at universities. I found students from a particular background would always say they understood and had no questions. They were raised to believe it disrespectful to dishonour your teacher by not understanding. They wouldn’t necessarily consider themselves marginalised, but were they receiving the same accessibility and inclusion as other students? No.
That being said, after many years of stupid pride from my own ‘upbringing’, I eventually accepted I was from marginalised communities. Now, not only do I work with these communities (often pro bono), but I have also, in retrospect, been part of many of them for almost 50 years.
What have I done? I have volunteered across Australia, South Africa and Southeast Asia to give back when I had the means to do so.
Volunteering involved numerous dynamic tasks such as campaigning, fundraising, physical labour, training and education, etc. By means, I refer to finances, physical and cognitive ability, opportunities, etc.
I have also returned to study for my MEd (master of education specialising in learning difficulties) because I felt I was perhaps not doing enough for some of my clients. I needed more tools in my belt to help with heterogeneous ND and disabled clients I now specialise in. Because no one should have to have fought all the battles I have. And still am fighting.
September 5, 2024 — 9:53 PM
David says:
Debby, I would love to hear of your experiences helping those less fortunate and marginalised groups.
Perhaps we could exchange ideas and tools? Maybe streamline some fundamental processes so we best understand their needs.
Exciting, having such genuine collaborations, don’t you think 🙂
September 5, 2024 — 10:01 PM
Fatman says:
No one is “shouting out” anything, and if you meant ad hominems, you’ll find none in my response. I’m merely pointing out that you a) responded to Melissa with a strawman, then b) proceeded to sealion with “why are you against disabled people” instead of addressing her points.
The article you linked appears to be your own. It purports to advocate “inclusivity”, yet fails to distinguish between AI tools and generative AI. Melissa is therefore correct in assuming that you do not know the difference between the two. You had the option to rectify this, but instead chose to insult her and wave your “PHD” in lieu of a coherent response.
I guess I’ll add “appeal to authority” to the list of fallacies you’ve employed in this brief exchange.
Of the links you included in your article, two are dead, two lead to paywalled papers (the abstracts of which do not seem to indicate relevance to the topic being discussed – i.e. no distinction of tools vs generative AI), and I did not read the one on plastic straws, but I agree with the general sentiment you expressed.
Until you try to engage in meaningful discussion, e.g. without throwing hissy fits, my assessment of “troll” will stand.
September 5, 2024 — 3:09 PM
David says:
I will allow you to reflect on your assumptions, tone and basic level of civility.
Whilst you are indeed acting the troll and also quite expertly proving my points, I will no longer engage with you until you acknowledge your power and privilege and apologise for gaslighting.
In a sign of good faith, I will explain some points. Melissa claimed to speak for the disabled community. I asked those questions to ascertain what knowledge or experience she had to do so to validate her claim.
My article, which I never claimed to be written by anybody else but me, is extended for that platform. As a writer (my assumption this time), you may appreciate that one cannot fit every nuance into a single piece.
As an academic, I can access papers behind paywalls, a privilege. I also fight this, as all my research is open access. If you would like a manuscript, I will gladly email it.
Yes, my initial response was very much in frustration. Do I regret allowing my emotions, though wholly justified, to get the better of me? Yes. Yet, I have maintained honesty in myself and my comments, which I have supported and verified.
September 5, 2024 — 8:56 PM
Debby Hanoka says:
David, two more questions:
1/ Has the ‘marginalized group’ to which you refer identified themselves as marginalized?
2/ What are you doing to help said marginalized group? What are you doing to help them help themselves?
If all you do is theorize about the ‘marginalized people’ and ponder their human condition, then you are part of the problem that keeps them marginalized.
September 5, 2024 — 9:02 PM
Melissa Clare says:
Thank you, Fatman, for taking the time to write that rebuttal. I really just didn’t have it in me. And I think you’re right – many of the responses do sound like it’s an LLM on the other end of the line.
September 6, 2024 — 4:48 PM
Bonnie666 says:
His answers are generated in ChatGPT with only mild edits. No reason to answer.
Half of the actual authors are autistic or otherwise neurodivergent, I mean art has a tendency to draw atypical individuals in the first place. Your brain does frequent bloopers or drifts off and daydreams? Let’s make it into art! A perfectly human answer to a human problem.
September 6, 2024 — 10:48 PM
David Wakeham says:
Cool story. Say hi to Ben for me.
September 7, 2024 — 11:27 AM
John says:
Chuck Wendig, you are PRICELESS! Re-reading “Kick-Ass Writer” for the third time. “[some days writing is] like brushing the teeth of a meth-cranked baboon.”
September 5, 2024 — 11:11 PM
bennydonalds3 says:
Yet another reason why people should be renewing their library cards. It is so frustrating that our society keeps wanting me to get information from the Internet even as the Internet becomes a less reliable source of information.
September 21, 2024 — 2:03 PM