There is a current trend, most notably among artists, but also among the general public, to hate AI and the horse it rode in on. This post aims to explain, hopefully concisely, why I do not fully subscribe to that.
AI is a tool.
Like all tools, its usefulness is determined by how it is applied, and to what.
If you have nails with all the consistency of chocolate, are you going to blame the hammer?
AI works with data — data sets so large it would be impossible for humans to crunch all of it. But the quality of the data determines largely how useful it is. Feed it garbage, and that is what will come out.
Large Language Models
The problem that is generally attributed to AI is, in fact, the data you put into it.
And that’s what I think fuels the general disdain, distrust, or sometimes even hate, for ChatGPT and the likes. It’s not AI in general, it’s GENERATIVE AI and LLMs! If you train a generative AI engine on indiscriminate garbage, that is exactly what you’re going to get out of it. And quite frankly, every data set that is not validated is to be considered garbage.
And if you do not validate your data, you cannot reliably and consistently expect that what comes out of it isn’t garbage. So… if you are going to feed it posts from Facebook users, or worse, users of X… I think you know where this sentence is going.
And it gets worse when AI is trained on its own output, which is exactly what will happen if you indiscriminately train it on whatever it finds in the public domain without validating it.
And of course, training it on material that is not in the public domain, but is copyrighted, exposes whatever AI generates to the risk of copyright infringement.
And then, of course, we have other applications of AI, such as image generation and music generation.
The first thing that comes to mind is that this would make life a lot tougher for real artists. This is indeed true — fields like stock images will become a lot less profitable. It’ll be hard for you and me to determine whether this would lead to copyright infringement — if an image generation engine is trained on photos from Facebook, Instagram or Twitter, the poster probably wouldn’t have a leg to stand on.
For music, it’ll be hard to train the engine only on non-copyrighted material, and while plagiarism will be very hard to prove in court in any individual case, it’ll be equally hard to argue that the result is an ethically sound product.
So… what about AI as a technology?
In the medical world, AI technology knows no equal. In the field of diagnostics, AI has proven for years to be capable of detecting patterns that would require humans to dig through hallucinatory piles of data. It’s also capable of identifying candidate antibiotics in hours, a task that would take a dedicated team of scientists months.
This alone should convince you that AI is not inherently evil. Which is, as we all know, true for every tool.

Before I retired, I worked as a programmer, and we used Github Copilot to speed things up.
One of the things Copilot could do is use your own codebase as a data set. Essentially, that would mean that, if your code was lousy, or the design pattern you were using was a mess, everything Copilot came up with would be just as questionable.
What this shows us is that, if AI gets a specific task, and is trained from a validated data set, it can be very useful — but in our field, code generated by Copilot should still be treated as written by a temp, fresh out of school, with little to no knowledge of the problem domain.
In other words, validate its output before you put it in your code. Any bug produced when you applied AI is still YOUR bug.
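To make that concrete, here is a minimal sketch in Python of what "treat it like a temp's work" looks like in practice. The function and its bug are invented for illustration, not taken from any real Copilot suggestion: a plausible-looking AI-suggested helper sits next to the reviewed version, and a few assertions catch the edge case before it ships.

```python
# A plausible-looking, AI-suggested helper (hypothetical example).
# It claims to return the last n lines of a text, but slices wrongly
# when n is 0: splitlines()[-0:] means [0:], i.e. ALL lines, not none.
def tail_suggested(text: str, n: int) -> list[str]:
    return text.splitlines()[-n:]

# The version a reviewer would actually accept after validating it.
def tail_reviewed(text: str, n: int) -> list[str]:
    lines = text.splitlines()
    return lines[len(lines) - n:] if n > 0 else []

# Treat the suggestion like code from a temp: test it before merging.
sample = "one\ntwo\nthree"
assert tail_reviewed(sample, 2) == ["two", "three"]
assert tail_reviewed(sample, 0) == []      # reviewed: correct edge case
assert tail_suggested(sample, 0) != []     # suggested: the bug a test catches
```

The suggestion compiles, looks idiomatic, and is wrong on an edge case; only a deliberate test exposes it. That is the sense in which the bug, once merged, would be yours.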
How it’s applied, and to what
A hammer is not evil.
But applying it to your neighbour’s head will raise questions, and probably make the authorities pay attention.
Just like that, the evilness of AI is entirely determined by the way it is used.
Now, because it is a potentially powerful as well as a potentially destructive tool, and because we’re still busy trying to harness it, it is often credited with having a mind of its own.
It hasn’t.
If AI is doing something harmful (or trying to do something harmful), such as stealing your copyrighted work, that’s not AI doing that, it’s the people who told AI what to learn from being evil. AI in itself is not even intelligence — it’s just analytical capacity. It doesn’t have a drive to learn, it has no curiosity. It is not critical of its own output — it relies on us to be critical of it.
So, what we should disapprove of (ferociously if necessary) is the way in which the tool is applied.
But that’s a topic for a completely different discussion!
Sally Mack 2025-10-24
I don’t hate AI either.
I use it in my photos to make my work easier and pictures look better. As a tool, it’s great for straightening horizon lines, for instance, or separating out areas of a photo to enhance or minimize. I use it sparingly, grateful for what it can do.