Peter's blog

Musings (and images) of a slightly warped mind

Why I don’t hate AI

There is a current trend, most notably among artists, but also among the general public, to hate AI and the horse it rode in on. This post aims to explain (hopefully concisely) why I do not fully subscribe to that.

AI is a tool.

Like all tools, its usefulness is determined by how it is applied, and to what.
If you have nails with all the consistency of chocolate, are you going to blame the hammer?

AI works with data — data sets so large it would be impossible for humans to crunch all of it. But the quality of that data largely determines how useful the result is. Feed it garbage, and that is what will come out.

In the medical world, AI knows no equal. In the field of diagnostics, AI has for years proven capable of detecting patterns that would require humans to dig through hallucinatory piles of data. It can also come up with candidate antibiotics in hours, something that would take a dedicated team of scientists months.
This alone should convince you that AI is not inherently evil. Which is, as we all know, true for every tool.

Before I retired, I worked as a programmer, and we used GitHub Copilot to speed things up.
One of the things Copilot could do is use your own codebase as a data set. Essentially, that meant that if your code was lousy, or the design pattern you were using was a mess, everything Copilot came up with would be just as questionable.

What this shows us is that, if AI gets a specific task, and is trained from a validated data set, it can be very useful — but in our field, code generated by Copilot should still be treated as written by a temp, fresh out of school, with little to no knowledge of the problem domain.

In other words, validate its output before you put it in your code. Any bug introduced by AI-generated code you accepted is still YOUR bug.
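To make that concrete, here is a minimal sketch of what "validate before you accept" can look like in practice. The function and its behaviour are hypothetical, not taken from any real project: the point is simply that the AI suggestion only survives because a test we wrote ourselves says it should.

    # Hypothetical example: an AI-suggested helper is treated as untrusted
    # until a test we wrote ourselves confirms the behaviour we actually want.

    def normalise_postcode(raw: str) -> str:
        # Suggested by the assistant; kept only because the test below passes.
        return raw.strip().upper().replace(" ", "")

    def test_normalise_postcode():
        # The validation step: these cases encode what WE decided is correct.
        assert normalise_postcode(" 1234 ab ") == "1234AB"
        assert normalise_postcode("1234AB") == "1234AB"

    if __name__ == "__main__":
        test_normalise_postcode()
        print("The suggestion passed our own checks.")

The habit of reviewing and testing matters more than the particular tooling; treat the suggestion like the temp's first patch and make it earn its place.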

If you feed it garbage data (unqualified data), chances are you’ll get garbage out of it. And because you cannot reliably and consistently trace what went into it, what comes out is hard to predict. So… if you are going to feed it with posts from Facebook users, or worse, users of X… I think you know where this sentence is going.
And it gets worse when AI is trained on its own output, which will happen if you train it indiscriminately on whatever it finds in the public domain without validating it.

How it’s applied, and to what

A hammer is not evil.
But applying it to your neighbour’s head will raise questions, and probably make the authorities pay attention.

In the same way, the evilness of AI is entirely determined by the way it is used.
Now, because it is a potentially powerful as well as a potentially destructive tool, and because we’re still busy trying to harness it, it is often credited with having a mind of its own.
It hasn’t.
If AI is doing something harmful (or trying to do something harmful), such as stealing your copyrighted work, that’s not the AI being evil; it’s the people who told it what to learn from. AI in itself is not even intelligence — it’s just analytical capacity. It doesn’t have a drive to learn, it has no curiosity. It is not critical of its own outcome — it relies on us to be critical about it.

So, what we should disapprove of (ferociously if necessary) is the way in which the tool is applied.

But that’s a topic for a completely different discussion!


1 Comment

  1. Sally Mack 2025-10-24

    I don’t hate AI either.

    I use it in my photos to make my work easier and pictures look better. As a tool, it’s great for straightening horizon lines, for instance, or separating out areas of a photo to enhance or minimize. I use it sparingly, grateful for what it can do.
