I am not even one of the people who will be affected by AI (I hope), and I still don’t want it to advance at the pace it is going right now. I have a myriad of reasons. Even though I might sound like a Luddite, I am someone who knows about computer science, and some of the applications people are using it for are just stupid! Some companies want to use it to help hire people, to which I say: good god, that is a TERRIBLE idea!

People have tons of misconceptions about AI, and I at least know enough to see it as a massive danger to us. I have read about Roko’s Basilisk, and just the thought of that becoming a future reality is terrifying, which is yet another reason we need to put guard rails on AI.

Preface

I am not entirely against AI being used in some environments; there are areas where it can help dramatically (translating languages is one example, even though it does a bad job a lot of the time). I support AI when it helps us do our jobs or takes on some arduous task that most humans never wish to do (constructing a Dyson sphere may be a future example). I just don’t want it used anywhere it gets programmed with tons of freedom to think, or used to replace creative jobs, because I believe that would destroy one characteristic we all share: creativity.

AI cannot understand “truth”

Let me explain: the “truth” is something that is real, that cannot be denied, a fact. Something most people don’t understand is that a computer doesn’t know anything; it just does what we tell it to do via code. AI is the same: it doesn’t know anything until you tell it what it should know. This means an AI model can be told by someone like you, James Smith, that President John F. Kennedy was killed by Lyndon B. Johnson, and it will say that is the truth, and anything otherwise is false.

This is because AI just absorbs whatever information is given to it, and there is no fact-checking by the computer, since it was never told to do that. A model like ChatGPT was made to suck in words from all across the internet and process them to help form responses, and that text can be entirely false information. Despite the best efforts of OpenAI, ChatGPT doesn’t know the truth; it was designed to type out sentences that look like a human is writing back to you.
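To see what “it just repeats what it was fed” means in the simplest possible terms, here is a toy sketch (not how ChatGPT actually works, just the idea shrunk down): a tiny model that only counts which word follows which in its training text. Feed it a false sentence, and it will happily complete that sentence, because counting words is all it does. The corpus and function name here are made up for illustration.

```python
from collections import defaultdict

# Toy "training data": whatever text we feed the model,
# including one claim that is flatly false.
corpus = (
    "the sky is blue . "
    "the grass is green . "
    "the moon is made of cheese . "  # false, but the model has no way to know that
).split()

# Count bigram transitions: word -> {next word: how often it followed}
transitions = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    transitions[a][b] += 1

def most_likely_next(word):
    """Return the word that most often followed `word` in the training text."""
    followers = transitions[word]
    return max(followers, key=followers.get) if followers else None

# Ask the "model" to continue "the moon is made of ..."
print(most_likely_next("of"))  # -> cheese
```

There is no fact-checking step anywhere in that loop, and there is nowhere you could even put one: the model only ever sees word counts, never the world the words describe.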

Any AI model is biased

That may not sound true to people who have used ChatGPT or similar models, but I took computer science classes, and I can tell you: AI is biased. In class, on the subject of AI, we got a sample model and fed it information so it could make decisions of its own. The information we fed it was simple yes/no data that we humans inherently know the answer to, like whether something is garbage or a fish. We fed it garbage-or-fish examples a certain number of times, then let the AI do its thing, and it did a good job telling whether something was garbage or a fish.

The next batch of information we got for the sample model was more subjective, like whether a certain fish looks fierce or not, which people can disagree on. Once we did our manual steps of feeding it information, we saw that the AI was biased toward our own choices; if someone else looked, they would see the AI picking tons of fish that aren’t “fierce” to them and calling them fierce.
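The classroom exercise above can be sketched in a few lines of code. This is a made-up miniature version (invented numbers, invented labelers “Alice” and “Bob”, a simple nearest-neighbor rule instead of whatever the class’s sample model used), but it shows the exact effect: train the same model on two people’s labels for the same fish, and it gives two different answers about a new fish.

```python
# Each fish is described by two made-up scores on a 0-10 scale:
# (teeth_size, fin_spikiness)
fish = [(9, 2), (3, 8), (5, 5), (8, 7), (1, 1)]

# Two hypothetical classmates label the SAME fish differently:
labels_alice = [True, True, False, True, False]   # Alice counts spiky fins as fierce
labels_bob   = [True, False, False, True, False]  # Bob only counts big teeth

def nearest_label(query, data, labels):
    """1-nearest-neighbor: give the new fish the label of the closest known fish."""
    best = min(range(len(data)),
               key=lambda i: (data[i][0] - query[0])**2 + (data[i][1] - query[1])**2)
    return labels[best]

new_fish = (2, 9)  # small teeth, very spiky fins

print(nearest_label(new_fish, fish, labels_alice))  # -> True  ("fierce")
print(nearest_label(new_fish, fish, labels_bob))    # -> False (not fierce)
```

Same model, same fish, opposite verdicts. The “AI” didn’t form an opinion about fierceness; it just inherited whichever labeler trained it.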

Subjectivity and ambiguity create bias

Let me go back to the example of a company using AI to hire people. HR and the hiring manager have to look at things on a resumé that can be irrelevant or ambiguous to an AI model. To help the AI understand its job, people must feed it information on the hiring process, kinds of resumés, and what qualifies someone for a job. That last part can be very ambiguous depending on the company, which can leave the company hiring a bunch of baboons (figuratively, and possibly literally).

We don’t even have Quality Assurance

From what I see, AI companies and companies investing in LLMs (Large Language Models) are not looking at the quality of the models or the guard rails put on them (if there are any). These companies are just helping AI advance faster than regulators and Quality Assurance can keep up, leaving it much freer to do things that can be VERY dangerous to us. It seems these people are just advancing the models and hoping for the best, without even thinking about what dire consequences could be waiting to bite them in the backside.

Conclusion

I really hope I just pulled back the curtain on your view of AI, which I swear is turning into a buzzword as I write this. The people and companies that develop these models know that most people are ignorant about technology, which means people like me need to write articles like this on our blogs to tell you more about the dangers. This isn’t really about how AI is taking jobs; it is about how much of a danger it is to humanity in general.