I remember when they told us in school that the only thing separating us from animals was intelligence, and I remember that being the smartest species on Earth was a huge flex for us, one that made all of humanity think we own the planet. Well, we have managed to create something that changes everything. We alone created languages, solved complex problems, wrote literature, made art, and built skyscrapers, satellites, and the internet. In the last few decades, however, this exclusivity has started to fade. Machines, once limited to printing "Hello World" on a black-and-white screen, now ignore your requests with a simple "Sorry, I can't assist with that."
Today, artificial intelligence (AI) can compose text that passes as your own to your teacher, drive cars more reliably than some humans, and detect diseases from medical images with higher accuracy than many doctors. In fact, once doctors try to join forces with AI, the results get worse, which means they should leave the AI alone and let it do its job, LITERALLY (you can read the whole story and the published paper here).
So what is going to happen next? Are we their slaves? Are they going to eat us? No. Every time someone resists using AI for a task, I vividly remember my geometry class in school: whenever someone tried to draw a line by hand, the teacher would say:
We have all reached this point, survived nature, and enjoyed the comfort of the couch in front of the TV, just because we used our tools, and you thought a ruler was too much for you...
I would be lying if I said I didn't try to start this blog with ChatGPT, but in the end, it just couldn't do it the way I like, which shows AI can't replace humans that easily (yet). This is one of the very reasons I decided to start this series about AI (I will talk about the rest of them later): to show you that AI is not a threat, that it's simply another tool, and that we should not worry about it (I will talk about the things we should worry about later).
The main reason behind this behaviour of ours is anthropomorphism: our tendency to attribute human traits, intentions, and emotions to non-human things. We naturally give things personality. We yell at our computer asking "where are my files?" when it's our responsibility to save them, or talk lovingly to plants. We project this same human-like thinking onto AI and machines. AI tools don't get angry, bitter, or jealous. They aren't secretly cooking up evil plans; they just follow code, math, and probabilities. Your Siri isn't passive-aggressive, maybe you mumbled. Netflix isn't judging your binge habits, it's just math, patterns, and clicks. Knowing this human tendency should help us keep our sanity regarding AI paranoia.
A brief review of humanity's breakthroughs
Look at it this way: every breakthrough in our history wasn’t just an invention, it was a revolution. Before we get wrapped up in debates about AI, let me refresh your memory about how we got here.

It all started with Fire. Not just a way to stay warm or roast a meal, but the spark that lit up the night and ignited our collective imagination. That first flame was a bold statement: we wouldn't be held captive by darkness. Next came Writing. By putting thoughts into symbols, we made ideas permanent. Suddenly, knowledge wasn't fleeting: it could be passed down, built upon, and shared across generations. Then there was the invention of the Wheel, our first step toward movement and change. This simple yet important creation introduced us to a new world of transportation and trade, connecting distant communities and making it possible to explore the planet.
With the beginning of the Industrial Revolution, mechanization and mass production reshaped every aspect of life. Factories rose, cities expanded, and for the first time, technology began to work at a scale that redefined human productivity and social organization. Discovering Electricity was like capturing lightning in a bottle: a force that powered homes, industries, and a newfound era of communication. The era of Aviation broke the chains of gravity, giving us wings to fly and shrinking our big world into a connected global village.
Then arrived the digital age, started by the invention of Computers. These machines turned abstract calculations into real-world solutions and made us dare to dream even bigger with Space Exploration. Going beyond our planet wasn't just about science; it was a declaration that human curiosity knows no bounds. And of course, the concept of the Internet was perhaps one of the most profound moments in modern history. This global network dissolved geographical barriers, fostering an era of instantaneous communication, collaborative innovation, and shared human experience.
Now, just imagine for a moment a world where we had allowed fear and doubt to hold us back from each of these breakthroughs. What if the Wright brothers had given in to the terror of defying gravity, thinking human flight too dangerous to pursue? When the camera was invented, some painters feared it would make their skills obsolete. Yet instead of smashing cameras in protest, artists embraced photography, leading to new art forms and perspectives. Even the invention of the telephone faced skepticism, with concerns it would disrupt personal communication. Instead, it became an indispensable tool, enhancing connectivity and fostering relationships across distances once thought insurmountable.
Without the courage to explore, our society might have remained trapped in a perpetual state of "what ifs." The creative, collaborative spirit that defines our past, and fuels our future, would have been lost in a maze of hesitation and missed opportunity. Throughout history, every significant technological leap has been met with fear. However, it's our willingness to overcome these fears and integrate new tools into our lives that moves humanity forward. AI is no different: it's just another tool created to augment our capabilities, not to limit them. Developing AI, like the innovations before it, can lead to unprecedented growth and opportunities.
Actually, AI has always been here!
AI is not something new; its history dates back to ancient myths of artificial beings, mechanical automata like those of Hero of Alexandria, and philosophical ideas from Aristotle and Descartes. Even the modern concept of AI (what we think of when we hear the word) began in the 1940s with the invention of the programmable digital computer, a machine based on abstract mathematical reasoning, which inspired discussions about electronic brains. In the 1950s and 1960s, AI saw early successes like programs playing checkers and chess and proving mathematical theorems. However, the first "AI winter" hit in the 1970s as unmet expectations led to reduced funding. The 1980s brought expert systems back, and the 1990s saw neural networks revive. By the 2000s, machine learning and big data were driving breakthroughs in image recognition and natural language processing.
Today, AI is everywhere, and in fact, it has been for a while. Since we covered most of humanity's major milestones, it's only fair to also cover some of AI's contributions. In healthcare and medicine, AI is nothing short of a lifesaver. Behind the scenes, advanced algorithms go through countless medical images and patient records to spot early signs of diseases, often faster and more accurately than the human eye (as I mentioned at the beginning of the post). AI-driven radiology tools generate precise, instant reports and tailor medicine and treatments to individual needs, opening up opportunities for breakthroughs in drug discovery and patient care.

While you might not even notice it, AI is quietly making our lives easier every single day. Every time you find new music on Spotify that perfectly matches your taste, get instant weather and traffic updates, or have junk emails redirected to your spam folder, you're benefiting from artificial intelligence. These features may seem small, but life would suck without them, and they aren't even the noblest applications of AI. The most noble uses of AI are those addressing global challenges, particularly for vulnerable populations, in line with the SDGs (Sustainable Development Goals). These include healthcare improvements, humanitarian aid, education access, environmental conservation, social justice, accessibility for people with disabilities, UNHCR's refugee support, and Conservation International's biodiversity tracking. These applications embody high moral principles, promising a better future for us and our planet.
LLMs (Large Language Models) like ChatGPT, which I believe are the reason we care about AI nowadays, have evolved from simple text generators into smart assistants that help draft emails, brainstorm ideas, and even provide real-time problem solving. Since their creation (or at least since they went viral), they have created opportunities for businesses to streamline operations and for consumers to enjoy a smarter, more connected lifestyle.
But along with these opportunities, AI has planted some fears in our minds as well. Common fears expressed on social media include job displacement, loss of human decision-making and control, and concerns about AGI (Artificial General Intelligence) risks such as potential bioterror threats. Many people worry about privacy issues, data misuse, and the lack of transparency in AI decision-making. These fears are the source of major conspiracy theories, ranging from claims that AI is a government surveillance tool to more extreme theories about alien technology. However, these theories lack concrete evidence and often overlook AI's documented development history.
So, why will there be no AI apocalypse?
While it's natural to have concerns about AI, given the fears and conspiracy theories on social media, it's important to recognize that AI is still in its early stages and many risks are manageable. History shows that technologies like the internet initially faced similar fears but became integral to our lives with proper safeguards. Look at us right now, for example: it's been only two years (as I'm writing this in 2024), but it's already hard to remember how we did most of our daily tasks without AI. So maybe it's time to start thinking differently about this tool.
Now, as I promised in the title of the post, let's see why I say there will be no AI apocalypse and why AGI (while it's waiting around the corner, and I'm sure it will be as powerful as they say) will NEVER be conscious. I base the claim "there will be no AI apocalypse" on a few key observations:
The only way to know is to do
Ever heard of Conway's Game of Life? It's very simple: you start by drawing patterns of cells on a grid, and from there the simulation evolves. These are the only rules: a live cell survives with 2-3 neighbours and dies from isolation or overcrowding (fewer than 2 or more than 3 neighbours), and a dead cell comes back to life if it has exactly 3 live neighbours. These rules cause some patterns to vanish entirely, others to stabilize, and some to keep transforming unexpectedly forever, growing endlessly. But here's the point: there's no shortcut (no fast-forward or rewind) to predict exactly how a given pattern behaves, or which input produced a given result. To know whether your initial pattern survives forever or dies out after a while, you can't just look at it and declare it eternal; you literally have to let it run, step by step, until you're proven wrong or you run out of time. You can give it a try right here (but also Google it to see some cool effects). Try drawing a pattern with your name that you think will never vanish or freeze, and while you watch it unfold, ask yourself: how can you ever be sure you have the correct answer?
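The three rules above fit in a few lines of code. Here's a minimal sketch (names like `step` and the "blinker" starting pattern are just my choices for illustration) that represents the grid as a set of live cell coordinates:

```python
# Conway's Game of Life: live cells are stored as a set of (x, y) tuples.
from collections import Counter

def step(live):
    """Advance the board by one generation."""
    # Count how many live neighbours each cell (live or dead) has.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Survival: a live cell with 2 or 3 neighbours stays alive.
    # Birth: a dead cell with exactly 3 neighbours comes alive.
    # Everything else dies (isolation or overcrowding).
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row, which oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(step(step(blinker)) == blinker)  # prints True: back to the start after 2 steps
```

Notice that even with the full rules in front of you, the only way the program "knows" what happens to a pattern is to run `step` again and again; there is no closed-form answer to query.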
AI, running on pure logic and math, runs into the same issue. It can't predict outcomes in certain complex situations without actually crunching through each and every step. That's exactly why an AI sometimes generates a whole page of text, only to stop at the last sentence saying, "Sorry, I can't assist with that." It genuinely can't tell earlier, because it doesn't know until it knows. There's no shortcut, no backtest, and definitely no cheat codes to skip ahead. Gödel's incompleteness theorems (and the closely related halting problem) are about this same dilemma in logic and math:
There will always be statements that are true but can't be proven by being clever or taking shortcuts.

Monkey see, Monkey do
As mentioned before, ChatGPT and other LLMs are not the only AIs we use, but because they use prompting instead of programming, they became more popular, and because they generate almost anything (text, images, and sound), they became a little concerning. Wittgenstein shares a slightly different opinion in his Tractatus Logico-Philosophicus (don't worry, it's much easier to understand than it is to pronounce):
Language isn't just about words and symbols, it's deeply rooted in shared human experiences and context.
He basically claims that without lived experiences, emotions, and cultural backgrounds, words don't truly mean anything; they're empty shells, and it's people who give meaning to them. It's just like how you can send these 🍑 🍆 emojis to both your partner and the local grocery store, and they will think about two totally different things (hopefully). AI can generate convincing text, memes, and jokes for you, but it isn't laughing, cringing, or feeling anything about the things you say or the things it generates itself. This limitation means there will ALWAYS be an authenticity barrier in AI-generated communication: it can mimic our language like a pattern, but it never fully understands it, because understanding needs lived experience. To grasp the impact of this limitation on AI, let's talk about a famous thought experiment called the Chinese Room.
Imagine you are alone in a room with just one book of instructions. There is a small gap under the door, and once in a while a piece of paper is passed through. You have to look up an answer in the book that matches the incoming message and send it back. But here is the catch: everything is in Chinese. You have no idea what the messages or your responses mean; you are just sure each response is correct because you picked the matching symbols from the instruction book (which looks something like the mapping below):
你在吗 -> 是的,我在
你好吗 -> 我很好
你是中国人吗 -> 不,我不是
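In code, the whole "room" is nothing more than a lookup table. A minimal sketch (the function name `room` and the fallback reply are my own illustrative choices, mirroring the mapping above):

```python
# The Chinese Room as a lookup table: the program produces correct replies
# without any comprehension of what the symbols mean.
rule_book = {
    "你在吗": "是的,我在",        # "Are you there?"  -> "Yes, I am"
    "你好吗": "我很好",           # "How are you?"    -> "I'm fine"
    "你是中国人吗": "不,我不是",  # "Are you Chinese?" -> "No, I'm not"
}

def room(message):
    # Pure symbol matching: no meaning is ever involved.
    return rule_book.get(message, "?")  # unknown symbols get no real reply

print(room("你好吗"))  # prints 我很好
```

The lookup succeeds, yet nothing in the program "knows" it just said "I'm fine"; scale the table up by a few billion patterns and you have a caricature of how an LLM can answer fluently without experiencing anything.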
AI experiences the same thing: the experiment demonstrates that there's a big difference between simulating understanding (by processing symbols the way the system was trained to) and actually experiencing genuine comprehension or consciousness. You can say the same thing about chimps in number memorization tests, or the Neuralink Mind Pong experiment. The chimps don't care about math or understand the underlying concept of numbers and counting, and the monkey doesn't enjoy playing Pong on a computer with its friends like we do; they just want the banana.
Loaded questions provide loaded answers
Say you're writing an essay and your chosen topic is climate change. So you ask your AI, "Tell me about climate change," and it generates a paragraph. It sounds okay at first, but it's pretty vague. You want more details, so you say, "Give me more details." And what does the AI give you? Another single paragraph. A longer, denser paragraph stuffed with facts and effects: icebergs melting, polar bears stranded, sea levels rising, weather patterns shifting. It's like compressing an entire movie into a 30-second clip. And of course, it either skips some good stuff or just can't fit everything you had in mind.
So what's really happening here? You see, AI isn't exactly psychic. The real issue is this tricky thing called the Frame Problem, where an AI struggles to figure out exactly which details matter out of the millions available. And even if you try to help the AI by explicitly providing every tiny detail, clearly stating what matters and what doesn't, you run right into another issue, the Complexity Brake: piling up details and becoming too specific just confuses your AI. It doesn't know where to start or end, struggling under the weight of too much information.
It's a bit like that classic scenario: if someone says, "Whatever you do, don't think about an elephant," what's the first thing you picture? The elephant. AI suffers from the same flaw, except way worse. It focuses on the details you're trying hardest to avoid or ignore, precisely because it doesn't understand context or intent without explicit, careful instructions.

The bottom line here is simple: AI can't read your mind, and it can't spontaneously decide what's relevant for you. AI doesn't just wake up one day deciding to rule the world, or plotting anything at all. AI literally cannot initiate anything on its own. Everything an AI does depends entirely on human input. And humans aren't great at clearly explaining things either. We're vague. We're impatient. Sometimes our instructions contradict each other. Honestly, if humans could perfectly articulate every detail clearly, would we even have built AI? Probably not. Turns out, the machine isn't necessarily the weak link; it's often us!
Take your tinfoil hats off
In my personal opinion, the only discussions that are truly worth our time are the debates centered on the ethical and security issues of AI. This includes concerns about AI safety and the rate of capability growth versus alignment, a discussion I plan to cover in future posts. Despite these concerns, I should emphasize that current AI technology, while powerful, has some serious limitations and is far from the superintelligent scenarios that fuel these fears. AI researchers call this the alignment problem: essentially, how can we design AI that wants the same outcomes we do, rather than some random destructive behavior?
I will address these REAL concerns in another post, because they demand a dedicated discussion; it turns out this problem is kinda hard to solve. But, glass half full, that also means the first potentially "dangerous" AI (if any) is likely to be hilariously incompetent at its evil tasks, or at least very easy to spot, for the reasons we have discussed. Even an evil AI needs a system update from its human developers; isn't that stupid? Let's relax, we'll put safeguards in place long before it hits that stage (if ever). And after all, when it comes to pulling the plug, humans remain champions of quick thinking.
Finally, I believe that if we understand how these things work, what the buzzwords we hear in the news and on social media mean, and the process behind this magical tool, we will be more interested in using it. So before you panic and start planning for the AI apocalypse, read my blog posts to realize how similar we are to these digital creatures, and how easily we can use them to improve our lives. And don't forget: if you learn how to use it, you will also learn how to defeat it.
Thanks for reading.