Is the possibility of a global AI takeover real?

New ethical dilemmas will crop up

Artificial Intelligence, or AI: most of us first met it in movies like The Matrix, a blockbuster from over two decades ago, but today AI is everywhere. In the last few months, ChatGPT has taken the world by storm. What is ChatGPT? It is an artificial intelligence chatbot that can write essays, compose poems, script complete movies and even pass exams. Things have moved so fast that Sam Altman, the head of OpenAI, the company behind ChatGPT, wants governments to regulate it, saying, “We think regulatory intervention by governments will be critical to mitigating the risks of increasingly powerful models.” So how powerful is AI? How can we control it? Or can we control it at all? And why should we care? I will read between the stated and the unstated lines, the obvious and the hidden, to bring you the full story.

You may hear AI and think ChatGPT, but that is not the only AI in your life. AI is all around us. Using Google Maps? That’s AI. Talking to Siri or Alexa? That’s AI too. Using predictive text? AI again. Using Face ID to unlock your phone? That too is AI. The YouTube algorithm making videos pop up on your feed is also artificial intelligence. As I said, it’s everywhere. What exactly is artificial intelligence? The usual answer involves a lot of jargon, so let me break it down. What makes us humans special? We can think. That is exactly what AI tries to replicate: systems that think and solve problems like humans. It is intelligence, but artificial.

The idea is simple: recreate human-like thinking to solve problems. That brings us to another term, machine learning. Essentially, machines learn from experience: algorithms learn from previous data, recognize the patterns in it, and use those patterns to solve new problems with little or no human interference.
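To make that concrete, here is a minimal sketch in Python, assuming a toy spam-filter task (the messages and labels below are made up for illustration): the algorithm is shown past examples and works out the pattern itself, with no human writing explicit rules.

```python
# Toy machine-learning example: learn "spam vs not spam" from past data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Previous data: messages that humans have already labelled.
messages = ["win a free prize now", "meeting at noon tomorrow",
            "free prize claim now", "lunch at noon?"]
labels = ["spam", "not spam", "spam", "not spam"]

# Turn the text into numeric features, then let the model find the pattern.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)
model = MultinomialNB().fit(features, labels)

# The learned pattern now handles a message it has never seen.
new_message = vectorizer.transform(["claim your free prize"])
print(model.predict(new_message))  # expected output: ['spam']
```

No one told the model which words mean spam; it inferred that from the data.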

If that still sounds abstract, let me simplify it with an everyday example. Imagine you’re scrolling down your YouTube feed. You see a video about dogs, you like it, and the algorithm takes note. Suddenly dog videos are all over your feed. The algorithm has used machine learning to understand what you like and has started recommending similar content.
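As a toy illustration (this is not YouTube’s actual algorithm; the topics and helper functions here are invented for the example), the whole loop fits in a few lines of Python:

```python
# Toy recommender: note what the user likes, then rank videos by those likes.
from collections import Counter

preferences = Counter()  # what the "algorithm" has noted about you so far

def like(video_topic: str) -> None:
    """Record a like; the system takes note of the topic."""
    preferences[video_topic] += 1

def recommend(candidates: list[str]) -> list[str]:
    """Rank candidate videos: topics you liked most come first."""
    return sorted(candidates, key=lambda topic: -preferences[topic])

like("dogs")
like("dogs")
like("cooking")
print(recommend(["news", "dogs", "cooking"]))
# ['dogs', 'cooking', 'news'] -- suddenly dog videos top the feed
```

Real recommenders weigh far more signals than a like count, but the principle is the same. And AI has come a long way since such simple tricks; computers beating humans at chess is ancient history.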

Today AI is broadly divided into three categories. First is narrow artificial intelligence, where the system is given one task and does just that. Think of smart appliances, self-driving cars, streaming apps, or even health care tools. This is simple: we give a machine a task, and it does it. That is the rudimentary stage of artificial intelligence. Then comes the second stage, artificial general intelligence, where AI can rival humans and do multiple things simultaneously; many call ChatGPT a step towards artificial general intelligence. Then we have the third stage, where things get a little scary. It is called artificial super-intelligence: machines going beyond human intelligence, with a mind of their own. Remember all those sci-fi movies with the villain robot? That is this stage, or so we are told; we are not at Terminator levels yet. The current rise of AI has raised many questions. Is it ethical? Can it get out of hand? Are we too late to control it?

Some of tech’s most prominent minds wrote a letter in March this year. It included the likes of Twitter chief Elon Musk and Apple co-founder Steve Wozniak. They asked for a pause on AI development for six months. Part of that letter reads, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” So clearly, top tech leaders are worried. They believe AI is moving too fast, fast enough to impact society and humanity negatively. And AI is not without its own set of problems. The first is the way it is used. Facial recognition to unlock your phone sounds great, but what happens when the same technology is used to spy on you? Countries use it to surveil their citizens. Some even use it for racial profiling, as China does: it uses surveillance to oppress Uyghur Muslims.

The other problem is bias. You would expect machines to be neutral, but at the end of the day they are made by us, and as humans we are inherently biased, so the machines we make are biased too. AI often amplifies the biases that already exist in society. AI is only as good as the data it is fed: if the data is biased, so will be the machine. Imagine an AI algorithm trained on data involving only white men. If that same algorithm is then applied to women, the results will be skewed.
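Here is a minimal sketch of how that happens, with made-up numbers: a simple “model” learns a decision threshold from one group only, then misjudges a group whose true pattern is different.

```python
# Data bias in miniature: train on group A only, then apply to group B.

# Training data from group A: (feature value, outcome) pairs.
group_a = [(60, 0), (65, 0), (72, 1), (80, 1)]

# "Training": pick the midpoint between the highest 0 and the lowest 1.
highest_negative = max(x for x, y in group_a if y == 0)  # 65
lowest_positive = min(x for x, y in group_a if y == 1)   # 72
threshold = (highest_negative + lowest_positive) / 2     # 68.5

def predict(x: float) -> int:
    """Predict the outcome using the threshold learned from group A."""
    return 1 if x >= threshold else 0

# Group B was never seen in training, and its true positives start
# around 58 (an invented figure). The model misjudges every one of them.
group_b_positives = [58, 62, 66]
print([predict(x) for x in group_b_positives])  # [0, 0, 0] -- all wrong
```

The machine is not malicious; it simply never saw data like group B’s. That brings us to the third issue: lethal mistakes.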

Humans and machines both make mistakes, but what happens when the mistake is life-threatening? What happens when a self-driving car kills someone? Who do you blame: the car or the person behind the code? And what if the stakes are even higher? I’m talking about lethal autonomous weapons, aka killer robots. These would be able to identify and kill targets without human help. We are not there yet, but precursors do exist. In 2020, an AI drone may have killed a human for the first time.

The details are a little murky. According to the United Nations, a Turkish-made Kargu-2 drone hunted down members of Libya’s National Army. The manufacturer says the Kargu-2 can use machine learning to classify objects, which allows it to fire autonomously. Turkey denies using the Kargu-2 in this way, but the point stands: the drone can act with a mind of its own, just like many other armed drones.

So the United Nations wants to ban killer robots, but countries do not agree. The USA, for example, wants guidelines, not a complete ban. So we cannot regulate killer robots yet, but can we regulate AI? That is what OpenAI’s Sam Altman called for at the U.S. Congress. He said AI could cause significant harm to the world, so regulation was the need of the hour. Leading the trend is China, which has drafted a law that seeks to regulate AI in all its forms. Chinese tech companies will need to register their AI products and put them through a security assessment first. China is always ready to regulate everything, even human rights, but what about the West?

The European Union, too, is working on an AI act. The European AI Act would be the West’s first law for AI systems. The regulation takes a risk-based approach. What does that mean? The higher the risk, the stricter the regulation. This act could set a precedent for others, especially the USA.

Our arch-rival India is also considering a regulatory framework for AI, but all governments are just playing catch-up. AI is moving at breakneck speed, and countries must race much harder to control it. They may already be late, as they were with social media, where they thought of regulation only after things went out of control. Just like social media, AI is a double-edged sword: it can be used for good and for bad. We must not repeat the mistakes we made with social media. The regulations must come before the damage is done.

Muhammad Fahim Khan
The writer is a freelance columnist
