Question

Could AI Cause Human Extinction?

Answer

Artificial intelligence (AI) represents a new generation of digital software, one that stands apart from its predecessors in several important ways.

Its ultimate power is still untested and largely theoretical, yet the industry around it is likely to keep blooming over the next few decades, disrupting industries, taking over jobs, and hopefully freeing humanity from many daily, repetitive tasks, much as the machinery of the Industrial Age ushered in a new era.

But unlike the invention of the conveyor belt, it has a unique ability: it can adapt and learn, and may someday even think for itself.

In this article, Techopedia explores whether the birth of AI could lead to an outcome none of us could want — the end of humanity, or at least the loss of our dominant spot on the planet.

How Does AI Differ From Conventional Computing Technology?

First and foremost, as it stands today, AI has the ability to mimic human speech and behavior to a degree not found in traditional technologies. The latest iterations of GPT (Generative Pre-trained Transformer) models have shown a remarkable capacity for creating highly articulate text and speech from just a few simple prompts.
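
To make this concrete, here is a minimal sketch of prompt-driven text generation using the open-source Hugging Face transformers library, with the small GPT-2 model standing in for the much larger GPT systems described above (the model choice and prompt are illustrative only):

```python
# Minimal sketch: generate text from a short prompt.
# GPT-2 is a small, freely available stand-in for larger GPT models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence differs from conventional software because"
result = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(result[0]["generated_text"])
```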

This capacity allows AI to streamline and simplify the creation of original content for websites and streaming services, but it also allows people to present GPT-generated works as their own. Other forms of generative AI, such as Midjourney, can do the same with visual media and have even won top prizes in art contests, prompting widespread debate about what art is and whether machines can genuinely engage in creative expression.

Another way in which AI excels beyond traditional computing environments is its capacity to learn. Most software releases are developed over many months to perform exactly the way users demand.

AI begins with only basic programming, but its algorithms are able to retrain themselves to function better as data from the environment and from their own actions is retrieved and analyzed. A model that has only a rudimentary understanding of the game of chess, or none at all, can observe how the game is played, analyze the strategies that lead to victory, and essentially adjust its own parameters until it plays at grandmaster level — all in a relatively short time. The same capability can be applied to other complex environments, such as supply chains, product and service development, and even market analysis and strategic business planning.
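
As a rough illustration of this self-improvement loop, the toy sketch below uses tabular Q-learning on a trivial "reach the goal" game rather than chess; every name and parameter here is illustrative, not drawn from any production system:

```python
# Toy sketch: an agent on positions 0..4 learns, from trial and error alone,
# that moving right reaches the goal. No strategy is ever programmed in.
import random

N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]                      # step left or right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(200):
    state = 0
    while state != GOAL:
        if random.random() < epsilon:   # occasionally explore...
            action = random.choice(ACTIONS)
        else:                           # ...otherwise exploit what it knows
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0
        # The "retraining" step: the model improves its own value estimates.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next
                                             - q_table[(state, action)])
        state = next_state

# After training, the learned policy is "move right" in every non-goal state.
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(GOAL)})
```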

To support both of these achievements, AI has the ability to ingest and analyze vast volumes of data – far more than humans can ever hope to comprehend. It also has a far greater capacity to memorize and retrieve facts without error.

Why Do Some Experts Say This May Cause Human Extinction?

One of the main dangers of AI, as suggested by many leading voices in the scientific community, is that the technology will displace humans as the smartest beings on the planet. This could lead to a loss of control that proves highly detrimental to the human race — perhaps to the point of extinction.

While few people believe that there will be a malevolent digital overlord bent on ridding the planet of these flawed, messy, and frail bags of water (us), there are two ways in which AI could go seriously wrong:

  • The technology evolves to a point where it can insulate itself from human interference and take actions that make the planet unlivable.
  • Humans may use the technology to create weapons or other offensive capabilities that they cannot control, leading to the destruction of friend and foe alike.

At the heart of these concerns is the fact that, at the moment at least, the internal processes that most AI models employ are unknown.

These self-initiated computations are so complex and so intricate that not even the experts can tell why a particular model chose to put a green apple, rather than a red one, in an image of a bowl of fruit. This lack of visibility, and of explainable AI (XAI), creates doubt as to why a model behaves the way it does, which in turn fuels the fear that it could start misbehaving to a calamitous degree.
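
Work on XAI aims to pry open this black box. As a minimal sketch, the snippet below applies scikit-learn's permutation importance, a simple, established explainability technique; the dataset and model are illustrative stand-ins for far more complex systems:

```python
# Sketch of one common XAI technique: permutation importance, which
# estimates how much each input feature drives a model's decisions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(load_iris().feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```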

What Are Some of the Counter-Arguments to This View?

The chief argument against this line of reasoning is that these fears generally apply to only one, still largely theoretical, form of AI called artificial general intelligence (AGI). AGI and its anticipated successor, artificial superintelligence (ASI), are expected to mimic the human brain’s thought processes, essentially forming a thinking, and possibly sentient, digital intelligence – like the ones populating any number of sci-fi books and movies.

While work is progressing toward this goal, it remains at an extremely rudimentary stage. The computing power alone needed to simulate the brain’s roughly 100 billion neurons and an estimated 100 trillion synaptic updates per second is achievable but substantial, particularly in terms of energy consumption.
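
A rough back-of-envelope calculation shows why the raw compute is considered achievable even as the energy bill remains daunting; note that the cost assumed per synaptic update below is purely illustrative, not a published benchmark:

```python
# Back-of-envelope estimate; the FLOPs-per-update figure is an assumption.
updates_per_sec = 100e12   # ~100 trillion synaptic updates/s (estimate above)
flops_per_update = 1_000   # assumed cost of simulating one update
available = 1e18           # ~1 exaFLOP/s, today's largest supercomputers

needed = updates_per_sec * flops_per_update
print(f"needed ~{needed:.0e} FLOP/s vs ~{available:.0e} FLOP/s available")
# -> needed ~1e+17 FLOP/s vs ~1e+18 FLOP/s available
```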

Even more problematic is mapping all of this neural activity (which is still largely a mystery) and then recreating it in digital form.

Today’s AI is mostly defined as artificial narrow intelligence (ANI); that is, it is limited to achieving very specific outcomes. Prime examples of ANI are the models being trained to operate autonomous vehicles. Their goal is to move passengers safely from one point to another without hitting anything, not to commandeer the world’s weapons systems and obliterate the planet. More complex models are being trained to manage city-wide or even national traffic flows, but even these are made up of multiple narrow AIs, each focused on its own area of responsibility.

Of course, there is always a chance that something can go wrong, but in all likelihood, the result will be a traffic jam or maybe a flood downriver from a dam, not the end of humanity. And even if AI does become integrated with the control systems overseeing global energy or food production or nuclear weapons arsenals, there is no reason why the same controls that prevent biological intelligence from running amok cannot be applied to artificial intelligence.

Are There Ways to Control AI?

The simplest way to prevent any unwanted result from a computer is that age-old advice from the IT department: turn it off and then turn it on again. This becomes vastly more difficult if and when AI becomes distributed across global digital footprints, but it is not impossible. And even the experts who say that AI could cause global extinction are quick to add the caveat: provided we don’t do anything to prevent it now.

But given that we are currently dealing with narrow AI, not an all-powerful AGI, there is more benefit in giving the technology room to operate than in clamping down on it.

Under narrow models, the idea is to tell the software what needs to be done, not how to do it. If the results are unsatisfactory, let it know what is wrong so it can try again – just as you would with any human worker. And only after the model has gained proficiency at one task should it be entrusted with another – again, just as with humans.
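
As a toy sketch of this outcome-based approach, the loop below specifies only the desired result and a feedback score, never the method; the "model" here is a trivial random-mutation routine, and the target phrase is purely illustrative:

```python
# Toy sketch: specify *what* is wanted and score each attempt; the
# program finds *how* to get there through feedback alone.
import random
import string

TARGET = "route traffic safely"           # the outcome we ask for
CHARS = string.ascii_lowercase + " "

def score(attempt: str) -> int:
    # Feedback: how many characters are already correct.
    return sum(a == t for a, t in zip(attempt, TARGET))

attempt = "".join(random.choice(CHARS) for _ in TARGET)
while score(attempt) < len(TARGET):
    i = random.randrange(len(TARGET))
    candidate = attempt[:i] + random.choice(CHARS) + attempt[i + 1:]
    # Keep the change only when the feedback does not get worse;
    # otherwise the model simply "tries again".
    if score(candidate) >= score(attempt):
        attempt = candidate

print(attempt)  # converges to the requested outcome
```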

Should I Be Worried About AI?

So, is AI dangerous? At this point, no.

Fears of an all-powerful super-brain are vastly overblown, and even the impact of today’s AI on business and life in general is still more hype than reality. AI is still just a technology, and the tech industry has a long history of overpromising and under-delivering.

Arthur Cole
Technology Writer

Arthur Cole is a freelance technology journalist who has been covering IT and enterprise developments for more than 20 years. He contributes to a wide variety of leading technology websites, including IT Business Edge, Enterprise Networking Planet, Point B and Beyond, and multiple vendor services.