AI Killer Robots: Governments’ Ethical Quandary Over Lethal Autonomous Weapons Systems

Professor Dave Waters was right when he said, “The potential benefits of artificial intelligence are huge, so are the dangers.”

Lethal autonomous weapons systems (LAWS) are a worthy contender for the top spot on the list of dangerous AI applications.

Also known as slaughter bots, these AI weapons “use sensor suites and computer algorithms to independently identify a target and employ an onboard weapon system to engage and destroy the target without manual human control of the system.”

The reality is that killer robots are not a distant threat. The U.S. has reportedly made a significant investment in Air Force AI drones, and there are ample examples of the technology being used in the conflict between Russia and Ukraine, as well as in the Middle East.

These AI weapons require no human to pull the trigger or even be part of the decision-making process that leads to death.

Cue the red flag.

The concept of “human out of the loop” autonomy is an ethical minefield, and because an outright ban is not on the table, the world is wondering — what are the governing powers going to do about the threat of AI weapons?

Key Takeaways

  • LAWS require no human involvement to function.
  • Algorithmic bias and system inaccuracy in LAWS can lead to catastrophic outcomes.
  • The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy is not legally binding and relies on voluntary constraints.
  • The Vienna Conference on Autonomous Weapons Systems and the Challenge of Regulation called for legally binding restrictions on AI weapons.

The Moral Panic Surrounding Killer Robots

In 2019, the U.N. Secretary General, António Guterres, stated that LAWS are “politically unacceptable, morally repugnant, and should be prohibited by international law.”

While this has not become a reality, many share Guterres’ conviction that AI in warfare is morally indefensible.

But what are the risks of autonomous weapons?

AI is well known for making mistakes, but when it comes to killer robots, malfunctions could be catastrophic. Imagine if AI’s infamous algorithmic bias led to discrimination that had deadly consequences.

This is a chief concern of Bonnie Docherty, a lecturer at Harvard Law School’s International Human Rights Clinic. She said in an interview with the Harvard Gazette:

“There’s ample evidence that artificial intelligence can become biased. We in the human-rights community are very concerned about this being used in machines that are designed to kill.”

Another ongoing anxiety is civilians being mistaken for combatants. The 2013 statement, “Scientists’ Call to Ban Autonomous Lethal Robots,” communicated doubts that AI could ever obtain “the functionality required for accurate target identification” so as not to cause high levels of collateral damage.

Then there are others who argue that “AI-assisted weapons could be more accurate than human-guided ones, potentially reducing both collateral damage…and the numbers of soldiers killed and maimed.”

Even after a decade of advances in the field, there are still no guarantees that LAWS will not harm civilians, and even if there were, that would not resolve the overarching ethical tension of allowing algorithms to make life-and-death decisions.

Humanity at a Crossroads: The Challenge of Autonomous Weapons Systems Regulation

The contending perspectives concerning the pros and cons of LAWS extend to how different nations believe they should be regulated.

In March 2024, the U.S. State Department hosted an assembly at the University of Maryland to discuss the adoption of voluntary constraints, as outlined in the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy.

Fifty-two countries have signed the declaration, but as critics have rightly pointed out, the document is not legally binding and carries no threat of penalty if signatories fail to comply.

In contrast to the modest 150 participants who attended the U.S. plenary, the Vienna Conference on Autonomous Weapons Systems and the Challenge of Regulation, held in April 2024, brought together around 1,000 participants from more than 140 countries.

It was not only size, however, that differentiated the two assemblies. The discussions in Vienna struck a notably dystopian tone and called for legally binding restrictions on AI weapons.

Austria’s Minister for Foreign Affairs, Alexander Schallenberg, opened the conference with a chilling comparison: “This is, I believe, the Oppenheimer moment of our generation.”

The prospect of removing humans from the life-and-death decisions of war gave Schallenberg a sense of urgency.

“Now is the time to agree on international rules and norms to ensure human control. At least let us make sure that the most profound and far-reaching decision: who lives and who dies remains in the hands of humans and not of machines.”

Much of the discussion centered around accountability. Who can be held responsible if humans are no longer pulling the trigger?

The president of the International Committee of the Red Cross, Mirjana Spoljaric Egger, said:

“We cannot ensure compliance with international humanitarian law anymore if there is no human control over the use of these weapons.”

While accountability is certainly a key consideration, the ease of weaponizing AI in comparison to, say, harnessing nuclear arsenals was also raised as a red flag.

Schallenberg predicted that “It probably very quickly will end up in the hands of non-governmental actors or terrorist groups.”

Estonian programmer Jaan Tallinn also warned that once LAWS “become able to perfectly distinguish between humans, they will make it significantly easier to carry out genocides and targeted killings that seek specific human characteristics.”

No efforts were spared in vividly portraying the potential horrors that killer robots could perpetrate.

Tallinn, however, added a note of optimism: “We have acted preventatively on banning blinding laser weapons, not to mention constraints on biological, chemical, and nuclear weapons.”

This can serve as a hopeful sign that preventative measures are possible, but the need to act quickly and enforce regulations is paramount.

To drive the point home, the proceedings ended as they had begun, with another frightening comparison from Schallenberg:

“We all know the movies: Terminator, The Matrix, and whatever they’re called. We don’t want killer robots.”

The Bottom Line

Nearly a thousand years after Saint Augustine advanced his Just-War Theory, Thomas Aquinas developed comprehensive criteria for what constitutes a just war. Among other points, he stated that the use of force must distinguish between combatants and civilians. This has come to be known as the principle of discrimination in modern military ethics.

It is this principle that protects civilians, and it is this principle that LAWS potentially threaten.

Many argue that deadly AI has no regard for human life, while others contend that killer robots could be more ethical than their human counterparts. Wherever you fall on this issue, everyone can surely agree that LAWS require strict regulation.

We can all echo Schallenberg’s words: “We don’t want killer robots.”

John Raspin
Technology Journalist

John Raspin spent eight years in academia before joining Techopedia as a technology journalist in 2024. He holds a degree in Creative Writing and a PhD in English Literature. His interests lie in AI and he writes fun and authoritative articles on the latest trends and technological advancements. When he's not thinking about LLMs, he enjoys running, reading and writing songs.