Is AI Warfare Upon Us? How AI is Transforming Warfare in North Korea


Is the era of AI warfare upon us? Last week, North Korean leader Kim Jong Un oversaw tests of a series of suicide drones, which the country reportedly intends to use to wage war against South Korea.

“It is necessary to develop and produce more suicide drones of various types to be used in tactical infantry and special operations units, as well as strategic reconnaissance and multi-purpose attack drones,” Kim Jong Un said during the tests, according to the state-run Korean Central News Agency.

However, it’s not just North Korea that’s looking to use AI as part of its military operations. Back in July, Reuters reported that Ukraine is rolling out its own AI-enabled war drones, which could operate in swarms and enact strikes on Russian targets.

These examples are just two of many, and they represent a new era of AI-enabled warfare. But what does this mean exactly? Let’s dive in.

Key Takeaways

  • North Korea has tested suicide drones for potential use in warfare and has ordered scientists to integrate AI into drone warfare.
  • AI is entering modern warfare, particularly for target identification and decision-making.
  • The Russia-Ukraine conflict has seen the use of AI-powered drones and enhanced military intelligence.
  • The increased role of AI in warfare raises concerns about autonomous decision-making and human oversight.
  • There is a growing need for international rules of engagement to regulate AI in warfare.

How AI Is Being Used in Modern Warfare

Besides piloting drones, one of the most common ways AI is being used is to identify potential targets. For example, in the Israel-Hamas conflict, the IDF has reportedly used an AI-powered database to identify 37,000 potential targets with links to Hamas or Palestinian Islamic Jihad (PIJ).

Intelligence sources suggest that the IDF has set pre-authorized thresholds for the number of civilians that could be killed in a given strike.


The system, known as Lavender, uses artificial intelligence and machine learning to help automate decisions about military action.

We’ve also seen heavy use of AI in the Russia-Ukraine war in the form of drone-based combat, with swarms of suicide drones and smart missiles. In one instance, a swarm of aerial and sea drones was used to attack the Admiral Makarov, one of Russia’s most prominent vessels on the Black Sea.

This conflict has also seen AI used to improve military intelligence. According to National Defence, the combatants are using AI not only to analyze satellite images, but also to geolocate and process open-source data like social media photos in geographically sensitive locations.

Military groups can develop neural networks that combine photos, drone video footage, and satellite imagery to enable better decision-making on the ground.

Why Using AI in Warfare Matters

AI is one of many technologies being weaponized in warfare, but it carries with it some unique risks. Arguably the most pressing is that AI systems can act autonomously and decide the fate of human lives.

While many of these solutions are used under careful human oversight, we are seeing increasing automation of military action, which could obscure the human cost of a given strike.

On the other side of the coin, the International Review of the Red Cross suggests that improved intelligence could also help to reduce civilian casualties.

“AI and machine learning-based decision support systems may enable better decisions by humans in conducting hostilities in compliance with international humanitarian law and minimizing risks for civilians by facilitating quicker and more widespread collection and analysis of available information,” a report released in 2020 said.

However, it’s important to note that overreliance on predictions could lead to “worse decisions or violations of international humanitarian law,” particularly due to technological limitations such as unpredictability, lack of explainability, and bias.

That being said, we shouldn’t forget that human beings can weaponize any technology. Looking back to the First World War, we can see how chlorine, a useful chemical for disinfecting water, was used to create a gas that killed thousands of soldiers.

Even during the Iraq war, we saw drone technology used to launch strikes on targets that very often included innocent civilians.

In any case, if AI is used in military operations, there needs to be complete transparency over how decisions are made. Likewise, significant military actions that result in the loss of human life should still ultimately be decided by humans, who can be held accountable for those choices.

The Bottom Line

AI still has a long way to go — but it is already changing the face of human society. From AI-generated content to automated drones, it appears that nothing is safe as we march into the future.

There is also a troubling dynamic at play: if any country brings AI into warfare, other countries will feel compelled to do the same or risk falling behind on the battlefield.

In the future, human society will need robust rules of engagement governing the use of AI in warfare — or we will see serious human rights violations.

Tim Keary
Technology Specialist

Tim Keary is a freelance technology writer and reporter covering AI, cybersecurity, and enterprise technology. Before joining Techopedia full-time in 2023, his work appeared on VentureBeat, Forbes Advisor, and other notable technology platforms, where he covered the latest trends and innovations in technology.