Living in a World Where AI Is Trained on the Battlefield

Artificial intelligence (AI) is quickly making its way into enterprises throughout the commercial and government sectors, so it shouldn’t come as much of a surprise that it has caught the attention of militaries around the world as well.

But while Terminator-style autonomous robot soldiers loom largest in the public imagination (thanks to decades of science fiction stories), AI is being developed across a wide range of use cases, including intelligence gathering, logistics, and medical services.

Nevertheless, smart weapons are part of that mix, and much of the research in this area is classified. So, before we go any further, it is worth considering the current state of AI-driven warfare and whether there are any causes for concern.

Key Takeaways

  • AI is increasingly being integrated into military operations, with ongoing conflicts like the Russia-Ukraine war serving as a “living lab for AI warfare”, according to some commentators.
  • Using AI in warfare raises many concerns and ethical considerations, particularly regarding decision-making transparency and the potential for rapid escalation.
  • The speed AI brings to military decision-making, by compressing the “OODA loop” (observe, orient, decide, act), raises questions about human control and oversight.
  • Intelligent military technology also lacks clear rules and guardrails, which, combined with the secretive nature of military AI development, creates a visibility problem.

AI and Real-World Training

The Russia-Ukraine war is already considered a “living lab for AI warfare”, according to National Defense Magazine. While AI might not be center stage, the magazine argues that the conflict is being used to adapt and fine-tune AI technologies for deployment across a range of military capabilities, including creating tighter cohesion between front-line units and those back at command headquarters. This is intended to produce a more dynamic and agile fighting force.

Among the known capabilities deployed in Ukraine are geospatial intelligence, used for surveillance and reconnaissance, as well as facial recognition, cyber technologies (both offensive and defensive), and deepfakes. All of these tools are being trained on real data from a real war, which means they will likely be even more effective in the next conflict.

Viewed through a different lens, AI does stand a chance of reducing errors and weeding out many of the inefficiencies of modern warfare – just like it does in commercial settings. This offers up the possibility of more accurate targeting to reduce civilian casualties and friendly fire, as well as shorter wars, less destruction, and perhaps even a muted post-war economic impact, which tends to affect both sides alike.

Still, many of the same ethical concerns that confront the civilian use of AI are present in military applications and are more critical given the nature of weapons systems and their functions.

Will AI Decision-Making Be Transparent?

As United Nations University asks: Are there ways these systems can be trained to be impartial and unbiased? Will their decision-making processes be transparent? And should the same privacy regulations apply to the military, which will have to train its algorithms on vast amounts of data?

Unfortunately, the development of intelligent military technology has “no rules and few guardrails”, according to Foreign Policy, meaning countries worldwide are free to devise all manner of weapons in complete secrecy.

A major cause for concern is the possibility that decision-makers could come to rely too much on AI to guide their thinking – particularly when it comes to command and control of weapons systems.

A technology like ChatGPT, for example, closely mimics sentience even though it is driven by algorithms. If similar technology were to infiltrate military infrastructure, hostilities could break out far faster than they have in the past, and this is particularly troubling when contemplating nuclear capabilities.

Who’s in Charge?

Former U.S. Joint Chiefs of Staff Chairman Gen. Mark Milley echoes this concern by pointing out how AI speeds up the “OODA loop” (observe, orient, decide, act) – the basic decision cycle that militaries use to outpace adversaries.

Right now, U.S. policy is to keep humans in charge of this process, as well as of the use of autonomous weapons.
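
To make this concrete, here is a minimal, hypothetical Python sketch of a single OODA cycle with a human approval gate in front of the “act” step. Every name in it is illustrative and assumed for the example; it is not modeled on any real command-and-control system.

```python
# Hypothetical sketch of one OODA cycle with a human-in-the-loop gate.
# All names are illustrative only.

def observe(sensor_feeds):
    # Gather raw readings from whatever sensors are available.
    return [feed() for feed in sensor_feeds]

def orient(observations, context):
    # Fuse raw observations with known context into a situational picture.
    return {"observations": observations, "context": context}

def decide(picture):
    # An AI model might propose a course of action here; we stub it out.
    return {"proposed_action": "hold position", "basis": picture}

def act(decision):
    # Execution only happens after explicit human approval (see below).
    print(f"Executing: {decision['proposed_action']}")

def ooda_cycle(sensor_feeds, context, human_approves):
    # The human_approves callback models the policy of keeping a person
    # in charge: nothing is executed without an affirmative sign-off.
    picture = orient(observe(sensor_feeds), context)
    decision = decide(picture)
    if human_approves(decision):
        act(decision)
    else:
        print("Operator rejected the proposal; no action taken.")

# Example run with stubbed sensors and an operator who declines.
ooda_cycle(
    sensor_feeds=[lambda: "radar: clear", lambda: "satellite: clear"],
    context={"theater": "exercise"},
    human_approves=lambda decision: False,
)
```

The worry the article raises maps directly onto that single `if` statement: the faster AI drives the rest of the cycle, the more pressure there is to weaken or remove the approval gate.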

But the extent to which AI can influence these decisions is unclear. And, of course, there is no way of knowing how strict other nations are when it comes to implementing AI in their command structures.

The Bottom Line

AI is not the first technology to usher in new forms of warfare, and it probably won’t be the last. But just as with nuclear weapons, mechanized warfare, submersibles, satellites, and a host of other technologies, decisions must be made about how new weapons should be used and where moral and ethical red lines should be drawn.

But AI is evolving faster than most people realize, and it isn’t entirely clear what the end result will be, if there ever is one. Military applications are likely to be the toughest to regulate, given the desire to keep development programs under wraps. But they are also likely to be the most consequential, with perhaps millions of lives hanging in the balance.

Arthur Cole
Technology Writer

Arthur Cole is a freelance technology journalist who has been covering IT and enterprise developments for more than 20 years. He contributes to a wide variety of leading technology websites, including IT Business Edge, Enterprise Networking Planet, Point B and Beyond, and multiple vendor services.