Explainable artificial intelligence is becoming a much-heralded part of cutting-edge work in data science. By helping humans keep control of an inherently dynamic and hard-to-predict technology, explainable AI answers one of our collective questions about artificial intelligence: how does it actually work?
To understand explainable AI, it helps to understand what "regular AI" looks like. Traditionally, a typical AI project delivers a powerful new software capability, buried in algorithms, training sets, and layers of code, that functions as a "black box" for users. They know that it works – they just don't know exactly how.
This can lead to "trust issues," where users question the basis on which a technology makes its decisions. That is exactly what explainable AI is meant to address: explainable AI projects come with additional infrastructure that shows end users the intent and structure of the AI – why it does what it does.
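To make that distinction concrete, here is a minimal sketch in Python. It uses scikit-learn (a library the article itself does not mention, chosen here purely for illustration) to contrast a black-box prediction with one that also surfaces what the decision is based on:

```python
# A minimal sketch, not any specific product: a "black box" model that only
# predicts, next to an "explainable" usage that also reports which input
# features drove its decisions. The iris dataset is used for brevity.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Black-box usage: an answer with no rationale.
print("prediction:", data.target_names[model.predict(data.data[:1])[0]])

# Explainable usage: the same answer plus the model's global feature
# weights, so a user can see what the decision rests on.
for name, weight in sorted(
    zip(data.feature_names, model.feature_importances_),
    key=lambda pair: -pair[1],
):
    print(f"{name}: {weight:.2f}")
```

Even a simple readout like this moves a system from "trust me" to "here is what I looked at," which is the spirit of the explainable AI projects described above.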
In an age when prominent figures like Bill Gates and Elon Musk are voicing concern about where artificial intelligence is headed, explainable AI looks extremely attractive. Experts contend that good explainable AI could help end users understand why technologies do what they do, increase trust, and improve the ease of use and adoption of these technologies.
DARPA, for its part, has explained why it is interested in such projects. A page on the agency's site notes that the Department of Defense anticipates a "torrent" of artificial intelligence applications, along with some amount of chaos in their development.
“Continued advances promise to produce autonomous systems that will perceive, learn, decide, and act on their own,” writes David Gunning, who managed DARPA’s Explainable AI (XAI) program. “However, the effectiveness of these systems is limited by the machine’s current inability to explain their decisions and actions to human users. … Explainable AI – especially explainable machine learning – will be essential if future warfighters are to understand, appropriately trust, and effectively manage an emerging generation of artificially intelligent machine partners.”
Gunning’s online essay suggests that explainable AI systems will help to “provide the rationale” for technologies, show their strengths and weaknesses, and make use cases more transparent. A graphic on the page shows how a straightforward artificial intelligence pipeline, running from training data to a learned model, would be augmented by something called an explainable model and an explainable interface that help the user answer questions about the system’s behavior. Gunning further suggests that an explainable AI program will have two major focus areas: one would sift through multimedia data to find what is useful to users, and a second would simulate decision processes for decision support.
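As a rough illustration of that pipeline, the sketch below pairs a black-box model with a surrogate "explainable model" and a small explanation interface. The shallow-surrogate technique shown here is a common explainability method, not something drawn from DARPA's program, and the function names are this article's own:

```python
# A hedged sketch of the pipeline in Gunning's graphic: a black-box model
# learned from training data, augmented by (1) an explainable model, here a
# shallow decision tree trained to mimic the black box, and (2) a tiny
# explanation interface that answers "why did you decide that?" in words.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
black_box = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Explainable model: a depth-2 tree fit to the black box's *outputs*,
# trading a little fidelity for human-readable decision rules.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0).fit(
    data.data, black_box.predict(data.data)
)

# Explainable interface (hypothetical helper): surface the surrogate's
# rules to the end user alongside the prediction.
def explain(x):
    label = data.target_names[black_box.predict([x])[0]]
    rules = export_text(surrogate, feature_names=list(data.feature_names))
    return f"Predicted '{label}'. Approximate decision rules:\n{rules}"

print(explain(data.data[0]))
```

The design trade-off is the interesting part: the surrogate is deliberately less accurate than the black box, because its job is not to decide but to give the user a faithful-enough rationale they can actually read.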
Ultimately, DARPA hopes to provide a “toolkit” that developers can draw on to build future explainable AI systems.