What Does Explainable Artificial Intelligence (XAI) Mean?
Explainable AI (XAI) is artificial intelligence that explains how a specific outcome was generated in terms a human can understand, gives users a measure of confidence in the accuracy of its outputs, and is used only under the conditions for which it is intended.
The goal of XAI is to ensure that artificial intelligence programs are transparent about both the purpose they serve and the way they work. Explainability is a common objective for data scientists, engineers and others working to advance artificial intelligence.
Explainability provides transparency by allowing data scientists to screen data and algorithmic outputs for unacceptable results, including those caused by inadvertent bias. It is one of the five major principles that characterize trust in AI systems; the other four are resiliency, lack of bias, reproducibility and accountability.
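To make the idea of screening algorithmic outputs concrete, here is a minimal sketch of one simple form of explanation: for a linear model, each feature's contribution to a prediction can be reported alongside the prediction itself. The model, weights and "loan applicant" data below are entirely hypothetical, chosen only to illustrate the principle.

```python
# Minimal sketch: per-feature explanation of a linear model's prediction.
# All weights and feature names here are hypothetical, for illustration only.

def predict_with_explanation(weights, bias, features):
    """Return a prediction plus each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical loan-scoring weights, assumed to have been learned elsewhere.
weights = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}
bias = 0.1

applicant = {"income": 2.0, "debt": 1.0, "years_employed": 3.0}
score, why = predict_with_explanation(weights, bias, applicant)

print(f"score = {score:.2f}")
# List contributions from most to least influential, so a reviewer
# can see which features drove the outcome.
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.2f}")
```

An output like this lets a reviewer see, for example, that a high debt value pushed the score down. Real XAI tooling applies far more sophisticated techniques (for models whose internals are not this transparent), but the goal is the same: attributing an outcome to its inputs in a form a human can inspect.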