Explainable Artificial Intelligence (XAI)

Definition - What does Explainable Artificial Intelligence (XAI) mean?

Explainable artificial intelligence (XAI) is a key term in AI design and in the tech community as a whole. It refers to efforts to ensure that artificial intelligence programs are transparent in their purposes and in how they work. Explainable AI is a common goal for engineers and others trying to advance artificial intelligence.

Techopedia explains Explainable Artificial Intelligence (XAI)

The idea behind explainable AI is that AI programs and technologies should not be “black box models” that people cannot understand. Too often, experts say, artificial intelligence programs sacrifice transparency and explainability for sophisticated, experimental results. Powerful algorithms may produce useful results without actually showing how they arrived at them.
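The contrast can be illustrated with a minimal sketch: an inherently interpretable model, such as a linear one, lets you decompose every prediction into per-feature contributions, so the path from input to output is fully visible. The loan-scoring feature names and weights below are hypothetical, invented only for illustration.

```python
def explain_prediction(weights, bias, features):
    """Decompose a linear model's prediction into per-feature contributions.

    Each contribution is weight * value; their sum plus the bias equals
    the model's output, so the explanation accounts for it exactly.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    prediction = bias + sum(contributions.values())
    return prediction, contributions

# Hypothetical loan-scoring model: positive weights raise the score,
# negative weights lower it.
weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
prediction, contributions = explain_prediction(
    weights,
    bias=0.1,
    features={"income": 5.0, "debt": 2.0, "years_employed": 3.0},
)
print(prediction)     # 1.5
print(contributions)  # {'income': 2.0, 'debt': -1.2, 'years_employed': 0.6}
```

A deep neural network making the same prediction would offer no such breakdown; that gap is what XAI techniques aim to close.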

Explainable artificial intelligence is also a key part of applying ethics to AI. As tech leaders like Elon Musk have pointed out, artificial intelligence may end up having a net positive or negative effect, depending on how it is built and used by humans.

Explainable AI helps in the pursuit of ethical AI because it shows how machines perform their computational work and where problems may arise.
