In the world of artificial intelligence (AI), explainable AI (XAI) has attracted enormous attention in the past few years, with many emphasizing how important it is to the future of AI and machine learning. (Also read: Why Does Explainable AI Matter Anyway?)
And it is important, but it is not the whole solution. The desire to explain black box systems’ decisions is a good one; XAI tools and methods alone, however, will never be enough. If we want to provide full assurance for these systems’ decisions, we should be discussing how to deliver “understandable AI” instead.
XAI Is Hot Right Now for the Right Reasons
More and more, AI systems are making important decisions that impact our daily lives.
From insurance claims and loans to medical diagnoses and employment, enterprises are using AI and machine learning (ML) systems with increasing frequency. However, consumers have become increasingly wary of artificial intelligence. In insurance, for instance, a mere 17% of consumers trust AI to review their claims, because they cannot see how these black box systems reach their decisions. (Also read: Has a Global Pandemic Changed the World’s View of AI?)
Explainability for AI systems is practically as old as the field itself. In recent years, academic research has produced many promising XAI techniques and a number of software companies have emerged to provide XAI tools to the market. The issue, though, is that all of these approaches view explainability as a purely technical problem. In reality, the need for explainability and interpretability in AI is a much larger business and social problem—one that requires a more comprehensive solution than XAI can offer.
XAI Only Approximates the Black Box
It is perhaps easiest to understand how XAI works through an analogy. So, consider another black box: the human mind.
We all make decisions, and we are only more or less aware of the reasons behind them (even when we’re asked to explain them!). Now imagine yourself (the XAI) observing another person’s actions (that person being the original AI model) and inferring the rationale behind those actions. How well does that generally work for you?
With XAI, you are using a second model to interpret the original model. The “explainer” model is a best guess at the inner workings of the original model’s black box. It might approximate what is happening in the black box; it might not. How well should we expect it to approximate and “explain” non-human decisions? We can’t really know. Compounding the problem, different model types require different explainers, which makes the explainers more burdensome to manage alongside the models they describe.
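To make the idea concrete, here is a minimal sketch of a post-hoc “global surrogate” explainer, assuming a scikit-learn-style workflow; the synthetic data, the choice of models and the fidelity check are illustrative stand-ins, not any particular XAI product.

```python
# Illustrative sketch of a post-hoc surrogate "explainer" (not a specific XAI product).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Synthetic stand-in for, say, an insurance-claims dataset.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The "black box" the business actually uses.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate learns from the black box's *outputs*, not the true labels,
# so at best it approximates the original model's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity": how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate agrees with the black box on {fidelity:.1%} of cases")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```

Even a high fidelity score only says the surrogate tracks the black box on this data; it does not guarantee the printed rules reflect how the black box actually reasons.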
An attractive alternative is to build so-called “interpretable” models, which expose their decision logic by design. Some excellent recent research suggests that such “white box” models may perform just as well as black box ones in some domains. But even these models have a significant downside: they are still often not understandable to non-technical people.
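As a simple illustration of why, here is a sketch of a “white box” model, again assuming a scikit-learn-style workflow; the feature names are hypothetical. The entire decision logic is visible, but it is visible as weights and an intercept, which is legible to a data scientist rather than to a claims manager or a board member.

```python
# Illustrative "white box" model: the logic is visible by design,
# but it reads as coefficients, not as a business explanation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
feature_names = ["claim_amount", "policy_age", "prior_claims",
                 "region_code", "report_delay_days"]  # hypothetical names

white_box = LogisticRegression(max_iter=1000).fit(X, y)

# The model's full "explanation" is just this list of weights.
for name, weight in zip(feature_names, white_box.coef_[0]):
    print(f"{name}: weight = {weight:+.3f}")
print(f"intercept = {white_box.intercept_[0]:+.3f}")
```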
Explainable to Whom?
Another quick thought experiment: Imagine the imperfect explanations of XAI were, instead, perfect. Now, invite someone who isn’t a data scientist to review the model’s decisions—say, an executive in charge of a billion-dollar line of business who needs to decide whether to greenlight a high-impact ML model. (Also read: The Top 6 Ways AI Is Improving Business Productivity in 2021.)
The model could create an enormous competitive advantage and generate massive top-line revenue. It could also permanently damage the company’s brand or hurt its stock price if it runs amok. So it’s safe to say that executive would want some proof before the model goes live.
If that executive looked at the outputs of a typical explainer model, what they would find is basically gobbledygook: unreadable, decontextualized data with none of the attributes or logic they would expect when they hear the word “explanation.”
Herein lies the biggest issue with XAI as it is used in the enterprise, and interpretable models share it: the explanations require translation by technologists. The business executive, the risk organization, the compliance manager, the internal auditor, the chief counsel’s office and the board of directors cannot understand these explanations on their own. And what about the end users the model’s decisions affect?
Because of this, trust and confidence are hard to achieve inside the company, and external parties like regulators, consumer advocates and customers will find even less comfort.
The fact is, most “Explainable” AI tools are only explainable to a person with a strong technical background and deep familiarity with how that model operates. XAI is an important piece of the technologist’s toolkit—but it is not a practical or scalable way to “explain” AI and ML systems’ decisions.
Understandable AI: Transparency and Accessibility
The only way we’re going to get to the promised land of trust and confidence in decisions made by black box AI and ML is by enriching what gets explained and broadening who it is explained to. What we need is “Understandable AI”: AI whose decisions non-technical stakeholders can evaluate for themselves, alongside XAI tools for technical teams.
The foundation for understandability is transparency. Non-technical people should have access to every decision made by the models they oversee. They should be able to search a system of record, based on key parameters, to evaluate the decisions individually and in aggregate. They should be able to perform a counterfactual analysis on individual decisions, changing specific variables to test whether the results are expected or not. (Also read: AI’s Got Some Explaining to Do.)
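A counterfactual check of that kind can be quite simple in practice. Here is a minimal sketch, assuming an illustrative scikit-learn model and synthetic data in place of a real system of record; “feature_3” is a hypothetical stand-in for whichever variable a reviewer wants to test.

```python
# Illustrative counterfactual check on a single decision (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

decision = X[0].copy()                        # inputs behind one logged decision
original = model.predict(decision.reshape(1, -1))[0]

counterfactual = decision.copy()
counterfactual[3] += 2.0                      # change one specific variable
flipped = model.predict(counterfactual.reshape(1, -1))[0]

print(f"Original outcome: {original}; after changing feature_3: {flipped}")
```

In a real deployment, the same check would run against decisions logged in the system of record, with the tested variables chosen by the business owner rather than hard-coded.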
But we shouldn’t stop there. Understandable AI also needs to include the larger context in which the models operate. To build trust, business owners should have visibility into the human decision-making that preceded and accompanied the model throughout its life cycle. Here are just a handful of the vital questions everyone around a model should ask themselves:
- Why was this particular model considered the best choice for the business problem being addressed? What other options were considered; and why were they ruled out?
- What risks and limitations were identified during the selection process? How were they mitigated?
- What data was selected for inclusion in the model? How was it evaluated for appropriateness and potential problems?
- Were the data sources internal or external to the company? If third-party data was used, what assurances did vendors provide regarding their data governance practices?
- What did we learn during model development and training? How did those learnings inform the final product?
- How is our company ensuring problems are identified and rectified once the model is live in production?
Conclusion: XAI Is One Piece of the Understandable AI Solution
Explainability alone will not solve the problem of understanding how an AI or ML model is behaving. However, it can—and should—be an important piece of the larger Understandable AI picture.
With careful selection and design, these tools provide invaluable insight for the expert modeler and technical teams, particularly before a model is put into production. But if companies innovating with these intelligent models today do not consider their non-technical stakeholders’ needs, they will almost certainly endanger the success of many important projects—projects that could benefit the public and the companies developing them.