Why Companies Use ‘Counterfactual’ Thinking in AI to Check Their Decisions

Key Takeaways

Ever had a "Sliding Doors" moment? Counterfactual explanations in AI let you test the "what if" moments in life. Companies like Spotify already employ the technique.

In today’s rapidly evolving technological landscape, our lives are increasingly entwined with artificial intelligence (AI) systems.

AI makes decisions that impact us in significant ways, from diagnosing diseases to predicting genetic mutations and earthquakes, not to mention the myriad ways we are beginning to use the technology in everyday life.

When these AI-driven decisions do not align with our expectations or preferences, we demand more than just explanations for their choices.

We seek to understand not only why AI made a particular decision but also what steps we can take to alter that decision in our favor.

This understanding is what we refer to as a “counterfactual explanation.” It involves exploring “what if” scenarios, where we investigate how altering input data or conditions could have led to a different decision or outcome.

In contrast to explainable AI (XAI), where we merely identify the factors that influenced a decision, counterfactual explanations provide actionable insights, guiding us on how to reverse a decision. They answer the question of which attributes we would need to modify to change the decision.

These explanations are actionable by design, revealing how a minimal change to the inputs could flip the outcome.
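The idea of a minimal change flipping a decision can be sketched in a few lines of code. The snippet below uses a toy loan-approval model with hand-picked weights; the feature names, weights, and step size are all illustrative assumptions, not taken from any real system.

```python
# Toy counterfactual search: find the smallest change to one feature
# that flips a simple linear classifier's decision.
# Model, features, and weights are illustrative, not from a real system.

WEIGHTS = {"income": 0.004, "debt": -0.002}  # hypothetical loan model
BIAS = -0.15

def approves(applicant):
    """Linear score thresholded at zero: True means the loan is approved."""
    score = BIAS + sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)
    return score > 0

def counterfactual(applicant, feature, step=1.0, max_steps=100_000):
    """Nudge one feature until the decision flips; return the changed input."""
    original = approves(applicant)
    changed = dict(applicant)
    # Move in the direction that pushes the score toward the other side.
    direction = step if WEIGHTS[feature] > 0 else -step
    if original:
        direction = -direction
    for _ in range(max_steps):
        changed[feature] += direction
        if approves(changed) != original:
            return changed
    return None  # no flip found within the search budget

applicant = {"income": 30.0, "debt": 40.0}   # amounts in thousands
print(approves(applicant))                   # False: loan denied
print(counterfactual(applicant, "income"))   # minimal income that gets approval
```

A real counterfactual generator would search over all features at once and minimize the total change, but the one-feature walk above captures the core "what if" question: how little must change before the model says yes?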

Why Use Counterfactual Explanations in AI?

Counterfactual explanations offer significant advantages for AI, including:

Transparency: They provide insight into the decision-making process of AI systems, making it easier to understand and interpret their choices.
Accountability: They allow for a more precise assessment of the AI’s reasoning and potential biases.
Improvement: By understanding how different inputs or conditions could lead to different outcomes, AI systems can be refined and made more reliable.
Trust: Users are more likely to trust AI systems when they can grasp the reasoning behind the decisions made.

Beyond making AI explainable and trustworthy, counterfactual explanations could provide valuable insight into complex processes by revealing causal relationships in the form of causes and effects. This approach can be applied in various domains, such as:

Legal and Justice System: AI has found its way into the legal and justice systems, helping in many ways. Counterfactual explanations can help us understand what might happen if we make different legal choices. This isn’t just useful for lawyers; it’s like having a legal advisor who can answer ‘what if’ questions. Whether it’s about figuring out the effects of various legal decisions or establishing who’s responsible in a case, AI with counterfactual explanations can be a handy tool for getting the answers we need.

Medicine and Healthcare: AI systems are widely adopted by the medical and healthcare industry. Counterfactual explanations can assist in understanding the impact of various treatments, interventions, and lifestyle changes on patient outcomes. These systems can also help doctors and nurses better understand why AI suggests specific treatments by offering alternative suggestions. This can improve decision-making and can also act as a learning tool for medical professionals.

Science and Research: AI is playing an increasingly important role in scientific discoveries across various fields, from drug discovery to genomic research and from particle physics to climate science. Counterfactual explanations can help scientists and researchers explore causality in complex systems. By manipulating variables and observing how they affect outcomes, researchers can gain a deeper understanding of cause-and-effect relationships in these fields, leading to new discoveries.

Job Hiring: Employment organizations using AI for their hiring process can provide rejected candidates with suggestions on how they can minimally improve their qualifications for future positions, increasing transparency and fairness in the hiring process.

Autonomous Cars: Counterfactual explanations can be used to build a “what-if” tool to test the efficacy of AI models in autonomous cars, ensuring their safety and reliability.

Examples of Counterfactual Explanations in the Real World

Counterfactual explanations are being used in various real-world applications, including:

Spotify’s Counterfactual Analysis: Personalized Music Recommendations
Spotify is employing counterfactual reasoning to uncover the causal impact of content recommendations on user engagement. This involves considering what might have happened had different choices been made, akin to the movie “Sliding Doors.”

Spotify’s researchers have developed a machine learning model to capture counterfactual analysis, aiming to predict the effects of different actions and improve personalized music recommendations.

Counterfactual Thinking in Drug Discovery

The University of Rochester has developed a counterfactual method called MMACE (Molecular Model Agnostic Counterfactual Explanations) to empower AI models used for drug discovery to answer counterfactual questions, such as why a molecule is predicted to permeate the blood-brain barrier, why a small molecule is predicted to be soluble, and why a molecule is expected to inhibit HIV. The key objective of this method is to help researchers gain insights into both drug discovery and the AI models used for it.

Making AI Models Robust Against Adversarial Attacks

Counterfactual reasoning is emerging as a critical technique to bolster the resilience of autonomous driving AI models against adversarial attacks. Since autonomous cars heavily depend on machine learning-driven computer vision, they are exposed to threats involving precisely manipulated images that aim to cause errors, such as tricking a car into ignoring a traffic sign.

Counterfactual reasoning allows for analyzing these vulnerabilities by asking “what if” questions and studying how AI systems respond, leading to a more profound understanding of their environment. This heightened awareness enables the detection of deceptive cues, serving as a defense against cyberattacks.

Transforming Medical Diagnosis

The use of counterfactual reasoning in AI significantly improves medical diagnostics. Unlike traditional methods that are slow and isolated, AI with counterfactual systems speeds up analysis and explains why a diagnosis is made.

A 2020 study from Babylon Health and University College London shows that AI using counterfactuals can diagnose diseases as well as human doctors. This AI explores all possible causes and outcomes, even unusual ones, which boosts its problem-solving abilities and diagnostic accuracy. It’s a big step forward in medical diagnosis.

The Bottom Line

The power of counterfactual thinking in AI decision-making is profound. It not only makes AI more transparent, accountable, and trustworthy but also provides actionable insights that can shape our future decisions and outcomes.

As AI continues to integrate into various aspects of our lives, harnessing the potential of counterfactual explanations is essential for making informed and proactive choices. It’s time to embrace the “what if” scenarios and use them as a tool for personal and societal growth in this AI-driven world.



Dr. Tehseen Zia
Tenured Associate Professor

Dr. Tehseen Zia has a doctorate and more than 10 years of post-doctoral research experience in Artificial Intelligence (AI). He is a Tenured Associate Professor who leads AI research at COMSATS University Islamabad and is a co-principal investigator at the National Center of Artificial Intelligence Pakistan. In the past, he has worked as a research consultant on the European Union-funded AI project Dream4cars.