Good Counterfactuals and Where to Find Them: A Case-Based Technique for Generating Counterfactuals for Explainable AI (XAI) (2005.13997v1)

Published 26 May 2020 in cs.AI

Abstract: Recently, a groundswell of research has identified the use of counterfactual explanations as a potentially significant solution to the Explainable AI (XAI) problem. It is argued that (a) technically, these counterfactual cases can be generated by permuting problem-features until a class change is found, (b) psychologically, they are much more causally informative than factual explanations, (c) legally, they are GDPR-compliant. However, there are issues around the finding of good counterfactuals using current techniques (e.g. sparsity and plausibility). We show that many commonly-used datasets appear to have few good counterfactuals for explanation purposes. So, we propose a new case based approach for generating counterfactuals using novel ideas about the counterfactual potential and explanatory coverage of a case-base. The new technique reuses patterns of good counterfactuals, present in a case-base, to generate analogous counterfactuals that can explain new problems and their solutions. Several experiments show how this technique can improve the counterfactual potential and explanatory coverage of case-bases that were previously found wanting.

Citations (136)

Summary

  • The paper proposes a twin-systems approach to generate sparse and plausible counterfactuals using case-based reasoning.
  • It validates the technique across 20 datasets, demonstrating improved explanatory coverage for opaque deep learning models.
  • The method measures the counterfactual potential of a case-base and reuses its good counterfactual pairs, enhancing transparency in AI predictions.

Case-Based Counterfactual Generation for Explainable AI

This paper addresses the application of Case-Based Reasoning (CBR) to the generation of counterfactual explanations within the domain of Explainable AI (XAI). Recent XAI literature suggests counterfactual explanations are preferable to factual ones because they are more causally informative and better aligned with legal requirements such as the GDPR. Yet generating "good" counterfactuals is often difficult: a good counterfactual should be sparse (differing from the query in only a few features) and plausible (resembling real cases from the domain) while still crossing the decision boundary.

Introduction

The authors contextualize the paper within the historical role of CBR in providing explanations that mirror human reasoning from precedents. Traditional CBR approaches explain with factual cases, whereas this paper focuses on counterfactual cases (nearest unlike neighbors) that show what would have to change for a prediction to differ. The authors document the scarcity of good counterfactuals in existing CBR case-bases and propose a case-based methodology, deployed in a twin-systems arrangement, to enhance the explanatory competence of opaque deep learning models.

Methodology

The paper introduces the notion of counterfactual potential in case-bases, measured by assessing the proportion of case pairs that qualify as good counterfactuals (pairs belonging to different classes that differ in only a few features). An examination of 20 datasets reveals that existing case-bases contain few counterfactuals that meet this criterion. Consequently, the authors propose a novel technique that leverages the structure of good counterfactuals within case-bases to create new explanations for novel queries.
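
To make this measure concrete, the sketch below estimates counterfactual potential as the fraction of cases whose nearest unlike neighbour differs from them in at most a couple of features. It is a minimal illustration assuming a numeric, tabular case-base; the Euclidean distance, the two-feature threshold, and the function names are assumptions for this example, not necessarily the paper's exact settings.

```python
import numpy as np

def feature_diff_count(a, b, tol=1e-9):
    """Number of features on which two cases differ (numeric features assumed)."""
    return int(np.sum(np.abs(np.asarray(a) - np.asarray(b)) > tol))

def counterfactual_potential(X, y, max_diffs=2):
    """
    Fraction of cases that have a 'good' counterfactual in the case-base:
    a nearest unlike neighbour (different class label) differing in at most
    `max_diffs` features. The threshold and metric are illustrative choices.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n_good = 0
    for i in range(len(X)):
        unlike = np.where(y != y[i])[0]          # cases from other classes
        if unlike.size == 0:
            continue
        dists = np.linalg.norm(X[unlike] - X[i], axis=1)   # assumed metric
        nun = unlike[np.argmin(dists)]           # nearest unlike neighbour
        if feature_diff_count(X[i], X[nun]) <= max_diffs:
            n_good += 1
    return n_good / len(X)
```

A low value of this score indicates that few cases can be explained with naturally occurring counterfactuals, which is the situation the generation technique below is designed to remedy.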

Case-Based Counterfactual Generation Technique

The proposed technique involves the following steps (a code sketch of the full loop appears after the list):

  1. Identifying explanation cases (XCs) from the case-base that serve as clues for generating new counterfactuals.
  2. Building new counterfactuals by transferring values from the XC to the query case while preserving the sparsity and plausibility of the modifications.
  3. Utilizing the underlying ML model to validate the class change.
  4. Adapting if necessary, by evaluating nearest neighbors to achieve valid class changes when initial candidate counterfactuals fail.
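
The sketch below follows these four steps under stated assumptions: a numeric tabular case-base, a trained classifier with a scikit-learn-style `predict` method, and a "good counterfactual" defined as a cross-class pair differing in at most two features. The distance metric, the threshold, the adaptation policy, and all names here are illustrative, not the paper's exact specification.

```python
import numpy as np

def generate_counterfactual(query, X, y, model, max_diffs=2, k_adapt=5):
    """
    Case-based counterfactual generation (illustrative sketch):
    1. find an explanation case (XC): a cross-class pair in the case-base
       differing in <= max_diffs features, whose same-class member is
       closest to the query;
    2. build a candidate by copying the XC's difference-feature values
       onto the query, leaving all other query features unchanged;
    3. validate the class change with the underlying ML model;
    4. if validation fails, adapt by borrowing the difference-feature
       values of near neighbours from the counterfactual class.
    """
    query = np.asarray(query, dtype=float)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    query_class = model.predict(query.reshape(1, -1))[0]

    # 1. Find the best explanation case (naive O(n^2) search for clarity).
    best_pair, best_dist = None, np.inf
    for i in range(len(X)):
        if y[i] != query_class:
            continue
        for j in range(len(X)):
            if y[j] == query_class:
                continue
            diff = np.where(np.abs(X[i] - X[j]) > 1e-9)[0]
            if len(diff) == 0 or len(diff) > max_diffs:
                continue
            d = np.linalg.norm(X[i] - query)
            if d < best_dist:
                best_pair, best_dist = (i, j, diff), d
    if best_pair is None:
        return None  # no usable XC pattern in the case-base
    i, j, diff = best_pair

    # 2. Transfer the XC's difference-feature values onto the query.
    candidate = query.copy()
    candidate[diff] = X[j, diff]

    # 3. Validate the class change with the underlying model.
    if model.predict(candidate.reshape(1, -1))[0] != query_class:
        return candidate

    # 4. Adapt: try the same features taken from nearby counterfactual-class cases.
    unlike = np.where(y != query_class)[0]
    order = unlike[np.argsort(np.linalg.norm(X[unlike] - query, axis=1))]
    for idx in order[:k_adapt]:
        candidate = query.copy()
        candidate[diff] = X[idx, diff]
        if model.predict(candidate.reshape(1, -1))[0] != query_class:
            return candidate
    return None
```

Because only the XC's difference features are modified, sparsity is preserved by construction, and plausibility is encouraged by drawing the substituted values from real cases rather than arbitrary perturbations.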

Experimental Findings

Two experiments underpin the paper. The first maps counterfactual potential across 20 standard datasets, demonstrating the paucity of naturally occurring good counterfactuals. The second evaluates the proposed technique across those datasets, showing substantial improvements in explanatory coverage once synthetic counterfactuals are generated. The adaptation step further reduces counterfactual distance, which proves critical for producing counterfactuals that remain close to the query while still changing the predicted class.

Implications and Future Work

The work represents a considerable advance in leveraging CBR for counterfactual XAI, addressing the sparsity and plausibility limitations prevalent in perturbation-based approaches. The paper emphasizes the need for explanation competence, a counterpart to predictive competence, and offers a systematic, empirically supported approach to improving the utility of counterfactual explanations in AI systems.

Future work should incorporate extensive user trials to better understand psychological aspects influencing the effectiveness of explanations and explore the applicability across a wider range of datasets and domains. Overall, the application of CBR for generating counterfactuals offers promising directions for improving transparency in complex AI systems.
