
Realistic Counterfactual Explanations with Learned Relations

(arXiv:2202.07356)
Published Feb 15, 2022 in stat.ML and cs.LG

Abstract

Many existing methods for counterfactual explanation ignore the intrinsic relationships between data attributes and therefore fail to generate realistic counterfactuals. Moreover, the existing models that do account for such relationships require domain knowledge, which limits their applicability to complex real-world settings. In this paper, we propose a novel approach to realistic counterfactual explanations that preserves attribute relationships while minimising the need for expert intervention. The model learns the relationships directly with a variational auto-encoder, using minimal domain knowledge, and then learns to perturb the latent space accordingly. We conduct extensive experiments on both synthetic and real-world datasets. The results demonstrate that the proposed model learns relationships from the data and preserves them in the generated counterfactuals; in particular, it outperforms other methods in terms of Mahalanobis distance and the constraint feasibility score.
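To make the latent-space perturbation idea in the abstract concrete, here is a minimal sketch of counterfactual search through a VAE-style encoder/decoder, written in PyTorch. The encoder, decoder, classifier, dimensions, optimisation loop, and loss weighting below are illustrative assumptions (the encoder is even treated as deterministic for brevity); they are not the paper's actual architecture, objective, or training procedure.

```python
# Hypothetical sketch: search for a counterfactual by perturbing a latent code
# and decoding it, so that attribute relations captured by the decoder are kept.
import torch
import torch.nn as nn

torch.manual_seed(0)

d_in, d_latent = 8, 3  # assumed feature and latent dimensions

# Toy stand-ins; in practice these would be trained on the data.
encoder = nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, d_latent))
decoder = nn.Sequential(nn.Linear(d_latent, 16), nn.ReLU(), nn.Linear(16, d_in))
classifier = nn.Sequential(nn.Linear(d_in, 1), nn.Sigmoid())  # model to explain

x = torch.randn(1, d_in)          # factual instance
target = torch.tensor([[1.0]])    # desired prediction for the counterfactual

z = encoder(x).detach()           # latent code of the factual instance
delta = torch.zeros_like(z, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)

lam = 0.1  # trade-off: flip the prediction vs. stay close in latent space
for step in range(200):
    opt.zero_grad()
    x_cf = decoder(z + delta)     # decode perturbed latent -> candidate counterfactual
    pred_loss = nn.functional.binary_cross_entropy(classifier(x_cf), target)
    prox_loss = delta.pow(2).sum()  # small latent shift keeps the result plausible
    loss = pred_loss + lam * prox_loss
    loss.backward()
    opt.step()

x_counterfactual = decoder(z + delta).detach()
print(x_counterfactual)
```

The key design point this sketch illustrates is that the perturbation is applied in the latent space rather than directly to the input features, so the decoder, which has learned the joint structure of the attributes, is what maps the change back to feature space and keeps related attributes consistent with one another.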
