The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets (2009.11023v2)

Published 23 Sep 2020 in cs.CL

Abstract: For neural models to garner widespread public trust and ensure fairness, we must have human-intelligible explanations for their predictions. Recently, an increasing number of works focus on explaining the predictions of neural models in terms of the relevance of the input features. In this work, we show that feature-based explanations pose problems even for explaining trivial models. We show that, in certain cases, there exist at least two ground-truth feature-based explanations, and that, sometimes, neither of them is enough to provide a complete view of the decision-making process of the model. Moreover, we show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations, despite the apparently implicit assumption that explainers should look for one specific feature-based explanation. These findings bring an additional dimension to consider in both developing and choosing explainers.

Authors (5)
  1. Oana-Maria Camburu (29 papers)
  2. Eleonora Giunchiglia (17 papers)
  3. Jakob Foerster (101 papers)
  4. Thomas Lukasiewicz (125 papers)
  5. Phil Blunsom (87 papers)
Citations (23)

Summary

We haven't generated a summary for this paper yet.