Explaining the Predictions of Any Image Classifier via Decision Trees

Published 4 Nov 2019 in cs.LG, cs.AI, and stat.ML | (1911.01058v2)

Abstract: Despite their outstanding contribution to the progress of AI, deep learning models remain largely black boxes that offer little insight into their reasoning process or prediction results. Explainability is not only a gateway between AI and society but also a powerful tool for detecting flaws in the model and biases in the data. Local Interpretable Model-agnostic Explanations (LIME) is a recent approach that uses an interpretable model to form a local explanation for an individual prediction. The current implementation of LIME adopts linear regression as its interpretable function. However, because linear models are restrictive and tend to over-simplify relationships, they fail in situations where nonlinear associations and interactions exist among features and prediction results. This paper implements a decision-tree-based LIME approach (Tree-LIME), which uses a decision tree to form an interpretable representation that is locally faithful to the original model. Tree-LIME can capture nonlinear interactions among features and produces plausible explanations. Experiments on multiple black-box models show that Tree-LIME explanations achieve more reliable performance in terms of understandability, fidelity, and efficiency.
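
The abstract describes the core idea: perturb an image by switching superpixels on and off, query the black-box model on the perturbed images, and fit a local surrogate to the results, using a shallow decision tree instead of LIME's weighted linear model. The paper's own code is not shown here; the sketch below only illustrates that recipe under assumed interfaces (`predict_fn`, `segments`, and the helper name `tree_lime_explain` are illustrative, not taken from the paper).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tree_lime_explain(image, segments, predict_fn, target_class,
                      num_samples=1000, max_depth=3, kernel_width=0.25):
    """Rough Tree-LIME-style sketch (assumed interfaces, not the authors' code).

    image      : H x W x 3 array to explain
    segments   : H x W integer superpixel labels (e.g. from skimage.segmentation.slic)
    predict_fn : black-box f(batch of images) -> class-probability matrix
    """
    n_segments = int(segments.max()) + 1

    # 1. Sample binary masks: each row says which superpixels are kept.
    rng = np.random.default_rng(0)
    z = rng.integers(0, 2, size=(num_samples, n_segments))
    z[0, :] = 1  # keep the unperturbed image as the first sample

    # 2. Build perturbed images by greying out the switched-off superpixels.
    baseline = image.mean(axis=(0, 1), keepdims=True)
    perturbed = []
    for row in z:
        keep = np.isin(segments, np.flatnonzero(row))
        perturbed.append(np.where(keep[..., None], image, baseline))
    probs = predict_fn(np.stack(perturbed))[:, target_class]

    # 3. Weight samples by proximity to the original (all-superpixels-on) mask.
    distance = 1.0 - z.mean(axis=1)  # fraction of superpixels removed
    weights = np.exp(-(distance ** 2) / kernel_width ** 2)

    # 4. Fit a shallow decision tree as the local surrogate, in place of
    #    LIME's weighted linear regression.
    tree = DecisionTreeRegressor(max_depth=max_depth)
    tree.fit(z, probs, sample_weight=weights)

    # The tree's splits and feature importances rank superpixels, and the
    # tree structure itself can expose interactions a linear surrogate cannot.
    return tree, tree.feature_importances_
```

Keeping `max_depth` small is what keeps the surrogate interpretable; a deeper tree would fit the local neighborhood more closely but would be harder to read as an explanation.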

Citations (8)

