Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models (2211.10807v1)

Published 19 Nov 2022 in cs.CV and cs.AI

Abstract: This paper evaluates whether training a decision tree on concepts extracted by a concept-based explainer can increase the interpretability of Convolutional Neural Network (CNN) models and boost the fidelity and performance of the explainer. CNNs for computer vision have shown exceptional performance in critical industries; however, their complexity and lack of interpretability remain a significant barrier to deployment. Recent studies explaining computer vision models have shifted from extracting low-level features (pixel-based explanations) to mid- or high-level features (concept-based explanations). The current research direction tends to use the extracted features to build approximation algorithms, such as linear or decision tree models, that interpret the original model. In this work, we modify a state-of-the-art concept-based explanation method and propose an alternative framework named TreeICE. We design a systematic evaluation based on the requirements of fidelity (agreement of the approximate model with the original model's labels), performance (agreement of the approximate model with ground-truth labels), and interpretability (meaningfulness of the approximate model to humans). We conduct a computational evaluation (for fidelity and performance) and human subject experiments (for interpretability). We find that TreeICE outperforms the baseline in interpretability and generates more human-readable explanations in the form of a semantic tree structure. This work highlights how important it is to have more understandable explanations when interpretability is crucial.
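
The abstract outlines a two-step pipeline: factorize CNN activations into non-negative concept scores, then fit a decision tree surrogate on those scores against the CNN's predicted labels. The Python/scikit-learn sketch below shows one plausible reading of that idea; the random stand-in activations, layer shape, concept count, tree depth, and fidelity metric are illustrative assumptions, not the paper's actual implementation or settings.

```python
# Sketch of a TreeICE-style pipeline (assumptions, not the authors' code):
# (1) non-negative matrix factorization (NMF) over CNN feature-map
#     activations yields per-image concept activation scores, and
# (2) a shallow decision tree trained on those scores approximates the
#     CNN's labels, with fidelity measured as agreement with the CNN.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for post-ReLU activations from an intermediate CNN layer,
# flattened to (num_images * spatial_positions, channels). NMF requires
# the input to be non-negative, which post-ReLU activations satisfy.
n_images, n_positions, n_channels = 200, 49, 512
acts = rng.random((n_images * n_positions, n_channels)).astype(np.float32)

# Step 1: NMF decomposes activations into concept directions (H) and
# per-position concept scores (W); pooling W over spatial positions
# gives one concept-score vector per image.
n_concepts = 10
nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(acts)                          # (n_images * n_positions, n_concepts)
concept_scores = W.reshape(n_images, n_positions, n_concepts).mean(axis=1)

# Stand-in for the CNN's predicted labels on the same images; in practice
# these would come from the original model being explained.
cnn_labels = rng.integers(0, 3, size=n_images)

# Step 2: decision tree surrogate fit on concept scores against the CNN's
# labels. A small max_depth keeps the resulting tree human-readable.
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(concept_scores, cnn_labels)

# Fidelity: how closely the surrogate reproduces the CNN's own labels
# (performance would instead compare against ground-truth labels).
fidelity = accuracy_score(cnn_labels, tree.predict(concept_scores))
print(f"fidelity to CNN labels: {fidelity:.2f}")
```

Reading the fitted tree's splits over named concepts is what would produce the semantic tree structure the abstract describes; the details of concept naming and visualization are beyond this sketch.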

Citations (4)
