Abstract

During the last decade, deep neural networks (DNNs) have demonstrated impressive performance in solving a wide range of problems across domains such as medicine, finance, and law. Despite this performance, they have long been considered black-box systems, producing good results without being able to explain them. However, the inability to explain a system's decisions presents a serious risk in critical domains such as medicine, where people's lives are at stake. Several works have sought to uncover the inner reasoning of deep neural networks. Saliency methods explain model decisions by assigning weights to input features that reflect their contribution to the classifier's decision. However, not all features are necessary to explain a model decision: in practice, a classifier may rely strongly on a subset of features that is sufficient to explain a particular decision. The aim of this article is to propose a method that simplifies the prediction explanations of One-Dimensional (1D) Convolutional Neural Networks (CNNs) by identifying sufficient and necessary feature sets. We also propose an adaptation of Layer-wise Relevance Propagation (LRP) for 1D-CNNs. Experiments carried out on multiple datasets show that the distribution of relevance among features is similar to that obtained with a well-known state-of-the-art model. Moreover, the extracted sufficient and necessary feature sets appear perceptually convincing to humans.
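The abstract does not spell out the propagation rules used in the 1D-CNN adaptation, but the general mechanics of Layer-wise Relevance Propagation can be illustrated concretely. Below is a minimal sketch of the standard LRP epsilon rule applied to a PyTorch Conv1d layer; the function name lrp_epsilon_conv1d and the choice of the epsilon rule are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def lrp_epsilon_conv1d(layer, a, R, eps=1e-6):
    """Redistribute output relevance R onto the inputs of a Conv1d layer
    using the LRP epsilon rule (illustrative sketch, not the paper's code).

    layer : torch.nn.Conv1d
    a     : input activations, shape (batch, in_channels, length)
    R     : relevance of the layer outputs, shape (batch, out_channels, out_length)
    """
    a = a.clone().detach().requires_grad_(True)

    # Forward pass: z_k = sum_j a_j * w_jk + b_k
    z = F.conv1d(a, layer.weight, layer.bias,
                 stride=layer.stride, padding=layer.padding,
                 dilation=layer.dilation)

    # Epsilon stabilizer keeps the denominator away from zero
    z = z + eps * torch.where(z >= 0, torch.ones_like(z), -torch.ones_like(z))

    # s_k = R_k / z_k, then c_j = sum_k w_jk * s_k via autograd ("gradient trick")
    s = (R / z).detach()
    (z * s).sum().backward()

    # R_j = a_j * c_j : relevance assigned to each input position and channel
    return (a * a.grad).detach()
```

Starting from the relevance of the classifier's output (typically the logit of the predicted class), a rule of this kind would be applied layer by layer back to the input, yielding one relevance score per time step of the 1D signal; the sufficient and necessary feature sets described in the abstract would then be selected from these scores.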
