
Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models (2012.03369v2)

Published 6 Dec 2020 in cs.CV and cs.AI

Abstract: Deep neural networks excel at comprehending complex visual signals, delivering on par or even superior performance to that of human experts. However, ad-hoc visual explanations of model decisions often reveal an alarming level of reliance on exploiting non-causal visual cues that strongly correlate with the target label in training data. As such, deep neural nets suffer compromised generalization to novel inputs collected from different sources, and the reverse engineering of their decision rules offers limited interpretability. To overcome these limitations, we present a novel contrastive learning strategy called *Proactive Pseudo-Intervention* (PPI) that leverages proactive interventions to guard against image features with no causal relevance. We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability. To demonstrate the utility of our proposals, we benchmark on both standard natural images and challenging medical image datasets. PPI-enhanced models consistently deliver superior performance relative to competing solutions, especially on out-of-domain predictions and data integration from heterogeneous sources. Further, our causally trained saliency maps are more succinct and meaningful relative to their non-causal counterparts.
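
The abstract describes PPI only at a high level: identify putatively causal pixels with a saliency module, intervene on them, and train contrastively so the model's confidence depends on causal evidence rather than spurious correlates. The sketch below illustrates that general idea in PyTorch. All function names, the saliency choice (plain input gradients), the soft masking scheme, and the hinge margin are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def gradient_saliency(model, images, labels):
    # Plain input-gradient saliency normalized to [0, 1]; a simple stand-in for
    # the paper's causally informed saliency module, which is not reproduced here.
    images = images.clone().requires_grad_(True)
    score = model(images).gather(1, labels[:, None]).sum()
    grad, = torch.autograd.grad(score, images, create_graph=True)
    sal = grad.abs().mean(dim=1, keepdim=True)                      # (B, 1, H, W)
    return sal / (sal.amax(dim=(2, 3), keepdim=True) + 1e-8)


def ppi_style_loss(model, images, labels, margin=0.5):
    # Standard classification loss on the clean image.
    logits = model(images)
    cls_loss = F.cross_entropy(logits, labels)

    # "Pseudo-intervention": softly remove the pixels the saliency map marks as causal.
    saliency = gradient_saliency(model, images, labels)
    intervened = images * (1.0 - saliency)

    # Contrastive term: confidence in the true class should drop once the
    # putatively causal evidence is removed (hinge with a hypothetical margin).
    p_clean = logits.softmax(dim=1).gather(1, labels[:, None]).squeeze(1)
    p_int = model(intervened).softmax(dim=1).gather(1, labels[:, None]).squeeze(1)
    contrast_loss = F.relu(p_int - p_clean + margin).mean()

    return cls_loss + contrast_loss


if __name__ == "__main__":
    # Tiny smoke test with a throwaway CNN.
    net = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 10),
    )
    x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
    print(ppi_style_loss(net, x, y).item())
```

Because the saliency is computed with `create_graph=True`, the contrastive term also backpropagates through the saliency map itself, which loosely mirrors the paper's joint training of the saliency module and classifier.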
