
Be Careful When Evaluating Explanations Regarding Ground Truth (2311.04813v1)

Published 8 Nov 2023 in cs.CV and cs.LG

Abstract: Evaluating explanations of image classifiers regarding ground truth, e.g. segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves. Driven by this observation, we propose a framework for jointly evaluating the robustness of safety-critical systems that combine a deep neural network with an explanation method. These are increasingly used in real-world applications like medical image analysis or robotics. We introduce a fine-tuning procedure to (mis)align model–explanation pipelines with ground truth and use it to quantify the potential discrepancy between worst and best-case scenarios of human alignment. Experiments across various model architectures and post-hoc local interpretation methods provide insights into the robustness of vision transformers and the overall vulnerability of such AI systems to potential adversarial attacks.
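The evaluation setup the abstract describes can be made concrete with a small sketch. The score below is not claimed to be the paper's exact metric; it is a common ground-truth alignment measure (the fraction of positive attribution mass that falls inside a human segmentation mask), and the function name, tensor shapes, and example data are illustrative assumptions.

```python
import torch

def mass_inside_mask(attribution: torch.Tensor, mask: torch.Tensor) -> float:
    """Fraction of positive attribution mass inside the ground-truth
    segmentation mask (1 = perfectly human-aligned, 0 = fully misaligned).

    attribution: saliency map from a post-hoc explanation method, shape (H, W)
    mask: binary ground-truth segmentation of the object, shape (H, W)
    """
    attr = attribution.clamp(min=0)  # keep only positive evidence
    total = attr.sum()
    if total == 0:
        return 0.0  # degenerate explanation: no positive attribution at all
    return float((attr * mask).sum() / total)

# Toy example: a random saliency map scored against a square object mask.
attr = torch.rand(224, 224)
mask = torch.zeros(224, 224)
mask[60:160, 60:160] = 1.0
print(f"attribution mass inside mask: {mass_inside_mask(attr, mask):.3f}")
```

A (mis)alignment fine-tuning procedure in the spirit of the abstract would then add a differentiable version of such a score to the training objective, choosing its sign to push the model–explanation pipeline toward the best-case or worst-case human alignment.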

Authors (6)
  1. Hubert Baniecki (22 papers)
  2. Maciej Chrabaszcz (3 papers)
  3. Andreas Holzinger (26 papers)
  4. Bastian Pfeifer (8 papers)
  5. Anna Saranti (6 papers)
  6. Przemyslaw Biecek (43 papers)
Citations (2)
