
A psychophysics approach for quantitative comparison of interpretable computer vision models (1912.05011v1)

Published 24 Nov 2019 in cs.CV, cs.AI, and cs.HC

Abstract: The field of transparent Machine Learning (ML) has contributed many novel methods aiming at better interpretability for computer vision and ML models in general. But how useful the explanations provided by transparent ML methods are for humans remains difficult to assess. Most studies evaluate interpretability through qualitative comparisons, use experimental paradigms that do not allow for direct comparisons among methods, or report only offline experiments with no humans in the loop. While evaluations with no humans in the loop have clear advantages, such as scalability, reproducibility, and less algorithmic bias than human-in-the-loop evaluations, these metrics are of limited use if we do not understand how they relate to other metrics that take human cognition into account. Here we investigate the quality of interpretable computer vision algorithms using techniques from psychophysics. In crowdsourced annotation tasks we study the impact of different interpretability approaches on annotation accuracy and task time. To relate these findings to quality measures for interpretability without humans in the loop, we compare quality metrics with and without humans in the loop. Our results demonstrate that psychophysical experiments allow for robust quality assessment of transparency in machine learning. Interestingly, the quality metrics computed without humans in the loop neither provided a consistent ranking of interpretability methods nor were representative of how useful an explanation was for humans. These findings highlight the potential of methods from classical psychophysics for modern machine learning applications. We hope that our results provide convincing arguments for evaluating interpretability in its natural habitat, human-ML interaction, if the goal is an authentic assessment of interpretability.
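The core comparison in the abstract, whether offline interpretability metrics rank methods the same way human-in-the-loop measurements do, can be framed as a rank-correlation check. Below is a minimal sketch of that idea; the method names, scores, and the choice of Spearman correlation are illustrative assumptions, not the paper's actual data or protocol.

```python
# Hypothetical sketch: do offline interpretability metrics rank methods
# the same way a human-in-the-loop metric (annotation accuracy) does?
# All method names and scores below are made up for illustration.
from scipy.stats import spearmanr

methods = ["saliency", "grad_cam", "lime", "occlusion"]

# Assumed per-method scores; higher is better for both metrics.
human_accuracy = {"saliency": 0.71, "grad_cam": 0.83, "lime": 0.78, "occlusion": 0.66}
offline_metric = {"saliency": 0.62, "grad_cam": 0.58, "lime": 0.74, "occlusion": 0.69}

human_scores = [human_accuracy[m] for m in methods]
offline_scores = [offline_metric[m] for m in methods]

# Spearman's rho compares the two rankings, not the raw score values.
rho, p_value = spearmanr(human_scores, offline_scores)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.2f})")
# A low or negative rho would mirror the paper's finding that offline
# metrics do not consistently reproduce the human-derived ranking.
```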

Citations (6)
