
Manifold-based Shapley for SAR Recognization Network Explanation (2401.03128v1)

Published 6 Jan 2024 in cs.AI

Abstract: Explainable artificial intelligence (XAI) holds immense significance in enhancing a deep neural network's transparency and credibility, particularly in risky, high-cost scenarios such as synthetic aperture radar (SAR). Shapley is a game-theoretic explanation technique with robust mathematical foundations. However, Shapley assumes that the model's features are independent, rendering Shapley explanations invalid for high-dimensional models. This study introduces a manifold-based Shapley method that projects high-dimensional features into low-dimensional manifold features and subsequently obtains Fusion-Shap, which aims to (1) address the erroneous explanations produced by traditional Shap and (2) resolve the interpretability challenges traditional Shap faces in complex scenarios.
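The abstract describes computing Shapley attributions over a small set of manifold coordinates rather than raw high-dimensional pixels. As a rough illustration of the underlying game-theoretic machinery, the sketch below computes exact Shapley values for a model with a handful of input features; it is a minimal, hypothetical example, not the paper's Fusion-Shap method (the manifold projection step, e.g. via an encoder, is omitted, and all function names are illustrative). In the paper's setting, the "features" here would correspond to low-dimensional manifold features.

```python
import itertools
import math

def _masked(model, x, baseline, keep):
    """Evaluate the model with features outside `keep` replaced by the baseline."""
    z = [x[j] if j in keep else baseline[j] for j in range(len(x))]
    return model(z)

def shapley_values(model, x, baseline, n_features):
    """Exact Shapley values for a small number of features.

    For each feature i, average the marginal contribution of adding i
    over every subset S of the remaining features, weighted by
    |S|! * (n - |S| - 1)! / n!.
    """
    values = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for size in range(len(others) + 1):
            for subset in itertools.combinations(others, size):
                w = (math.factorial(size)
                     * math.factorial(n_features - size - 1)
                     / math.factorial(n_features))
                with_i = _masked(model, x, baseline, set(subset) | {i})
                without_i = _masked(model, x, baseline, set(subset))
                values[i] += w * (with_i - without_i)
    return values

# Toy usage: for a linear model the Shapley values recover the
# per-feature contributions, and they sum to f(x) - f(baseline).
model = lambda z: 2 * z[0] + 3 * z[1]
vals = shapley_values(model, [1.0, 1.0], [0.0, 0.0], 2)
# vals ≈ [2.0, 3.0]
```

The exact computation costs O(2^n) model evaluations, which is why it is only feasible over a low-dimensional representation such as the manifold features the paper proposes; practical high-dimensional settings rely on sampling-based approximations.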
