
User Decision Guidance with Selective Explanation Presentation from Explainable-AI (2402.18016v3)

Published 28 Feb 2024 in cs.HC and cs.AI

Abstract: This paper addresses the challenge of selecting explanations for XAI (Explainable AI)-based Intelligent Decision Support Systems (IDSSs). IDSSs have shown promise in improving user decisions by presenting XAI-generated explanations alongside AI predictions, and advances in XAI have made it possible to generate a variety of such explanations. However, how IDSSs should select explanations to enhance user decision-making remains an open question. This paper proposes X-Selector, a method for selectively presenting XAI explanations. It enables IDSSs to strategically guide users toward an AI-suggested decision by predicting the impact of different combinations of explanations on a user's decision and selecting the combination expected to minimize the discrepancy between the AI suggestion and the user's decision. We compared the efficacy of X-Selector with two naive strategies (all possible explanations, and explanations only for the most likely prediction) and two baselines (no explanation and no AI support). The results suggest the potential of X-Selector to guide users toward AI-suggested decisions and improve task performance under conditions of high AI accuracy.
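The selection strategy described in the abstract — scoring each candidate combination of explanations by its predicted effect on the user and choosing the one that minimizes the expected gap to the AI's suggestion — can be sketched as a simple search over explanation subsets. This is an illustrative sketch, not the paper's implementation: the function names (`select_explanations`, and the caller-supplied `predict_user_decision` and `discrepancy`) are hypothetical stand-ins for the user model and loss the paper would use.

```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of the available explanations, including the empty set."""
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def select_explanations(explanations, ai_decision, predict_user_decision, discrepancy):
    """Pick the combination of explanations whose predicted user decision
    is closest to the AI-suggested decision.

    predict_user_decision: maps an explanation combination to a predicted decision
    discrepancy: maps (predicted decision, AI decision) to a non-negative score
    """
    best_combo, best_gap = (), float("inf")
    for combo in powerset(explanations):
        predicted = predict_user_decision(combo)
        gap = discrepancy(predicted, ai_decision)
        if gap < best_gap:
            best_combo, best_gap = combo, gap
    return best_combo
```

Note that exhaustive enumeration is exponential in the number of candidate explanations; a practical system would either keep the candidate pool small or score combinations with a learned user model in batch.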

