LLMs for XAI: Future Directions for Explaining Explanations (2405.06064v1)

Published 9 May 2024 in cs.AI, cs.CL, cs.HC, and cs.LG

Abstract: In response to the demand for Explainable Artificial Intelligence (XAI), we investigate the use of LLMs to transform ML explanations into natural, human-readable narratives. Rather than directly explaining ML models using LLMs, we focus on refining explanations computed using existing XAI algorithms. We outline several research directions, including defining evaluation metrics, prompt design, comparing LLMs, exploring further training methods, and integrating external data. Initial experiments and a user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI.
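
To make the pipeline the abstract describes concrete, here is a minimal sketch of one plausible instantiation: feature attributions are computed with an existing XAI algorithm (SHAP is used here as a stand-in) and then passed to an LLM, which rewrites them as a plain-language narrative. The dataset, the model name "gpt-4o", and the prompt wording are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: SHAP attributions -> LLM-generated narrative explanation.
# Assumes the `shap`, `scikit-learn`, and `openai` packages are installed,
# and that OPENAI_API_KEY is set in the environment.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from openai import OpenAI

# Fit a simple model and compute SHAP attributions for one prediction.
data = load_diabetes(as_frame=True)
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # shape: (1, n_features)

# Serialize the raw explanation into text so it can go into the prompt.
attribution_text = "\n".join(
    f"{name}: {value:+.3f}"
    for name, value in zip(X.columns, shap_values[0])
)

# Ask the LLM to narrate the precomputed explanation, not to explain the
# model directly; the system prompt is an illustrative assumption.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice; the paper proposes comparing LLMs
    messages=[
        {
            "role": "system",
            "content": (
                "You translate SHAP feature attributions into short, faithful, "
                "plain-language explanations for a non-technical reader. Do not "
                "invent features or numbers that are not in the input."
            ),
        },
        {
            "role": "user",
            "content": (
                f"Predicted value: {model.predict(X.iloc[[0]])[0]:.2f}\n"
                f"SHAP attributions:\n{attribution_text}"
            ),
        },
    ],
)
print(response.choices[0].message.content)
```

Feeding the serialized attributions into the prompt, rather than asking the LLM to explain the model on its own, mirrors the paper's stated focus: refining explanations produced by existing XAI algorithms instead of using LLMs as explainers themselves.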
