How should AI decisions be explained? Requirements for Explanations from the Perspective of European Law (2404.12762v2)
Abstract: This paper investigates the relationship between law and eXplainable Artificial Intelligence (XAI). While there is much discussion about the AI Act, for which the trilogue of the European Parliament, Council and Commission recently concluded, other areas of law seem underexplored. This paper focuses on European (and in part German) law, while also drawing on internationally relevant concepts and regulations such as fiduciary plausibility checks, the General Data Protection Regulation (GDPR), and product safety and liability. Based on XAI taxonomies, requirements for XAI methods are derived from each of these legal bases. The conclusion is that each legal basis demands different XAI properties and that the current state of the art does not yet satisfy them fully, especially regarding the correctness (sometimes called fidelity) and the confidence estimates of XAI methods. Published in the Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society: https://doi.org/10.1609/aies.v7i1.31648
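The abstract's emphasis on the correctness (fidelity) of explanations can be made concrete with a small numerical example. The sketch below is a generic illustration assuming scikit-learn, not a method or metric from the paper: it fits a local linear surrogate around one prediction of a black-box classifier and measures how faithfully the surrogate reproduces the black box on nearby perturbed inputs. All function and variable names are hypothetical.

```python
# Illustrative sketch (not from the paper): quantifying how faithfully a
# simple local surrogate reproduces a black-box model around one instance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_fidelity(x, n_samples=500, scale=0.3):
    """R^2 agreement between a local linear surrogate and the black box
    on perturbations around x (higher = more faithful explanation)."""
    # Perturb the instance with Gaussian noise to probe the local decision surface.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Target: the black box's predicted probability for class 1.
    p = black_box.predict_proba(Z)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(Z, p)  # interpretable local model
    return surrogate.score(Z, p), surrogate.coef_  # fidelity score, feature weights

fid, weights = local_fidelity(X[0])
print(f"local surrogate fidelity (R^2): {fid:.2f}")
```

A low fidelity score would indicate that the reported feature weights cannot be trusted as an account of the black box's behaviour, which is the kind of quantitative check that the correctness requirements discussed in the paper point towards.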
- Advocate General Pikamäe. 16.03.2023. SCHUFA Holding (Scoring). Opinion, C‑634/21, ECLI:EU:C:2023:220.
- Aws Albarghouthi. 2021. Introduction to Neural Network Verification. http://arxiv.org/pdf/2109.10317.pdf
- David Alvarez-Melis and Tommi S. Jaakkola. 2018. Towards Robust Interpretability with Self-Explaining Neural Networks. http://arxiv.org/pdf/1806.07538.pdf
- BGH. 20.09.2011. ISION. Judgement, II ZR 234/09. ZIP 2011, 2097.
- Towards eXplainable Artificial Intelligence (XAI) in Tax Law: The Need for a Minimum Legal Standard.
- Model Reporting for Certifiable AI: A Proposal from Merging EU Regulation into AI Development. http://arxiv.org/pdf/2307.11525.pdf
- Annika Buchholz and Elena Dubovitskaya. 2023. Die Geschäftsleitung und der Rat des Algorithmus. ZIP (2023), 63–73.
- Nadia Burkart and Marco F. Huber. 2021. A Survey on the Explainability of Supervised Machine Learning. Journal of Artificial Intelligence Research 70 (2021), 245–317. https://doi.org/10.1613/jair.1.12228
- Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders. In CHI 2019 - Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (Conference on Human Factors in Computing Systems - Proceedings). Association for Computing Machinery, New York, NY, USA, 1–12. https://doi.org/10.1145/3290605.3300789
- CJEU. 07.12.2023. SCHUFA Holding (Scoring). Judgement, C‑634/21, ECLI:EU:C:2023:957.
- Understanding Global Feature Contributions With Additive Importance Measures. http://arxiv.org/pdf/2004.00668.pdf
- Alfred R. Cowger Jr. 2022–2023. Corporate Fiduciary Duty in the Age of Algorithms. Case Western Reserve Journal of Law, Technology & the Internet 14 (2022–2023), 136–207.
- A Critical Survey on Fairness Benefits of XAI. https://arxiv.org/pdf/2310.13007.pdf
- Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act? https://arxiv.org/pdf/2302.10766.pdf
- Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. 2023. Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Frontiers in Computer Science 5 (2023), 1096257. https://doi.org/10.3389/fcomp.2023.1096257
- Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viégas, and Rory Sayres. 2018. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In Proceedings of the 35th International Conference on Machine Learning (July 10-15, 2018) (Proceedings of Machine Learning Research, Vol. 80), Jennifer G. Dy and Andreas Krause (Eds.). PMLR, Stockholmsmässan, Stockholm, Sweden, 2673–2682. http://proceedings.mlr.press/v80/kim18d.html
- Problems with Shapley-value-based explanations as feature importance measures. https://arxiv.org/pdf/2002.11097.pdf
- What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence (2021), 103473. https://doi.org/10.1016/j.artint.2021.103473
- Scott M. Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS’17). Curran Associates Inc., Red Hook, NY, USA, 4768–4777.
- Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (Hamburg, Germany) (CHI ’23). Association for Computing Machinery, New York, NY, USA, Article 759, 19 pages. https://doi.org/10.1145/3544548.3581058
- Reclaiming transparency: contesting the logics of secrecy within the AI Act. European Law Open 2 (2022), 1–27. https://doi.org/10.1017/elo.2022.47
- Luke Merrick and Ankur Taly. 2019. The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory. https://arxiv.org/pdf/1909.08128.pdf
- Tim Miller. 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence 267 (2019), 1–38. https://doi.org/10.1016/j.artint.2018.07.007
- Tim Miller, Piers Howe, and Liz Sonenberg. 2017. Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences. http://arxiv.org/pdf/1712.00547v2
- Christoph Molnar. 2022. Interpretable Machine Learning (2 ed.). https://christophm.github.io/interpretable-ml-book
- Florian Möslein. 2018. Digitalisierung im Gesellschaftsrecht: Unternehmensleitung durch Algorithmen und künstliche Intelligenz? ZIP (2018), 204–212.
- From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI. ACM Comput. Surv. 55, 13s, Article 295 (July 2023), 42 pages. https://doi.org/10.1145/3583558
- Towards Interpretable ANNs: An Exact Transformation to Multi-Class Multivariate Decision Trees. https://doi.org/10.48550/arXiv.2003.04675
- Martin Petrin. 2019. Corporate Management in the Age of AI. Columbia Business Law Review 3 (2019), 965–1030.
- "Why Should I Trust You?". In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Balaji Krishnapuram, Mohak Shah, Alex Smola, Charu Aggarwal, Dou Shen, and Rajeev Rastogi (Eds.). ACM, New York, NY, USA, 1135–1144. https://doi.org/10.1145/2939672.2939778
- Hierarchical confounder discovery in the experiment-machine learning cycle. Patterns 3, 4 (2022), 100451. https://doi.org/10.1016/j.patter.2022.100451
- Cynthia Rudin. 2018. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. https://arxiv.org/pdf/1811.10154.pdf
- Waddah Saeed and Christian Omlin. 2023. Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowledge-Based Systems 263 (2023), 110273. https://doi.org/10.1016/j.knosys.2023.110273
- The Tower of Babel in Explainable Artificial Intelligence (XAI). In Machine Learning and Knowledge Extraction (Lecture Notes in Computer Science), Andreas Holzinger, Peter Kieseberg, Federico Cabitza, Andrea Campagner, A. Min Tjoa, and Edgar Weippl (Eds.). Springer Nature Switzerland and Imprint Springer, Cham, 65–81. https://doi.org/10.1007/978-3-031-40837-3_5
- Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2019. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. International Journal of Computer Vision (2019), 336–359. https://doi.org/10.1007/s11263-019-01228-7
- Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. 2017. Membership Inference Attacks against Machine Learning Models. http://arxiv.org/pdf/1610.05820.pdf
- Mukund Sundararajan and Amir Najmi. 2020. The Many Shapley Values for Model Explanation. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 119), Hal Daumé III and Aarti Singh (Eds.). JMLR.org, 9269–9278. https://proceedings.mlr.press/v119/sundararajan20b.html
- William R. Swartout and Johanna D. Moore. 1993. Second Generation Expert Systems. https://doi.org/10.1007/978-3-642-77927-5
- Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems. http://arxiv.org/pdf/1806.07552.pdf
- Sanity Checks for Saliency Metrics. https://arxiv.org/pdf/1912.01451
- Evaluating Feature Relevance XAI in Network Intrusion Detection. In Explainable Artificial Intelligence (Communications in Computer and Information Science), Luca Longo (Ed.). Springer Nature Switzerland and Imprint Springer, Cham, 483–497. https://doi.org/10.1007/978-3-031-44064-9_25
- The effects of explanations on automation bias. Artificial Intelligence 322 (2023), 103952. https://doi.org/10.1016/j.artint.2023.103952
- Apostolos Vorras and Lilian Mitrou. 2021. Unboxing the Black Box of Artificial Intelligence: Algorithmic Transparency and/or a Right to Functional Explainability. In EU Internet Law in the Digital Single Market, Tatiana-Eleni Synodinou, Philippe Jougleux, Christiana Markou, and Thalia Prastitou-Merdi (Eds.). Springer International Publishing, Cham, 247–264. https://doi.org/10.1007/978-3-030-69583-5_10
- Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2018. Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology 31 (2018), 841–887. https://doi.org/10.2139/ssrn.3063289
- Sandra Wachter and Brent Daniel Mittelstadt. 2018. A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI. Columbia Business Law Review 2019 (2018), 494–620. https://api.semanticscholar.org/CorpusID:226950761
- Sandra Wachter, Brent Mittelstadt, and Luciano Floridi. 2017. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. International Data Privacy Law 7 (2017), 76–99. https://doi.org/10.2139/SSRN.2903469
- Joyce Zhou and Thorsten Joachims. 2023. How to Explain and Justify Almost Any Decision: Potential Pitfalls for Accountability in AI Decision-Making. In 2023 ACM Conference on Fairness, Accountability, and Transparency. ACM, New York, NY, USA, 12–21. https://doi.org/10.1145/3593013.3593972