
Abstract

Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these stakeholders' desiderata) in a variety of contexts. However, the literature on XAI is vast, spread out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve researchers from the variety of different disciplines involved in XAI as a common ground. It emphasizes where there is interdisciplinary potential in the evaluation and the development of explainability approaches.

Overview

  • XAI aims at creating AI systems that are understandable and transparent to diverse human stakeholders.

  • Understanding is essential in XAI, serving as a bridge between explanations and stakeholder objectives, like trust and fairness.

  • Different stakeholders require personalized explanatory information that suits their expertise and context.

  • Explainability approaches should be selected based on stakeholder needs, with options ranging from ante-hoc to post-hoc explanations.

  • An interdisciplinary approach is recommended, incorporating psychology, philosophy, law, and computer science to address stakeholder desiderata.

Understanding Explainable AI

Stakeholder Desiderata in XAI

Explainable Artificial Intelligence (XAI) primarily aims at developing AI systems that are transparent and understandable to the humans who interact with or are affected by them. A substantial body of XAI research has revolved around creating novel methods without necessarily considering whether these methods effectively meet the needs and expectations of the different stakeholders involved with AI systems. Recognizing these stakeholders' varying interests, goals, and demands is crucial, as these are what drive the push for explainability in artificial systems. The stakeholders may include users, developers, parties affected by AI decisions, deployers, and regulators, each with their own distinct set of "desiderata", or desired outcomes.
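
As a purely illustrative sketch of this structure, the snippet below maps stakeholder classes to example desiderata; the specific pairings are placeholder assumptions built from desiderata the summary mentions, not the paper's formal model.

```python
# Illustrative placeholder mapping from stakeholder classes to example desiderata.
# The pairings are assumptions for demonstration, not the paper's formal model.
STAKEHOLDER_DESIDERATA = {
    "user": {"trust", "understanding"},
    "developer": {"understanding"},
    "affected party": {"fairness"},
    "deployer": {"trust", "legality"},
    "regulator": {"legality", "fairness"},
}

def desiderata_for(role):
    """Return the example desiderata recorded for a stakeholder role (empty set if unknown)."""
    return STAKEHOLDER_DESIDERATA.get(role, set())

print(desiderata_for("regulator"))  # e.g. {'legality', 'fairness'}
```

The point of such a mapping is simply that the same system may owe different things to different stakeholder classes, which is what an evaluation of explainability approaches has to account for.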

Promoting Understanding Through Explainability

One of the paper's main contributions is its focus on the role of understanding in satisfying stakeholder desiderata. Understanding is postulated as a mediator that bridges the gap between the information provided by an explainability approach and the achievement of a stakeholder's objectives. It is not merely a desirable outcome in itself but a vehicle for achieving varied stakeholder-specific aims, such as fairness, trust, or legality in the application of AI systems.

The Nature of Explanatory Information

Explanatory information is central to fostering understanding. However, different stakeholders may require different forms and depths of explanation, tailored to their level of expertise and the context of use. For example, a novice user and an AI developer might derive understanding from different types of explanations, with divergent effects on their respective desiderata. The nature and presentation of explanatory information (whether statistical, contrastive, or causal) are therefore pivotal for achieving the degree of understanding each stakeholder needs.
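
As a minimal, hypothetical sketch (a hand-coded linear scorer, not any model or method from the paper), the snippet below contrasts two of these forms for the same decision: a statistical, attribution-style explanation and a contrastive one that asks what minimal change would flip the outcome.

```python
# Hypothetical toy example: a hand-coded linear scorer standing in for an AI system.
# The features, weights, and threshold are illustrative, not taken from the paper.
WEIGHTS = {"income": 0.6, "debt": -0.8, "age": 0.1}
THRESHOLD = 0.5

def score(x):
    """Linear score of an input given as a feature -> value dict."""
    return sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def attribution_explanation(x):
    """Statistical-style explanation: each feature's contribution to the score."""
    return {f: WEIGHTS[f] * x[f] for f in WEIGHTS}

def contrastive_explanation(x, step=0.05, max_steps=200):
    """Contrastive-style explanation: the smallest single-feature change that
    flips the decision ("why this outcome rather than the other one?")."""
    decision = score(x) >= THRESHOLD
    best = None
    for f, w in WEIGHTS.items():
        if w == 0:
            continue
        # Move the feature in the direction that pushes the score across the threshold.
        direction = 1 if (w > 0) != decision else -1
        for k in range(1, max_steps + 1):
            delta = direction * step * k
            candidate = dict(x, **{f: x[f] + delta})
            if (score(candidate) >= THRESHOLD) != decision:
                if best is None or abs(delta) < abs(best[1]):
                    best = (f, delta)
                break
    return best  # e.g. ("debt", -0.8): "had debt been 0.8 lower, the decision would flip"

applicant = {"income": 0.4, "debt": 0.5, "age": 0.3}
print(attribution_explanation(applicant))   # per-feature contributions
print(contrastive_explanation(applicant))   # minimal single-feature change
```

Both explanations describe the same decision, but they foreground different information and may therefore serve different stakeholders' desiderata.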

Selecting and Developing Explainability Approaches

The development and selection of explainability approaches require careful consideration of the types of explanations they generate and their relevance to specific stakeholder needs. Common approaches fall into two broad classes: ante-hoc approaches, in which the system is inherently interpretable by design, and post-hoc approaches, which generate explanations for an already trained model after the fact. The chosen approach should align with the type of explanatory information required to advance stakeholders' understanding in accordance with their unique set of desiderata.
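
A minimal sketch of this distinction, assuming scikit-learn and an illustrative dataset (neither is prescribed by the paper): a shallow decision tree serves as an ante-hoc, inherently interpretable model, while permutation feature importance provides a post-hoc explanation of a less transparent random forest.

```python
# Sketch of ante-hoc vs. post-hoc explainability with scikit-learn.
# Dataset and models are illustrative choices, not taken from the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Ante-hoc: a shallow decision tree is explainable by construction --
# its learned rules can be printed and read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))

# Post-hoc: a less transparent model is explained after the fact,
# here via permutation feature importance on held-out data.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(
    zip(data.feature_names, result.importances_mean), key=lambda p: -p[1]
)[:5]:
    print(f"{name}: {importance:.3f}")
```

Which of the two is appropriate in a given case depends on which form of explanatory information the relevant stakeholders need.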

Interdisciplinary Opportunities and Insights

The paper emphasizes the interdisciplinary potential in addressing explainability in AI. It calls for psychologists to design empirical studies, philosophers to provide definitions and conceptual guidelines, legal experts to articulate normative and regulatory constraints, and computer scientists to innovate at the technical frontier. This collaborative effort is seen as key to comprehensively addressing stakeholders' desiderata and thereby yielding AI systems that are explainable, transparent, and ultimately more trustworthy.

In summary, XAI research must converge on providing explanations that enhance understanding and thereby satisfy the broad landscape of stakeholders' needs. This calls for an iterative, empirical process of evaluating and refining explainability approaches, enriched by insights from diverse academic and practical disciplines, so that as AI systems grow more complex, their workings remain accessible and comprehensible to all who are affected by them.
