Emergent Mind

Trust in AI: Progress, Challenges, and Future Directions

(2403.14680)
Published Mar 12, 2024 in cs.CY and cs.AI

Abstract

The increasing use of AI systems in daily life through various applications, services, and products underscores the significance of trust and distrust in AI from a user perspective. Unlike earlier technologies, AI-driven systems have diffused ubiquitously into our lives not only as beneficial tools used by human agents but also as substitutive agents acting on our behalf, or as manipulative systems that can influence human thought, decision, and agency. Trust and distrust in AI act as regulators of this diffusion: trust can accelerate, and distrust can slow, the rate of AI adoption. Recent studies have examined the various dimensions of trust and distrust in AI and their relevant considerations. In this systematic literature review, after conceptualizing trust in the current AI literature, we investigate trust in different types of human-machine interaction and its impact on technology acceptance across domains. In addition, we propose a taxonomy of technical (i.e., safety, accuracy, robustness) and non-technical, axiological (i.e., ethical, legal, and mixed) trustworthiness metrics, along with some trustworthiness measurements. Moreover, we examine major trust-breakers in AI (e.g., threats to autonomy and dignity) and trust-makers, and propose future directions and probable solutions for the transition to trustworthy AI.

Overview

  • The paper conducts a systematic review of trust and distrust in AI, covering their foundations, their impact on technology acceptance, and the distinction between technical and non-technical trust measures.

  • It highlights the unique challenges of AI's autonomous features in human-machine interaction and the delicate balance required in building trust.

  • The study explores both technical (safety, accuracy, robustness) and non-technical (ethical, legal, socio-ethical) metrics of AI trustworthiness and how they are measured.

  • It proposes a trustworthy AI framework and future research directions focused on ethical integrity, legal compliance, and enhancing trust in human-AI interactions.

Trust in AI: A Systematic Review and Proposition of a Trustworthy AI Framework

Methodology and Findings

This study presents a comprehensive literature review focused on the diverse aspects of trust and distrust in AI. It explores the foundational theories, definitions, and models associated with trust in AI, alongside the impact of these elements on technology acceptance across various domains. The review further distinguishes between technical trustworthiness metrics (safety, accuracy, robustness) and non-technical ones (ethical, legal, and mixed considerations). The primary objective is to uncover the dimensions of trust and distrust, identifying trust-breakers such as threats to autonomy and dignity, as well as trust-builders in AI contexts.

Trust in Human-Machine Interaction

The literature systematically categorizes human-AI interaction into different models, emphasizing the importance of understanding the nuanced dynamics of trust in these interactions. It highlights the intricate balance between a human's trust stance and the technology-based factors of AI, underscoring the unique challenges posed by AI's autonomous and unpredictable nature. Unlike traditional technologies, AI's capability to evolve and make independent decisions introduces complex new layers to building and sustaining trust.

Trustworthiness Metrics and Measurement

The review examines both technical and axiological metrics of trustworthiness in AI systems. Technical metrics, such as system safety, accuracy, and robustness, form the groundwork of an AI system's reliability from a performance standpoint. In parallel, it sheds light on non-technical, value-driven metrics such as ethical considerations, compliance with legal standards, and the socio-ethical impact of AI systems, painting a holistic picture of trustworthiness in AI.
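To make the two-branch taxonomy concrete, the split described above can be sketched as a small data structure. This is an illustrative sketch only: the branch and metric names come from the paper's taxonomy, while the `classify_metric` helper is a hypothetical convenience, not something the paper defines.

```python
# Sketch of the review's trustworthiness-metric taxonomy:
# technical metrics vs. non-technical, axiological metrics.
TAXONOMY = {
    "technical": ["safety", "accuracy", "robustness"],
    "non_technical": ["ethical", "legal", "mixed"],
}

def classify_metric(name: str) -> str:
    """Return the taxonomy branch a metric belongs to, or 'unknown'."""
    for branch, metrics in TAXONOMY.items():
        if name in metrics:
            return branch
    return "unknown"
```

For example, `classify_metric("robustness")` falls under the technical branch, while `classify_metric("legal")` falls under the non-technical, axiological branch.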

Challenges and Barriers in Building Trust

The synthesis of the literature underlines the multifaceted challenges and barriers to fostering trust in AI. Trust-building in AI goes beyond improving system accuracy or explainability, tapping into perceptions, user experiences, and societal norms. The review also points out the nuanced trade-offs between privacy and transparency, advocating a balanced approach that nurtures trust without breaching user confidentiality or autonomy.

Future Directions and Trust Framework

Looking to future research, the paper proposes developing standardized trust and trustworthiness models that capture a multidimensional view of human-AI interaction. It also argues for establishing authoritative bodies to oversee and ensure the alignment of AI systems with ethical, legal, and societal values, grounding trust in AI technologies.

Additionally, the proposed trustworthy AI framework seeks to facilitate a transition toward systems that are not only technically competent but also ingrained with ethical integrity and legal compliance. The framework aims to address the gap in systematic evaluations of trust in AI, setting a precedent for future research and development focused on cultivating a trustworthy AI ecosystem.

Concluding Remarks

In conclusion, the paper articulates the indispensable role of trust in the adoption and effectiveness of AI systems in real-life scenarios. By providing a comprehensive review and proposing a foundational framework for trustworthy AI, it contributes significantly to the ongoing dialogue on ethical AI development. The findings and propositions laid out underscore the critical need for a concerted effort towards understanding, measuring, and enhancing trust in AI across all dimensions of human interaction.
