Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work (2306.15394v1)

Published 27 Jun 2023 in cs.CY, cs.AI, and cs.HC

Abstract: The increasing prevalence of AI in safety-critical contexts such as air-traffic control requires systems that are not only practical and efficient but also, to some extent, explainable to humans in order to be trusted and accepted. The present structured literature analysis examines n = 236 articles on the requirements for the explainability and acceptance of AI. Results include a comprehensive review of n = 48 articles on the information people need to perceive an AI as explainable, the information needed to accept an AI, and the representation and interaction methods that promote trust in an AI. Results indicate that the two main groups of users are developers, who require information about the internal operations of the model, and end users, who require information about AI results or behavior. Users' information needs vary in specificity, complexity, and urgency and must consider context, domain knowledge, and the user's cognitive resources. The acceptance of AI systems depends on information about the system's functions and performance, privacy and ethical considerations, goal-supporting information tailored to individual preferences, and information that establishes trust in the system. Information about the system's limitations and potential failures can increase acceptance and trust. Trusted interaction methods are human-like, including natural language, speech, text, and visual representations such as graphs, charts, and animations. Our results have significant implications for future human-centric AI systems and are suitable as input for further application-specific investigations of user needs.

Authors (6)
  1. Sabine Theis
  2. Sophie Jentzsch
  3. Fotini Deligiannaki
  4. Charles Berro
  5. Arne Peter Raulf
  6. Carmen Bruder
Citations (7)
