
An Empirical Exploration of Trust Dynamics in LLM Supply Chains (2405.16310v1)

Published 25 May 2024 in cs.HC and cs.AI

Abstract: With the widespread proliferation of AI systems, trust in AI is an important and timely topic to navigate. Researchers so far have largely employed a myopic view of this relationship. In particular, a limited number of relevant trustors (e.g., end-users) and trustees (i.e., AI systems) have been considered, and empirical explorations have remained in laboratory settings, potentially overlooking factors that impact human-AI relationships in the real world. In this paper, we argue for broadening the scope of studies addressing 'trust in AI' by accounting for the complex and dynamic supply chains that AI systems result from. AI supply chains entail various technical artifacts that diverse individuals, organizations, and stakeholders interact with, in a variety of ways. We present insights from an in-situ, empirical study of LLM supply chains. Our work reveals additional types of trustors and trustees and new factors impacting their trust relationships. These relationships were found to be central to the development and adoption of LLMs, but they can also be the terrain for uncalibrated trust and reliance on untrustworthy LLMs. Based on these findings, we discuss the implications for research on 'trust in AI'. We highlight new research opportunities and challenges concerning the appropriate study of inter-actor relationships across the supply chain and the development of calibrated trust and meaningful reliance behaviors. We also question the meaning of building trust in the LLM supply chain.
