Negotiating the Shared Agency between Humans & AI in the Recommender System (2403.15919v4)

Published 23 Mar 2024 in cs.HC and cs.CY

Abstract: Smart recommendation algorithms have revolutionized content delivery and improved efficiency across various domains. However, concerns about user agency arise from the algorithms' inherent opacity (information asymmetry) and one-way output (power asymmetry). This study introduces a dual-control mechanism aimed at enhancing user agency, empowering users to manage both data collection and, as a novel contribution, the degree of algorithmically tailored content they receive. In a between-subject experiment with 161 participants, we evaluated the impact of varying levels of transparency and control on user experience. Results show that transparency alone is insufficient to foster a sense of agency, and may even exacerbate disempowerment compared to displaying outcomes directly. Conversely, combining transparency with user controls, particularly those allowing direct influence on outcomes, significantly enhances user agency. This research provides a proof of concept for a novel approach and lays the groundwork for designing more user-centered recommender systems that emphasize user autonomy and fairness in AI-driven content delivery.

