
Experimental Interface for Multimodal and Large Language Model Based Explanations of Educational Recommender Systems (2402.07910v1)

Published 29 Jan 2024 in cs.HC

Abstract: In the age of AI, providing learners with suitable and sufficient explanations of an AI-based recommendation algorithm's output is essential to enable them to make informed decisions about it. However, the rapid development of AI approaches for educational recommendations and their explainability is not accompanied by an equal level of evidence-based experimentation to evaluate the learning effect of those explanations. To address this issue, we propose an experimental web-based tool for evaluating multimodal and LLM-based explainability approaches. Our tool provides a comprehensive set of modular, interactive, and customizable explainability elements, which researchers and educators can utilize to study the role of individual and hybrid explainability methods. We design a two-stage evaluation of the proposed tool, with learners and with educators. Our preliminary results from the first stage show high acceptance of the tool's components, user-friendliness, and an induced motivation to use the explanations to explore more information about the recommendation.
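The abstract describes the tool as a set of modular, interactive, and customizable explainability elements that researchers and educators can enable or disable per study. The sketch below is one possible way such a configuration could look in a web front end; it is not taken from the paper, and all names (`ExplanationElement`, `buildExplanationPanel`, the example modalities) are illustrative assumptions.

```typescript
// Hypothetical sketch of modular explainability elements for an educational
// recommender UI. Names and structure are assumptions, not the paper's API.

type Modality = "text" | "chart" | "llm-chat" | "social-proof";

interface Recommendation {
  resourceTitle: string;
  reason: string; // rationale produced by the recommendation algorithm
}

interface ExplanationElement {
  id: string;           // unique key for the UI component
  modality: Modality;   // how the explanation is presented to the learner
  enabled: boolean;     // researchers/educators can toggle elements per study
  render: (rec: Recommendation) => string;
}

// Assemble only the enabled elements into the panel shown next to a recommendation.
function buildExplanationPanel(
  elements: ExplanationElement[],
  rec: Recommendation
): string[] {
  return elements.filter(e => e.enabled).map(e => e.render(rec));
}

// Example study configuration: a plain-text rationale plus an LLM-chat prompt.
const elements: ExplanationElement[] = [
  {
    id: "why-text",
    modality: "text",
    enabled: true,
    render: rec => `Recommended because ${rec.reason}.`,
  },
  {
    id: "llm-followup",
    modality: "llm-chat",
    enabled: true,
    render: rec => `Ask the assistant why "${rec.resourceTitle}" fits your goals.`,
  },
];

console.log(
  buildExplanationPanel(elements, {
    resourceTitle: "Intro to Linear Algebra",
    reason: "it matches your current course topic and skill level",
  })
);
```

Structuring elements this way would let an experimenter study individual explanation modalities in isolation or in hybrid combinations simply by toggling `enabled`, which matches the evaluation goal the abstract describes.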

