
Knowledge Graphs as Context Sources for LLM-Based Explanations of Learning Recommendations (2403.03008v1)

Published 5 Mar 2024 in cs.AI

Abstract: In the era of personalized education, the provision of comprehensible explanations for learning recommendations is of great value for enhancing the learner's understanding of and engagement with the recommended learning content. LLMs and generative AI in general have recently opened new doors for generating human-like explanations for, and alongside, learning recommendations. However, their precision is still far from acceptable in a sensitive field like education. To harness the abilities of LLMs while still ensuring a high level of precision towards the intent of the learners, this paper proposes an approach that utilizes knowledge graphs (KGs) as a source of factual context for LLM prompts, reducing the risk of model hallucinations and safeguarding against wrong or imprecise information, while maintaining an application-intended learning context. We utilize the semantic relations in the knowledge graph to offer curated knowledge about learning recommendations. With domain experts in the loop, we design the explanation as a textual template, which is filled and completed by the LLM. Domain experts were integrated in the prompt engineering phase as part of a study, to ensure that explanations include information that is relevant to the learner. We evaluate our approach quantitatively using Rouge-N and Rouge-L measures, as well as qualitatively with experts and learners. Our results show enhanced recall and precision of the generated explanations compared to those generated solely by the GPT model, with a greatly reduced risk of generating imprecise information in the final learning explanation.


Summary

  • The paper demonstrates how KG-based context reduces LLM hallucinations in education.
  • It utilizes enriched GPT-4 prompts combining specific queries with semantic knowledge for tailored explanations.
  • Evaluation using Rouge metrics and expert feedback confirms improved accuracy and fewer irrelevant outputs.

Knowledge Graphs and Enhanced LLM Explanations in Education

Introduction

The paper "Knowledge Graphs as Context Sources for LLM-Based Explanations of Learning Recommendations" focuses on leveraging knowledge graphs (KGs) to improve the precision of explanations generated by LLMs in educational settings. It addresses the inherent challenges of using LLMs, such as hallucinations and inaccuracies, by embedding factual context derived from KGs into LLM prompts. The primary goal is to guide these models to produce more relevant, pedagogically sound explanations for learning recommendations, thereby maintaining a high level of precision, especially in sensitive fields like education.

Knowledge Graph Utilization and Structure

Knowledge graphs serve as structured repositories that amalgamate entities and their semantic relations, effectively organizing and representing domain-specific knowledge. In this context, the KGs are built from educational materials and are pivotal in providing factual context for LLM-generated explanations. The KG structure comprises levels such as learning goals, courses, topics, and OERs, with semantic relations established through text mining techniques (Figure 1).

Figure 1: Structural information added to the LLM context from the KG. Top: learning path as an output of the recommendation system. Bottom: recommended path as it appears in the KG. Area (A): hierarchical structure of the learning goal. Area (B): KG community around LO3 and LO4. Connection (C): semantic relation extracted by the relation extraction algorithm.

The KGs enrich the LLM prompts by providing hierarchical learning structures, semantic relationships, community groupings, and metadata from related learning objects. Such comprehensive contextualization steers the LLM away from potentially erroneous outputs.
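
To make this contextualization concrete, the sketch below shows one way the hierarchical structure, semantic relations, and metadata around a recommended learning object could be flattened into plain text for an LLM prompt. It is a minimal illustration under assumed names: the node labels (LearningGoal, OER), relation names (hasLearningObject, relatedTo), and sample data are hypothetical, not the paper's actual KG schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: node labels, relation names, and sample data are
# assumptions for demonstration, not the paper's actual KG schema.

@dataclass
class KGNode:
    node_id: str
    label: str                     # e.g. "LearningGoal", "Course", "Topic", "OER"
    title: str
    metadata: dict = field(default_factory=dict)

@dataclass
class KGEdge:
    source: str
    relation: str                  # e.g. "hasLearningObject", "relatedTo"
    target: str

def serialize_context(nodes: list[KGNode], edges: list[KGEdge]) -> str:
    """Flatten the KG neighbourhood of a recommendation into plain text
    that can be appended to an LLM prompt as factual context."""
    lines = ["Knowledge graph context:"]
    for n in nodes:
        meta = ", ".join(f"{k}={v}" for k, v in n.metadata.items())
        lines.append(f"- {n.label} '{n.title}'" + (f" ({meta})" if meta else ""))
    lines.append("Relations:")
    for e in edges:
        lines.append(f"- {e.source} --{e.relation}--> {e.target}")
    return "\n".join(lines)

# Hypothetical neighbourhood around two recommended learning objects
nodes = [
    KGNode("LG1", "LearningGoal", "Data Literacy"),
    KGNode("LO3", "OER", "Intro to Descriptive Statistics", {"format": "video"}),
    KGNode("LO4", "OER", "Visualizing Distributions", {"format": "article"}),
]
edges = [
    KGEdge("LG1", "hasLearningObject", "LO3"),
    KGEdge("LO3", "relatedTo", "LO4"),
]
print(serialize_context(nodes, edges))
```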

Methodology: LLM Prompt Design and Explanation Generation

The methodology involves crafting enriched prompts for the GPT-4 model, augmented by contextual information derived from KGs. The prompts consist of two key components: the query task and the contextual information.

  1. Query Design: The task-oriented query is designed with specific instructions aimed at minimizing deviation from intended educational objectives. This enhances the focus and relevance of the LLM's responses.
  2. Contextual Input: The contextual parts are populated with relevant data from the KGs, supplemented by expert inputs that define roles, domain-specific definitions, and supporting content. The domain experts ensure that the generated explanations align pedagogically with the learning goals (a minimal assembly sketch follows this list).
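
As a rough illustration of how these two components might be combined, the sketch below assembles an expert-defined role, the KG-derived context, and the task-oriented query into a chat prompt and sends it to a GPT-4 endpoint via the OpenAI Python client. The role wording, template text, and model settings are assumptions for illustration, not the paper's exact prompt.

```python
from openai import OpenAI  # assumes the OpenAI Python client is installed and OPENAI_API_KEY is set

def build_prompt(kg_context: str, learner_goal: str, template: str) -> list[dict]:
    """Combine the expert-defined role, KG-derived context, and the
    task-oriented query into a chat prompt (wording is illustrative)."""
    system_msg = (
        "You are a learning assistant. Use ONLY the knowledge graph context "
        "provided below; do not add facts that are not in it.\n\n" + kg_context
    )
    user_msg = (
        f"The learner's goal is: {learner_goal}.\n"
        "Fill in the following explanation template without changing its structure:\n" + template
    )
    return [
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ]

# Hypothetical inputs (the KG context would come from a serialization step as in the earlier sketch)
kg_context = "- OER 'Intro to Descriptive Statistics' (format=video)\n- LG1 --hasLearningObject--> LO3"
template = "This resource is recommended because <reason>. It supports your goal of <goal> by <benefit>."

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=build_prompt(kg_context, "Data Literacy", template),
    temperature=0.2,
)
print(response.choices[0].message.content)
```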

These structured prompts are processed by the GPT-4 model to fill predefined explanation templates, which are designed to integrate seamlessly with expert review and learner requirements (Figure 2).

Figure 2: Proposed approach for constructing the GPT-4 prompt with KG-based contextualization, together with the chatbot-based user interaction and the expert roles in designing the context and explanation templates.

Evaluation and Results

The evaluation strategy employs both quantitative measures, namely Rouge metrics, and qualitative feedback from learners and experts. Rouge metrics measure the overlap between human-written reference texts and model-generated explanations in terms of precision, recall, and F1, showing the enhanced accuracy of outputs from KG-enriched prompts (Figure 3).

Figure 3: Recall, precision, and F1-measure values of the Rouge metric, for both explanation types: 1) with KG-based contextualization (blue), and 2) without contextualization (gray).
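
Such scores can be reproduced with the open-source rouge-score package, as in the minimal sketch below; the reference and candidate texts are placeholders, and treating an expert-written explanation as the reference is an assumption about the evaluation setup.

```python
from rouge_score import rouge_scorer  # pip install rouge-score

# Placeholder texts: in this setting the reference would be an expert-written
# explanation and the candidate an LLM-generated one.
reference = "This course is recommended because it covers descriptive statistics, which your learning goal requires."
candidate = "The course is suggested as it introduces descriptive statistics needed for your learning goal."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, score in scores.items():
    print(f"{name}: recall={score.recall:.2f}, precision={score.precision:.2f}, f1={score.fmeasure:.2f}")
```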

Results indicate that explanations generated with KG context contain fewer irrelevant filler phrases than those generated without it, a finding consistent with both the quantitative data and the qualitative expert feedback. Domain experts emphasized that while the LLM generates coherent explanations, crafting reflection-level content still requires human expertise to tailor the content effectively to learner contexts.

Conclusion

This study demonstrates the improved precision of LLM-generated explanations achieved by integrating KG-derived context. By decreasing the likelihood of generating inaccurate or irrelevant explanations, the approach stands to contribute significantly to personalized educational practices. Future work includes comparing GPT-4 with other models, expanding the sample size, and integrating user-specific data in compliance with GDPR to further refine the relevance and customization of explanations in educational environments.
