Deep Trustworthy Knowledge Tracing (1805.10768v3)

Published 28 May 2018 in cs.AI

Abstract: Knowledge tracing (KT), a key component of an intelligent tutoring system, is a machine learning technique that estimates a student's mastery level based on his/her past performance. The objective of KT is to predict a student's response to the next question. Compared with traditional KT models, deep learning-based KT (DLKT) models show better predictive performance because of the representation power of deep neural networks. Various methods have been proposed to improve the performance of DLKT, but few studies have examined its reliability. In this work, we claim that existing DLKT models are not reliable in real education environments. To substantiate the claim, we show limitations of DLKT from various perspectives, such as knowledge state update failure, catastrophic forgetting, and non-interpretability. We then propose a novel regularization to address these problems. The proposed method allows us to achieve trustworthy DLKT. In addition, the proposed model, which is trained on scenarios with forgetting, can also be easily extended to scenarios without forgetting.
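To make the prediction task concrete, below is a minimal sketch of a generic DLKT model in the style of deep knowledge tracing: an LSTM reads a student's past (question, correctness) interactions and outputs, at each step, the predicted probability of answering each question correctly next. This is not the paper's proposed model or its regularization; the class name `SimpleDKT`, the hyperparameters, and the random toy data are illustrative assumptions.

```python
# Minimal DKT-style sketch (illustrative, not the paper's method).
# An LSTM consumes one-hot (question id, correctness) interactions and
# predicts per-question correctness probabilities for the next step.
import torch
import torch.nn as nn


class SimpleDKT(nn.Module):
    def __init__(self, num_questions: int, hidden_size: int = 64):
        super().__init__()
        # Each interaction is a one-hot of (question id, correctness),
        # giving 2 * num_questions input features.
        self.num_questions = num_questions
        self.lstm = nn.LSTM(2 * num_questions, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, num_questions)  # per-question logits

    def forward(self, interactions: torch.Tensor) -> torch.Tensor:
        # interactions: (batch, time, 2 * num_questions) one-hot history
        h, _ = self.lstm(interactions)
        # (batch, time, num_questions) predicted correctness probabilities,
        # which can be read as an estimated knowledge state over questions
        return torch.sigmoid(self.out(h))


# Toy usage with random data (assumed shapes, for illustration only).
num_q, batch, steps = 10, 4, 5
q_ids = torch.randint(num_q, (batch, steps))       # which question was attempted
correct = torch.randint(2, (batch, steps))          # whether it was answered correctly
x = torch.zeros(batch, steps, 2 * num_q)
x.scatter_(2, (q_ids + correct * num_q).unsqueeze(-1), 1.0)  # encode (q, correctness)

model = SimpleDKT(num_q)
pred = model(x)
print(pred.shape)  # torch.Size([4, 5, 10])
```

The limitations the paper highlights can be observed on such a model, e.g., whether the predicted probability for a question actually rises after the student answers it correctly (knowledge state update) and how predictions for older skills drift as new interactions arrive (catastrophic forgetting).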

Authors (5)
  1. Heonseok Ha (6 papers)
  2. Uiwon Hwang (14 papers)
  3. Yongjun Hong (4 papers)
  4. Jahee Jang (1 paper)
  5. Sungroh Yoon (163 papers)
Citations (6)
