
Enhancing Deep Knowledge Tracing via Diffusion Models for Personalized Adaptive Learning (2405.05134v1)

Published 25 Apr 2024 in cs.CY, cs.AI, and cs.LG

Abstract: In contrast to pedagogies such as evidence-based teaching, personalized adaptive learning (PAL) distinguishes itself by closely monitoring the progress of individual students and tailoring the learning path to their unique knowledge and needs. A crucial technique for effective PAL is knowledge tracing, which models a student's evolving knowledge state to predict future performance; based on these predictions, personalized recommendations for resources and learning paths can be made. Recent advances in deep learning have enhanced knowledge tracing through Deep Knowledge Tracing (DKT). This paper introduces generative AI models to further enhance DKT. Generative models, themselves rooted in deep learning, are trained to produce synthetic data and have been used to address data scarcity across fields such as natural language processing (NLP) and computer vision (CV). This study tackles the shortage of student learning records in order to improve DKT performance for PAL. Specifically, it employs TabDDPM, a diffusion model for tabular data, to generate synthetic educational records that augment the training data for DKT. The method's effectiveness is validated through extensive experiments on the ASSISTments datasets. The results demonstrate that data generated by TabDDPM significantly improves DKT performance, particularly in scenarios with little training data and large test sets.
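The augmentation pipeline rests on denoising diffusion probabilistic models (DDPMs), which TabDDPM adapts to tabular records. As a rough illustration of the underlying mechanism, the sketch below implements only the standard DDPM forward (noising) process on numeric features; it does not use TabDDPM's actual API, and the names `beta_schedule` and `q_sample` are assumptions for illustration.

```python
import numpy as np

def beta_schedule(T: int, beta_min: float = 1e-4, beta_max: float = 0.02) -> np.ndarray:
    """Linear variance schedule beta_1..beta_T, as in standard DDPMs."""
    return np.linspace(beta_min, beta_max, T)

def q_sample(x0: np.ndarray, t: int, alpha_bar: np.ndarray,
             rng: np.random.Generator) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

T = 1000
betas = beta_schedule(T)
alpha_bar = np.cumprod(1.0 - betas)  # abar_t = prod_{s<=t} (1 - beta_s)

rng = np.random.default_rng(0)
# Stand-in for numeric features of student learning records
# (e.g. response time, attempt counts), NOT real ASSISTments data.
x0 = rng.standard_normal((512, 8))
x_end = q_sample(x0, T - 1, alpha_bar, rng)
# By t = T the signal is nearly destroyed and x_T is close to N(0, I);
# a learned reverse (denoising) network then generates new synthetic
# records from pure noise, which are appended to the DKT training set.
```

In the paper's setting, the generated tabular records are concatenated with the real training records before fitting the DKT model; the diffusion model itself is trained only on the (small) real training split.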

