Knowledge Distillation Approaches for Accurate and Efficient Recommender System (2407.13952v1)

Published 19 Jul 2024 in cs.IR and cs.AI

Abstract: Despite its breakthroughs in classification problems, knowledge distillation (KD) for recommendation models and ranking problems has not been well studied in the previous literature. This dissertation is devoted to developing knowledge distillation methods for recommender systems that fully improve the performance of a compact model. We propose novel distillation methods designed for recommender systems, categorized by their knowledge sources as follows: (1) Latent knowledge: we propose two methods that transfer latent knowledge of user/item representations. They effectively transfer knowledge of niche tastes with a balanced distillation strategy that prevents the KD process from being biased toward a small number of large preference groups. We also propose a new method that transfers user/item relations in the representation space; it selectively transfers essential relations in light of the compact model's limited capacity. (2) Ranking knowledge: we propose three methods that transfer ranking knowledge from the recommendation results. They formulate the KD process as a ranking-matching problem and transfer the knowledge via a listwise learning strategy. Further, we present a new learning framework that compresses the ranking knowledge of heterogeneous recommendation models; it is developed to ease the computational burden of model ensembles, a dominant solution in many recommendation applications. We validate the benefits of the proposed methods and frameworks through extensive experiments. In summary, this dissertation sheds light on knowledge distillation approaches for a better accuracy-efficiency trade-off in recommendation models.
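For a concrete picture of the two knowledge sources the abstract names, below is a minimal PyTorch sketch: aligning the compact student's user/item representations with the teacher's (latent knowledge), and matching the teacher's item ranking with a listwise objective (ranking knowledge). The function names, the projection layer, the top-k truncation, and the exact loss forms are illustrative assumptions, not the dissertation's actual formulations.

```python
# Hypothetical sketch of two distillation signals for a recommender system;
# not the dissertation's method, just an illustration of the general idea.
import torch
import torch.nn as nn
import torch.nn.functional as F

def latent_distillation_loss(student_emb, teacher_emb, proj):
    """Align student user/item embeddings with the larger teacher's.

    proj: a learned linear layer mapping the student's low-dimensional
    space into the teacher's, since the compact model is smaller.
    """
    return F.mse_loss(proj(student_emb), teacher_emb)

def listwise_ranking_loss(student_scores, teacher_scores, k=50, tau=1.0):
    """KL divergence between teacher and student score distributions over
    the teacher's top-k recommended items (a ranking-matching objective).
    """
    topk = teacher_scores.topk(k, dim=-1).indices               # teacher's top-k items
    t = F.softmax(teacher_scores.gather(-1, topk) / tau, dim=-1)      # teacher ranking dist.
    s = F.log_softmax(student_scores.gather(-1, topk) / tau, dim=-1)  # student log-probs
    return F.kl_div(s, t, reduction="batchmean")

# Toy usage: 4 users, student dim 16, teacher dim 64, 1000 candidate items.
proj = nn.Linear(16, 64)
l_latent = latent_distillation_loss(torch.randn(4, 16), torch.randn(4, 64), proj)
l_rank = listwise_ranking_loss(torch.randn(4, 1000), torch.randn(4, 1000))
```

In practice, both losses would be added, with tunable weights, to the student's usual recommendation loss; the temperature tau and the cutoff k control how sharply the teacher's ranking is imitated.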
