With the prosperity of e-commerce and web applications, Recommender Systems (RecSys) have become an important component of daily life, providing personalized suggestions that cater to user preferences. While Deep Neural Networks (DNNs) have made significant advances in enhancing recommender systems by modeling user-item interactions and incorporating textual side information, DNN-based methods still face limitations, such as difficulty in understanding users' interests and capturing textual side information, and an inability to generalize to various recommendation scenarios or to reason about their predictions. Meanwhile, the emergence of Large Language Models (LLMs), such as ChatGPT and GPT-4, has revolutionized the fields of NLP and AI, owing to their remarkable abilities in the fundamental tasks of language understanding and generation, as well as their impressive generalization and reasoning capabilities. As a result, recent studies have attempted to harness the power of LLMs to enhance recommender systems. Given the rapid evolution of this research direction, there is a pressing need for a systematic overview of existing LLM-empowered recommender systems that provides researchers in relevant fields with an in-depth understanding. Therefore, in this paper, we conduct a comprehensive review of LLM-empowered recommender systems from three aspects: pre-training, fine-tuning, and prompting. More specifically, we first introduce representative methods that harness the power of LLMs (as feature encoders) for learning representations of users and items. Then, we review recent techniques for enhancing recommender systems with LLMs under three paradigms, namely pre-training, fine-tuning, and prompting. Finally, we comprehensively discuss future directions in this emerging field.
The paper explores the integration of LLMs into Recommender Systems (RecSys), focusing on how LLMs enhance understanding, generation, and generalization in recommendation scenarios.
It discusses the importance of pre-training, highlighting methods like Masked Language Modeling (MLM) and Next Token Prediction (NTP), and details various fine-tuning strategies including full-model fine-tuning and parameter-efficient fine-tuning (PEFT).
The paper also explores prompting techniques such as In-context Learning (ICL) and Chain-of-Thought (CoT) prompting, and outlines future directions for improving recommender systems with LLMs.
The integration of LLMs into Recommender Systems (RecSys) has garnered significant interest due to the enhanced capabilities that LLMs provide in language understanding, generation, generalization, and reasoning. These models have transformed traditional recommender systems by addressing limitations like understanding user interests, capturing textual side information, and generalizing across various recommendation scenarios.
Pre-training of LLMs involves training on extensive and diverse corpora to assimilate broad linguistic patterns and structures. For recommender systems, two pre-training objectives play pivotal roles: Masked Language Modeling (MLM), in which randomly masked tokens are reconstructed from their bidirectional context, and Next Token Prediction (NTP), in which each token is predicted from the tokens that precede it. These objectives underscore the role of pre-training in enabling LLMs to process and predict user interactions effectively.
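To make the two objectives concrete, the following minimal sketch (not from the survey; the item tokens and function names are illustrative) shows how training examples are derived from a user's interaction history serialized as tokens:

```python
import random

def mlm_example(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    # Masked Language Modeling: hide random tokens; the model must
    # reconstruct them from bidirectional (left and right) context.
    rng = random.Random(seed)
    inputs, labels = list(tokens), [None] * len(tokens)
    n_mask = max(1, int(len(tokens) * mask_rate))  # mask at least one token
    for i in rng.sample(range(len(tokens)), n_mask):
        labels[i] = inputs[i]
        inputs[i] = mask_token
    return inputs, labels

def ntp_examples(tokens):
    # Next Token Prediction: every prefix predicts the token that follows,
    # using left-to-right context only.
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

# A hypothetical user's interaction history, serialized as item tokens.
history = ["item_12", "item_7", "item_93", "item_4"]
masked_inputs, mlm_targets = mlm_example(history)
ntp_pairs = ntp_examples(history)
```

MLM suits bidirectional encoders (e.g., BERT-style models), while NTP matches the autoregressive decoding used by GPT-style models.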
Fine-tuning is essential for adapting pre-trained LLMs to specific recommendation tasks. It is divided into full-model fine-tuning and parameter-efficient fine-tuning:
Full-model fine-tuning updates all of the model's weights on recommendation data, which offers maximal adaptation but incurs substantial computational and storage costs.
Parameter-efficient fine-tuning (PEFT) instead updates only a small subset of weights, or a small number of added weights (e.g., adapters or low-rank matrices), greatly reducing computational costs while retaining most of the adaptation benefit.
These strategies highlight the adaptability and efficiency of LLMs when fine-tuned for RecSys.
Prompting leverages task-specific prompts to direct LLMs without any parameter updates. Techniques include In-context Learning (ICL), which supplies a few demonstrations of the recommendation task directly in the prompt; Chain-of-Thought (CoT) prompting, which elicits intermediate reasoning steps before the final recommendation; and prompt tuning, which learns a small set of continuous prompt embeddings while keeping the LLM frozen.
These methods improve the recommendation performance of LLMs by making effective use of prompts and contextual data.
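As a concrete illustration (hypothetical prompt wording and item names, not taken from the survey), an ICL prompt for next-item recommendation can be assembled from a few demonstrations, with an optional CoT instruction appended:

```python
def build_icl_prompt(demonstrations, query_history, use_cot=False):
    # In-context Learning: a frozen LLM infers the task from a few
    # (history -> next item) demonstrations placed directly in the prompt.
    lines = ["Task: recommend the next item a user will interact with."]
    for hist, next_item in demonstrations:
        lines.append(f"History: {', '.join(hist)} -> Next: {next_item}")
    lines.append(f"History: {', '.join(query_history)} -> Next:")
    if use_cot:
        # Chain-of-Thought: ask for intermediate reasoning before the answer.
        lines.append("Let's think step by step about the user's preferences.")
    return "\n".join(lines)

demos = [(["The Matrix", "Inception"], "Interstellar")]  # hypothetical examples
prompt = build_icl_prompt(demos, ["Blade Runner", "Dune"], use_cot=True)
```

Because no weights change, the same frozen LLM can serve many recommendation scenarios simply by swapping the demonstrations in the prompt.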
Looking forward, several key areas in this emerging field remain open for exploration.
This survey provides an in-depth exploration of the integration of LLMs into recommender systems, covering pre-training, fine-tuning, and prompting paradigms. As research progresses, systematic and comprehensive approaches are necessary to address current limitations and unlock the full potential of LLMs in recommender systems. The future developments highlighted here will contribute to the evolution of more sophisticated, reliable, and efficient recommendation technologies.