
PALR: Personalization Aware LLMs for Recommendation (2305.07622v3)

Published 12 May 2023 in cs.IR, cs.AI, and cs.CL

Abstract: LLMs have recently received significant attention for their exceptional capabilities. Despite extensive efforts in developing general-purpose LLMs that can be utilized in various NLP tasks, there has been less research exploring their potential in recommender systems. In this paper, we propose a novel framework, named PALR, which aims to combine user history behaviors (such as clicks, purchases, ratings, etc.) with LLMs to generate user preferred items. Specifically, we first use user/item interactions as guidance for candidate retrieval. Then we adopt an LLM-based ranking model to generate recommended items. Unlike existing approaches that typically adopt general-purpose LLMs for zero/few-shot recommendation testing or train small-sized LLMs (with less than 1 billion parameters), which cannot fully elicit LLMs' reasoning abilities or leverage rich item side parametric knowledge, we fine-tune a 7-billion-parameter LLM for the ranking purpose. This model takes retrieval candidates in natural language format as input, with an instruction explicitly asking it to select results from the input candidates during inference. Our experimental results demonstrate that our solution outperforms state-of-the-art models on various sequential recommendation tasks.

Citations (83)

Summary

  • The paper presents PALR, a novel framework that combines users' historical interactions with LLMs to generate personalized recommendations.
  • It fine-tunes a 7-billion-parameter LLaMa by converting user behaviors into natural language prompts, enhancing recommendation precision.
  • Experimental results on MovieLens-1M and Amazon Beauty demonstrate that PALR outperforms state-of-the-art methods in sequential recommendations.

Introduction to PALR

The paper presents a new framework called PALR (Personalization Aware LLMs for Recommendation), designed to enhance recommender systems by integrating users' historical interactions—such as clicks, purchases, and ratings—with LLMs to generate preferred item recommendations for users. The authors propose a novel approach to utilizing LLMs for recommendations, emphasizing the importance of user personalization.

PALR: A Novel Recommendation Framework

The essence of the PALR framework is a multi-step process that first generates user profiles using an LLM based on their interactions with items. A retrieval module then pre-filters candidates from the vast pool of items based on these profiles. Importantly, any retrieval algorithm can be employed in this stage. Finally, the LLM is used to rank these candidates according to the user's historical behaviors.
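The three stages above (profile generation, candidate retrieval, LLM-based ranking) can be sketched as follows. This is a minimal illustration with hypothetical function names and stub logic: the paper does not prescribe a specific retrieval algorithm, and the profile generation and ranking steps would be real LLM calls in practice.

```python
# Hedged sketch of the PALR multi-step pipeline. All names are illustrative;
# the profile and ranking stubs stand in for actual LLM prompts/calls.

def generate_user_profile(history):
    """Step 1: summarize interaction history into natural-language keywords.
    A real system would prompt an LLM here; this stub just collects genres."""
    genres = {genre for _, genre in history}
    return "User enjoys: " + ", ".join(sorted(genres))

def retrieve_candidates(profile, item_pool, k=3):
    """Step 2: pre-filter the item pool using the profile. Any retrieval
    algorithm can be plugged in; here we keep items whose genre matches."""
    return [item for item, genre in item_pool if genre in profile][:k]

def rank_with_llm(profile, candidates):
    """Step 3: stand-in for the fine-tuned LLM ranker, which receives the
    candidates in natural language and is instructed to select from them."""
    # Placeholder ordering instead of a real model call.
    return sorted(candidates)

history = [("The Matrix", "sci-fi"), ("Alien", "sci-fi"), ("Heat", "crime")]
item_pool = [("Blade Runner", "sci-fi"), ("Dune", "sci-fi"), ("Up", "animation")]

profile = generate_user_profile(history)
candidates = retrieve_candidates(profile, item_pool)
print(rank_with_llm(profile, candidates))  # ['Blade Runner', 'Dune']
```

The key design point is that retrieval narrows the vast item pool before the (expensive) LLM ranking step, so the ranker only ever sees a short candidate list.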

Fine-Tuning LLM for Task Specificity

Critical to PALR's success is fine-tuning a 7-billion-parameter LLM (the LLaMa model) to accommodate the peculiarities of recommendation tasks. This process includes converting user behavior into natural language prompts that the model can understand during training, imparting the ability to discern patterns in user engagement and thus generate relevant item recommendations. The framework's flexibility was tested using two different datasets and displayed superior performance to existing state-of-the-art models in various sequential recommendation tasks.
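To make the fine-tuning step concrete, the conversion of user behavior into a natural-language prompt might look like the sketch below. The exact template the authors use is not reproduced here; this is an illustrative instruction-style prompt that mirrors the paper's description (history plus candidates, with an explicit instruction to select only from the candidates).

```python
# Hypothetical prompt builder for instruction-tuning the ranker.
# The template wording is an assumption, not the paper's actual template.

def build_ranking_prompt(history, candidates):
    """Render a user's interaction history and retrieved candidates as a
    natural-language instruction the LLM can be fine-tuned (or prompted) on."""
    hist = "; ".join(history)
    cand = "\n".join(f"- {c}" for c in candidates)
    return (
        "Instruction: Given the user's interaction history, select and rank "
        "items ONLY from the candidate list below.\n"
        f"History: {hist}\n"
        f"Candidates:\n{cand}\n"
        "Answer:"
    )

prompt = build_ranking_prompt(
    ["The Matrix", "Alien"],
    ["Blade Runner", "Up", "Dune"],
)
print(prompt)
```

During fine-tuning, each training example would pair such a prompt with the ground-truth next item as the target completion, teaching the model to pick from the supplied candidates rather than hallucinate items.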

Experimental Results and Future Implications

Experiments conducted on two public datasets, MovieLens-1M and Amazon Beauty, demonstrated PALR's significant outperformance over state-of-the-art methods. Notably, PALR showcased its effectiveness in re-ranking items, suggesting substantial improvements in the context of sequential recommendations when compared to traditional approaches. The findings encourage future exploration into optimizing LLMs for recommendation tasks, aiming to balance their powerful capabilities with the need for computational efficiency and reduced latency.
