PALR: Personalization Aware LLMs for Recommendation

(2305.07622)
Published May 12, 2023 in cs.IR , cs.AI , and cs.CL

Abstract

LLMs have recently received significant attention for their exceptional capabilities. Despite extensive efforts to develop general-purpose LLMs for a wide range of NLP tasks, their potential in recommender systems remains comparatively underexplored. In this paper, we propose a novel framework, named PALR, which aims to combine user history behaviors (such as clicks, purchases, and ratings) with LLMs to generate items preferred by users. Specifically, we first use user/item interactions as guidance for candidate retrieval. Then we adopt an LLM-based ranking model to generate recommended items. Unlike existing approaches that typically adopt general-purpose LLMs for zero/few-shot recommendation or train small language models (with fewer than 1 billion parameters), which cannot fully elicit LLMs' reasoning abilities or leverage rich item-side parametric knowledge, we fine-tune a 7-billion-parameter LLM for the ranking purpose. This model takes retrieval candidates in natural-language format as input, with an instruction that explicitly asks it to select results from the input candidates during inference. Our experimental results demonstrate that our solution outperforms state-of-the-art models on various sequential recommendation tasks.

Overview

  • PALR introduces a personalized recommendation system that integrates user interaction history with LLMs to improve item suggestions.

  • User profiles are generated from historical interactions via an LLM, which are then used by a retrieval module to select candidate items.

  • The framework involves fine-tuning a 7-billion-parameter LLM, LLaMA, for recommendation tasks using natural-language prompts derived from user behaviors (a prompt sketch follows this list).

  • Experimental results using MovieLens-1M and Amazon Beauty datasets show PALR outperforms state-of-the-art models in sequential recommendation tasks.

  • The PALR framework demonstrates potential for enhancing LLM utilization in recommendation systems and prompts further research into optimizing LLMs for such tasks.
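
As a rough illustration of the natural-language ranking prompt mentioned above, the sketch below shows how a history and candidate list might be formatted into a selection instruction. The paper's exact template is not reproduced here, so the wording, field names, and the `build_ranking_prompt` helper are assumptions.

```python
# Hypothetical sketch of a PALR-style ranking prompt; the exact template used
# in the paper is not shown here, so wording and structure are assumptions.
def build_ranking_prompt(user_history, candidates, k=10):
    """Format a user's interaction history and retrieved candidates as a
    natural-language instruction for the fine-tuned LLM ranker."""
    history_text = ", ".join(user_history)
    candidate_text = "\n".join(f"- {item}" for item in candidates)
    return (
        f"The user has interacted with the following items: {history_text}.\n"
        f"Candidate items:\n{candidate_text}\n"
        f"Select the {k} candidates the user is most likely to interact with "
        "next, ordered by likelihood. Only choose items from the list above."
    )

print(build_ranking_prompt(
    user_history=["Toy Story", "The Matrix", "Inception"],
    candidates=["Interstellar", "Frozen", "Up", "Blade Runner"],
    k=2,
))
```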

Introduction to PALR

The paper presents a new framework called PALR (Personalization Aware LLMs for Recommendation), designed to enhance recommender systems by integrating users' historical interactions, such as clicks, purchases, and ratings, with LLMs to generate item recommendations tailored to each user. The authors propose a novel approach to utilizing LLMs for recommendation, emphasizing the importance of user personalization.

PALR: A Novel Recommendation Framework

The essence of the PALR framework is a multi-step process that first generates user profiles using an LLM based on their interactions with items. A retrieval module then pre-filters candidates from the vast pool of items based on these profiles. Importantly, any retrieval algorithm can be employed in this stage. Finally, the LLM is used to rank these candidates according to the user's historical behaviors.
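
A minimal sketch of this three-stage flow is shown below. Every component is a placeholder (the profile prompt, the `retriever` object and its `top_n` method, and the ranker interface are all assumptions), since the paper deliberately leaves the retrieval stage pluggable.

```python
# Minimal sketch of the PALR pipeline under assumed interfaces; PALR itself
# allows any retrieval algorithm to be swapped into the middle stage.

def generate_profile(llm, interactions):
    """Stage 1: summarize the user's history into natural-language preferences."""
    prompt = ("Summarize this user's preferences as a few keywords, given the "
              f"items they interacted with: {', '.join(interactions)}")
    return llm(prompt)

def retrieve_candidates(retriever, profile, interactions, n=50):
    """Stage 2: pre-filter the item pool; any retrieval model can plug in here."""
    return retriever.top_n(profile, interactions, n)

def rank_candidates(llm_ranker, interactions, candidates, k=10):
    """Stage 3: the fine-tuned LLM selects and orders items from the candidates."""
    prompt = (f"User history: {', '.join(interactions)}\n"
              f"Candidates: {', '.join(candidates)}\n"
              f"Pick the {k} candidates the user is most likely to engage with next.")
    return llm_ranker(prompt)

def palr_recommend(llm, retriever, llm_ranker, interactions, k=10):
    """Compose the three stages into a single recommendation call."""
    profile = generate_profile(llm, interactions)
    candidates = retrieve_candidates(retriever, profile, interactions)
    return rank_candidates(llm_ranker, interactions, candidates, k)
```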

Fine-Tuning LLM for Task Specificity

Critical to PALR's success is fine-tuning a 7-billion-parameter LLM (the LLaMA model) to accommodate the peculiarities of recommendation tasks. This process converts user behaviors into natural-language prompts the model can learn from during training, teaching it to discern patterns in user engagement and thus generate relevant item recommendations. The framework was tested on two different datasets and outperformed existing state-of-the-art models on various sequential recommendation tasks.
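
The sketch below illustrates one plausible way to turn an interaction sequence into an instruction-tuning example of this kind. The template wording and the `make_training_example` helper are assumptions, not the paper's exact format.

```python
# Hypothetical construction of one instruction-tuning example; the paper
# fine-tunes a LLaMA-7B model on prompts of roughly this shape.
def make_training_example(history, candidates, next_item):
    """Build an (instruction, output) pair: the model sees the ordered history
    plus a candidate list containing the true next item, and must emit it."""
    assert next_item in candidates, "target must appear among the candidates"
    instruction = (
        f"A user interacted with these items in order: {', '.join(history)}.\n"
        f"Candidates: {', '.join(candidates)}\n"
        "Which candidate will the user interact with next?"
    )
    return {"instruction": instruction, "output": next_item}

example = make_training_example(
    history=["lipstick A", "mascara B"],
    candidates=["shampoo C", "eyeliner D", "mascara E"],
    next_item="eyeliner D",
)
```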

Experimental Results and Future Implications

Experiments conducted on two public datasets, MovieLens-1M and Amazon Beauty, demonstrated that PALR significantly outperforms state-of-the-art methods. Notably, PALR proved effective at re-ranking items, yielding substantial improvements in sequential recommendation compared to traditional approaches. The findings encourage future exploration into optimizing LLMs for recommendation tasks, balancing their powerful capabilities against the need for computational efficiency and reduced latency.
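
Sequential recommenders are typically scored with Hit Rate@K and NDCG@K on a held-out next item; the minimal sketch below shows those metrics for context (the metric choice is illustrative, not a claim about the paper's exact evaluation protocol).

```python
import math

# Standard next-item metrics, shown for illustration rather than as the
# paper's exact evaluation setup.
def hit_rate_at_k(ranked_items, target, k=10):
    """1.0 if the held-out target appears in the top-k recommendations."""
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k=10):
    """With a single relevant item, NDCG@k reduces to 1 / log2(rank + 2)
    for a 0-based rank, and 0 if the target is missed."""
    if target in ranked_items[:k]:
        return 1.0 / math.log2(ranked_items.index(target) + 2)
    return 0.0
```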
