GenRec: Generative Sequential Recommendation with Large Language Models (2407.21191v2)

Published 30 Jul 2024 in cs.IR, cs.AI, cs.CL, and cs.LG

Abstract: Sequential recommendation is a task to capture hidden user preferences from historical user item interaction data and recommend next items for the user. Significant progress has been made in this domain by leveraging classification based learning methods. Inspired by the recent paradigm of 'pretrain, prompt and predict' in NLP, we consider sequential recommendation as a sequence to sequence generation task and propose a novel model named Generative Recommendation (GenRec). Unlike classification based models that learn explicit user and item representations, GenRec utilizes the sequence modeling capability of Transformer and adopts the masked item prediction objective to effectively learn the hidden bidirectional sequential patterns. Different from existing generative sequential recommendation models, GenRec does not rely on manually designed hard prompts. The input to GenRec is textual user item sequence and the output is top ranked next items. Moreover, GenRec is lightweight and requires only a few hours to train effectively in low-resource settings, making it highly applicable to real-world scenarios and helping to democratize LLMs in the sequential recommendation domain. Our extensive experiments have demonstrated that GenRec generalizes on various public real-world datasets and achieves state-of-the-art results. Our experiments also validate the effectiveness of the proposed masked item prediction objective that improves the model performance by a large margin.


Summary

  • The paper introduces GenRec, a generative sequential recommendation method that reframes prediction as a sequence-to-sequence task using LLMs and Transformers.
  • It employs an encoder-decoder architecture with token, positional, and ID embeddings alongside a masking mechanism to capture user-item interaction patterns.
  • The model outperforms baselines on datasets like Amazon and Yelp, achieving superior Hit Ratios and NDCG scores even under low-resource conditions.

Generative Sequential Recommendation Using GenRec

The paper "GenRec: Generative Sequential Recommendation with LLMs" proposes a model, GenRec, which frames sequential recommendation as a sequence-to-sequence generation task using LLMs. This approach is designed to capture user preferences by modeling interactions with Transformer-based architectures, thus enhancing sequential recommendation systems.

Model Architecture

GenRec is built on a Transformer encoder-decoder framework. The encoder processes the textual user-item interaction sequence and the auto-regressive decoder generates the next item. The model sums token embeddings, positional embeddings, and user/item identifier embeddings to form the encoder input, capturing both textual content and sequence order (Figure 1).

Figure 1: An illustration of the architecture of GenRec. The input textual user item interaction sequence is first tokenized into a sequence of tokens. Token embedding, ID embedding and positional embedding are summed up to produce the bidirectional encoder input. In pretraining and finetuning, a random item is masked and the auto-regressive decoder generates the masked item. In inference, the decoder generates top 20 masked item predictions to calculate the evaluation metrics.
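A minimal sketch of how the three embeddings described above might be summed to form the bidirectional encoder input, assuming a PyTorch implementation; the vocabulary sizes, hidden dimension, and module structure are placeholder assumptions rather than the paper's actual code.

```python
import torch
import torch.nn as nn

class GenRecInputEmbedding(nn.Module):
    """Sum token, user/item ID, and positional embeddings to form the
    encoder input (cf. Figure 1). All sizes are placeholder assumptions."""

    def __init__(self, vocab_size=32128, num_ids=50000, max_len=512, d_model=512):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.id_emb = nn.Embedding(num_ids, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)

    def forward(self, token_ids, entity_ids):
        # token_ids, entity_ids: (batch, seq_len) integer tensors
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        return (self.token_emb(token_ids)
                + self.id_emb(entity_ids)
                + self.pos_emb(positions))
```

The summed embeddings then feed a standard bidirectional Transformer encoder, and the auto-regressive decoder generates the masked item.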

Masking Mechanism and Task Objectives

GenRec’s training paradigm incorporates a masked item prediction objective, aligning the pretraining and fine-tuning phases to maximize sequential context learning. During pretraining, a random item from the user sequence is masked, and GenRec predicts this masked item, using cross-entropy as the optimization criterion (Figure 2).

Figure 2: An illustration of different masking mechanisms in pretraining, finetuning and inference. In pretraining, a random item in the sequence is masked, while in finetuning and inference, masked items are appended to the end of the sequence. Note that the last two items in the user item interaction sequence are excluded in pretraining to avoid data leakage. Similarly, the last item in the sequence is excluded in finetuning.

The model ensures that the context learned during pretraining on masked sequences translates effectively to the fine-tuning stage, where the task is to predict the next item in the user interaction sequence by appending a [MASK] token at the end.
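The sketch below illustrates the two masking regimes described above: a random non-held-out item is masked for pretraining, while a [MASK] token is appended to the sequence for fine-tuning and inference. The helper names and the list-based item representation are assumptions for illustration, not the authors' code.

```python
import random

MASK = "[MASK]"

def pretrain_example(items):
    """Mask a random item, excluding the last two (the held-out
    validation/test targets) to avoid data leakage."""
    idx = random.randrange(len(items) - 2)          # assumes len(items) > 2
    return items[:idx] + [MASK] + items[idx + 1:], items[idx]

def finetune_example(items):
    """Exclude the last item (the test target) and append [MASK];
    the decoder learns to generate the held-out next item."""
    return items[:-2] + [MASK], items[-2]

def inference_example(items):
    """Append [MASK] to the full history; the decoder generates the
    top-ranked candidates for the next item."""
    return items + [MASK]
```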

Experimental Evaluation

GenRec was evaluated on multiple real-world datasets, including Amazon Sports, Amazon Beauty, and Yelp. The model achieved state-of-the-art results on metrics such as top-5 and top-10 Hit Ratio (HR@5, HR@10) and NDCG (NDCG@5, NDCG@10), indicating its effective modeling of sequential patterns in user-item interaction data.
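For reference, with a single ground-truth next item per user these ranking metrics reduce to the simple computations below; this is a generic sketch of the standard definitions, not code from the paper.

```python
import math

def hit_ratio_at_k(ranked_items, target, k):
    """1 if the ground-truth next item appears in the top-k list, else 0."""
    return int(target in ranked_items[:k])

def ndcg_at_k(ranked_items, target, k):
    """With a single relevant item, NDCG@k is 1 / log2(rank + 2) for a
    0-based rank within the top k, else 0."""
    if target in ranked_items[:k]:
        return 1.0 / math.log2(ranked_items.index(target) + 2)
    return 0.0

# Example: the target is ranked third among the generated candidates.
preds = ["item_9", "item_4", "item_7", "item_1", "item_3"]
print(hit_ratio_at_k(preds, "item_7", 5))        # 1
print(round(ndcg_at_k(preds, "item_7", 5), 3))   # 0.5
```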

The paper emphasizes that even under low-resource settings, GenRec maintains competitive performance compared to classification-based methods and complex generative ones, highlighting its efficiency and broad applicability.

Performance Comparison

In comparisons with baselines such as Caser, HGN, and P5, GenRec demonstrated superior performance, particularly on datasets with extensive user-item interaction histories. The summed token, positional, and ID embeddings, together with the cloze-style masked item prediction objective, account for much of this gain, supporting both personalization and effectiveness.

Conclusion

GenRec advances the application of LLMs in sequential recommendation by leveraging a generative framework that combines model simplicity with strong performance. Applying Transformer-based sequence modeling directly to textual interaction histories enables personalized and efficient recommendation generation without complex prompt engineering. Future work can explore adaptations of this framework to other recommendation domains built on the same generative architecture principles.

This paper presents a promising adaptation of LLMs, showing that generative sequence models whose pretraining and fine-tuning objectives are aligned can effectively drive personalization. These results point toward more adaptable and efficient recommendation systems built on foundational language models.
