Data Augmentation Strategies for Improving Sequential Recommender Systems (2203.14037v1)

Published 26 Mar 2022 in cs.IR and cs.LG

Abstract: Sequential recommender systems have recently achieved significant performance improvements through deep learning (DL) based methods. However, although various DL-based methods have been introduced, most of them focus only on changes to the network architecture and neglect other influential factors, including data augmentation. DL-based models require large amounts of training data to estimate their parameters well and achieve strong performance, which motivated early efforts to enlarge training data through data augmentation in the computer vision and speech domains. In this paper, we investigate whether various data augmentation strategies can improve the performance of sequential recommender systems, especially when the training dataset is not large enough. To this end, we propose a simple set of data augmentation strategies, all of which transform the original item sequences by direct corruption, and we describe how data augmentation changes performance. Extensive experiments on a recent DL-based model show that data augmentation helps the model generalize better and can significantly boost performance, especially when the amount of training data is small. Furthermore, our proposed strategies improve performance to a level better than or competitive with existing strategies suggested in prior work.

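To make the idea of corruption-based sequence augmentation concrete, here is a minimal Python sketch of the kind of operators the abstract describes: transforming a user's item-ID sequence by masking, dropping, or locally reordering items. The function names, corruption ratios, and mask token below are illustrative assumptions, not the paper's exact strategies.

import random

# Illustrative corruption-style augmentations for an item-ID sequence.
# Operator names and ratios are assumptions for demonstration only;
# the paper's proposed strategies may differ in detail.

def mask_items(seq, mask_token=0, ratio=0.2, rng=random):
    # Replace a random subset of items with a mask token.
    out = list(seq)
    for i in range(len(out)):
        if rng.random() < ratio:
            out[i] = mask_token
    return out

def drop_items(seq, ratio=0.2, rng=random):
    # Randomly delete items, keeping at least one item.
    kept = [item for item in seq if rng.random() >= ratio]
    return kept if kept else [rng.choice(list(seq))]

def reorder_items(seq, window=3, rng=random):
    # Shuffle a short contiguous sub-window to perturb local order.
    out = list(seq)
    if len(out) > window:
        start = rng.randrange(len(out) - window)
        segment = out[start:start + window]
        rng.shuffle(segment)
        out[start:start + window] = segment
    return out

if __name__ == "__main__":
    user_history = [12, 7, 93, 5, 41, 8, 60]
    print(mask_items(user_history))
    print(drop_items(user_history))
    print(reorder_items(user_history))

In practice, such corrupted copies of each training sequence would be added to (or sampled alongside) the original data, which is where the reported gains in low-data regimes would come from.
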
Citations (7)
