Beyond Similarity: Relation Embedding with Dual Attentions for Item-based Recommendation (1911.04099v1)

Published 11 Nov 2019 in cs.IR

Abstract: Given the effectiveness and ease of use, Item-based Collaborative Filtering (ICF) methods have been broadly used in industry in recent years. The key of ICF lies in the similarity measurement between items, which however is a coarse-grained numerical value that can hardly capture users' fine-grained preferences toward different latent aspects of items from a representation learning perspective. In this paper, we propose a model called REDA (latent Relation Embedding with Dual Attentions) to address this challenge. REDA is essentially a deep learning based recommendation method that employs an item relation embedding scheme through a neural network structure for inter-item relations representation. A relational user embedding is then proposed by aggregating the relation embeddings between all purchased items of a user, which not only better characterizes user preferences but also alleviates the data sparsity problem. Moreover, to capture valid meta-knowledge that reflects users' desired latent aspects and meanwhile suppress their explosive growth towards overfitting, we further propose a dual attentions mechanism, including a memory attention and a weight attention. A relation-wise optimization method is finally developed for model inference by constructing a personalized ranking loss for item relations. Extensive experiments are implemented on real-world datasets and the proposed model is shown to greatly outperform state-of-the-art methods, especially when the data is sparse.

Citations (1)
