A Neural Attention Model for Adaptive Learning of Social Friends' Preferences (1907.01644v1)

Published 29 Jun 2019 in cs.IR, cs.LG, cs.SI, and stat.ML

Abstract: Social-based recommendation systems exploit the selections of friends to combat the data sparsity on user preferences, and improve the recommendation accuracy of the collaborative filtering strategy. The main challenge is to capture and weigh friends' preferences, as in practice they do not necessarily match. In this paper, we propose a Neural Attention mechanism for Social collaborative filtering, namely NAS. We design a neural architecture to carefully compute the non-linearity in friends' preferences by taking into account the social latent effects of friends on user behavior. In addition, we introduce a social behavioral attention mechanism to adaptively weigh the influence of friends on user preferences and consequently generate accurate recommendations. Our experiments on publicly available datasets demonstrate the effectiveness of the proposed NAS model over other state-of-the-art methods. Furthermore, we study the effect of the proposed social behavioral attention mechanism and show that it is a key factor in our model's performance.
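
The core idea in the abstract, attention weights that adaptively score each friend's influence on a user's preferences, can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical rendering of that general mechanism, not the paper's actual NAS architecture: the embedding dimension, the MLP used to score user-friend pairs, and the additive combination of personal and social signals are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SocialAttention(nn.Module):
    """Minimal sketch: attention-weighted aggregation of friends' latent
    preferences for social collaborative filtering. Illustrative only;
    dimensions and layer choices are assumptions, not the NAS model."""

    def __init__(self, num_users, num_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)
        self.item_emb = nn.Embedding(num_items, dim)
        # Scores how relevant each friend is to the target user
        # from the concatenated [user ; friend] embeddings (assumed form).
        self.att = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1)
        )

    def forward(self, user, item, friends):
        # user: (B,), item: (B,), friends: (B, F) friend user indices
        u = self.user_emb(user)               # (B, d)
        v = self.item_emb(item)               # (B, d)
        f = self.user_emb(friends)            # (B, F, d)
        # Attention logits for each (user, friend) pair.
        u_rep = u.unsqueeze(1).expand_as(f)   # (B, F, d)
        logits = self.att(torch.cat([u_rep, f], dim=-1)).squeeze(-1)  # (B, F)
        # Softmax over friends: different friends get different weights.
        weights = F.softmax(logits, dim=-1)                # (B, F)
        social = (weights.unsqueeze(-1) * f).sum(dim=1)    # (B, d)
        # Combine personal and social preferences (simple sum here)
        # and score the item by dot product.
        return ((u + social) * v).sum(dim=-1)              # (B,)

# Toy usage: one user (id 0), one item (id 3), three friends.
model = SocialAttention(num_users=100, num_items=50)
score = model(torch.tensor([0]), torch.tensor([3]), torch.tensor([[1, 2, 7]]))
```

The softmax over the friend axis is what makes the weighting adaptive per user-item pair, which is the property the abstract identifies as the key factor in the model's performance.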

Citations (2)