Adversarial Collaborative Auto-encoder for Top-N Recommendation (1808.05361v1)

Published 16 Aug 2018 in cs.IR

Abstract: During the past decade, model-based recommendation methods have evolved from latent factor models to neural network-based models. Most of these techniques mainly focus on improving the overall performance, such as the root mean square error for rating predictions and hit ratio for top-N recommendation, where the users' feedback is considered as the ground truth. However, in real-world applications, the users' feedback is possibly contaminated by imperfect user behaviours, namely, careless preference selection. Such data contamination poses challenges for the design of robust recommendation methods. In this work, to address the above issue, we propose a general adversarial training framework for neural network-based recommendation models, which improves both the model robustness and the overall performance. We point out the tradeoffs between performance and robustness enhancement, with detailed instructions on how to strike a balance. Specifically, we implement our approach on the collaborative auto-encoder, followed by experiments on three publicly available datasets: MovieLens-1M, Ciao, and FilmTrust. We show that our approach outperforms highly competitive state-of-the-art recommendation methods. In addition, we carry out a thorough analysis of the noise impacts, as well as the complex interactions between model nonlinearity and noise levels. Through simple modifications, our adversarial training framework can be applied to a host of neural network-based models, whose robustness and performance are both expected to be enhanced.
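The core idea the abstract describes — training an auto-encoder on user feedback while also minimizing the loss under adversarial perturbations — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a toy linear auto-encoder over a synthetic implicit-feedback matrix, an FGSM-style sign perturbation of the input profile, and hand-derived gradients; the combined objective weights the adversarial term by a hypothetical coefficient `lam`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical implicit-feedback matrix: 8 users x 6 items, entries in {0, 1}.
R = (rng.random((8, 6)) > 0.6).astype(float)

n_items, k = R.shape[1], 3                  # k = latent dimension
W1 = rng.normal(0, 0.1, (k, n_items))       # encoder weights
W2 = rng.normal(0, 0.1, (n_items, k))       # decoder weights
eps, lam, lr = 0.05, 1.0, 0.01              # perturbation size, adversarial weight, step size

def forward(x):
    h = W1 @ x                              # linear encoder (nonlinearity omitted
    return W2 @ h, h                        # so gradients stay closed-form)

for epoch in range(200):
    for x in R:                             # one user's interaction vector at a time
        # Clean reconstruction loss L = ||x_hat - x||^2 and its input gradient
        # (reconstruction target held fixed).
        x_hat, h = forward(x)
        err = x_hat - x
        grad_x = W1.T @ (W2.T @ (2 * err))

        # FGSM-style adversarial perturbation of the user profile.
        x_adv = x + eps * np.sign(grad_x)
        x_hat_adv, h_adv = forward(x_adv)
        err_adv = x_hat_adv - x             # still reconstruct the clean profile

        # Manual gradients of the combined loss L + lam * L_adv.
        gW2 = 2 * (np.outer(err, h) + lam * np.outer(err_adv, h_adv))
        gW1 = 2 * (np.outer(W2.T @ err, x) + lam * np.outer(W2.T @ err_adv, x_adv))
        W1 -= lr * gW1
        W2 -= lr * gW2

scores = (W2 @ (W1 @ R.T)).T                # reconstructed preference scores
top1 = np.argmax(scores, axis=1)            # top-1 recommendation per user
print(scores.shape, top1.shape)
```

The robustness/performance tradeoff the abstract mentions is controlled here by `eps` and `lam`: larger values harden the model against contaminated feedback but can degrade clean-data accuracy.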

Citations (30)
