FusionDeepMF: A Dual Embedding based Deep Fusion Model for Recommendation (2210.05338v1)

Published 11 Oct 2022 in cs.IR and cs.LG

Abstract: Traditional Collaborative Filtering (CF) methods infer the personal preferences of users/customers for items or products from the rating matrix, which is usually sparse. Improved variants of CF therefore incorporate growing amounts of side information to mitigate the sparsity problem. Most existing recommendation work learns user-item latent feature embeddings with either a linear kernel or a non-linear kernel alone, which is not sufficient to capture complex user-item features from users' side information. Recently, some researchers have turned to hybrid models that learn some features with non-linear kernels and others with linear kernels, but it is difficult to know in advance which features are learned accurately by linear kernels and which by non-linear kernels. To overcome this problem, we propose a novel deep fusion model named FusionDeepMF. Its novel contributions are i) learning from the user-item rating matrix and side information through linear and non-linear kernels simultaneously, and ii) a tuning parameter that determines the trade-off between the dual embeddings generated by the linear and non-linear kernels. Extensive experiments on online review datasets establish that FusionDeepMF markedly outperforms other baseline approaches. Empirical evidence also shows that FusionDeepMF achieves better performance than the linear kernel of Matrix Factorization (MF) and the non-linear kernel of a Multi-layer Perceptron (MLP) used alone.
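
To make the dual-embedding idea concrete, the sketch below shows one plausible way to fuse a linear (MF-style) interaction and a non-linear (MLP-style) interaction with a trade-off parameter, in the spirit of the abstract. The class name DualEmbeddingFusion, the parameter alpha, the layer sizes, and the embedding dimension are illustrative assumptions, not the authors' actual FusionDeepMF implementation.

# Minimal PyTorch sketch (assumptions noted above): a linear MF branch and a
# non-linear MLP branch over separate user/item embedding tables, blended by
# a trade-off parameter alpha before producing a rating score.
import torch
import torch.nn as nn

class DualEmbeddingFusion(nn.Module):
    def __init__(self, num_users: int, num_items: int, dim: int = 32, alpha: float = 0.5):
        super().__init__()
        # Separate embedding tables for the linear and non-linear branches.
        self.user_mf = nn.Embedding(num_users, dim)
        self.item_mf = nn.Embedding(num_items, dim)
        self.user_mlp = nn.Embedding(num_users, dim)
        self.item_mlp = nn.Embedding(num_items, dim)
        # Non-linear branch: an MLP over the concatenated user/item embeddings.
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
        )
        self.out = nn.Linear(dim, 1)
        self.alpha = alpha  # trade-off between the dual embeddings

    def forward(self, users: torch.Tensor, items: torch.Tensor) -> torch.Tensor:
        # Linear (MF-style) branch: element-wise product of embeddings.
        mf_vec = self.user_mf(users) * self.item_mf(items)
        # Non-linear (MLP-style) branch.
        mlp_vec = self.mlp(torch.cat([self.user_mlp(users), self.item_mlp(items)], dim=-1))
        # Weighted fusion of the dual embeddings, then a single rating score.
        fused = self.alpha * mf_vec + (1.0 - self.alpha) * mlp_vec
        return self.out(fused).squeeze(-1)

if __name__ == "__main__":
    model = DualEmbeddingFusion(num_users=1000, num_items=500)
    users = torch.tensor([3, 42, 7])
    items = torch.tensor([10, 0, 499])
    print(model(users, items))  # predicted ratings, shape (3,)

Setting alpha to 1.0 reduces this sketch to a pure MF-style model and 0.0 to a pure MLP-style model, which mirrors the baselines the abstract compares against; in the paper the trade-off is a tuned hyperparameter rather than a fixed constant.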
