Low rank tensor completion with sparse regularization in a transformed domain (1911.08082v1)

Published 19 Nov 2019 in math.NA, cs.LG, and cs.NA

Abstract: Tensor completion is a challenging problem with many applications. Numerous models based on a low-rank prior of the tensor have been proposed; however, the low-rank prior alone may not suffice to recover the original tensor from the observed incomplete tensor. In this paper, we propose a tensor completion method that exploits both the low-rank and sparse priors of the tensor. Specifically, the tensor completion task is formulated as a low-rank minimization problem with a sparse regularizer. The low-rank property is captured by the tensor truncated nuclear norm based on the tensor singular value decomposition (T-SVD), which approximates the tensor tubal rank more closely than the tensor nuclear norm. The sparse regularizer is an $\ell_{1}$-norm in the discrete cosine transform (DCT) domain, which better exploits the local sparsity of the completed data. To solve the optimization problem, we employ the alternating direction method of multipliers (ADMM), in which each subproblem has a closed-form solution. Extensive experiments on real-world images and videos show that the proposed method outperforms existing state-of-the-art methods.
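A plausible formulation consistent with the abstract (notation assumed, since the paper body is not shown here) is

$$\min_{\mathcal{X}} \; \|\mathcal{X}\|_{r,*} + \lambda \, \|\mathcal{D}(\mathcal{X})\|_{1} \quad \text{s.t.} \quad \mathcal{P}_{\Omega}(\mathcal{X}) = \mathcal{P}_{\Omega}(\mathcal{M}),$$

where $\|\cdot\|_{r,*}$ denotes the tensor truncated nuclear norm under the T-SVD, $\mathcal{D}$ is the DCT, $\mathcal{M}$ is the observed tensor, and $\Omega$ indexes the observed entries. The sketch below is an illustrative reconstruction under stated assumptions, not the authors' exact algorithm: it uses a consensus ADMM splitting with two auxiliary copies of the tensor, approximates the truncated-nuclear-norm proximal step by leaving the top-$r$ tubal singular values unthresholded and soft-thresholding the rest, and all parameter names (`tau`, `lam`, `rho`, `rank_r`) are hypothetical.

```python
import numpy as np
from scipy.fft import dctn, idctn

def tsvd_truncated_svt(X, tau, rank_r):
    """Per-slice singular value thresholding in the Fourier domain (t-SVD).

    Approximates the prox of the truncated tubal nuclear norm by leaving the
    top-`rank_r` singular values of each frontal slice untouched and
    soft-thresholding the remainder (an illustrative simplification).
    """
    Xf = np.fft.fft(X, axis=2)
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        s[rank_r:] = np.maximum(s[rank_r:] - tau, 0.0)
        Yf[:, :, k] = (U * s) @ Vh
    return np.real(np.fft.ifft(Yf, axis=2))

def dct_soft_threshold(X, lam):
    """Prox of lam * ||DCT(X)||_1: soft-threshold the DCT coefficients."""
    C = dctn(X, norm='ortho')
    C = np.sign(C) * np.maximum(np.abs(C) - lam, 0.0)
    return idctn(C, norm='ortho')

def admm_complete(M, mask, rank_r=5, lam=0.01, rho=1.0, n_iters=200):
    """Consensus-ADMM sketch for low-rank + DCT-sparse tensor completion.

    M    : observed tensor (zeros at missing entries), shape (n1, n2, n3)
    mask : boolean tensor, True where entries of M are observed
    """
    Z = M.copy()                # current completion estimate
    U1 = np.zeros_like(M)       # scaled dual variable for the X = Z constraint
    U2 = np.zeros_like(M)       # scaled dual variable for the Y = Z constraint
    for _ in range(n_iters):
        X = tsvd_truncated_svt(Z - U1, 1.0 / rho, rank_r)  # low-rank step
        Y = dct_soft_threshold(Z - U2, lam / rho)          # sparsity step
        Z = 0.5 * (X + U1 + Y + U2)                        # consensus average
        Z[mask] = M[mask]       # enforce the data-fidelity constraint on Omega
        U1 += X - Z             # dual ascent
        U2 += Y - Z
    return Z
```

With `lam = 0` this reduces to a plain tubal-nuclear-norm completion; the DCT term is expected to matter most when the data has smooth local structure, as in the natural images and video frames the abstract reports experiments on.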

Citations (12)