Block-Term Tensor Decomposition: Model Selection and Computation (2002.09759v2)

Published 22 Feb 2020 in math.NA and cs.NA

Abstract: The so-called block-term decomposition (BTD) tensor model has recently been receiving increasing attention due to its enhanced ability to represent systems and signals that are composed of blocks of rank higher than one, a scenario encountered in numerous and diverse applications. Its uniqueness and approximation have thus been thoroughly studied. Nevertheless, the challenging problem of estimating the BTD model structure, namely the number of block terms and their individual ranks, has only recently started to attract significant attention. In this paper, a novel method of BTD model selection and computation is proposed, based on the idea of imposing column sparsity jointly on the factors, in a hierarchical manner, and estimating the ranks as the numbers of factor columns of non-negligible magnitude. Following a block successive upper bound minimization (BSUM) approach, the proposed optimization problem is shown to give rise to an alternating hierarchical iteratively reweighted least squares (HIRLS) algorithm, which converges fast and enjoys high computational efficiency, as its iterations rely on small-sized sub-problems with closed-form solutions. Simulation results for both synthetic examples and a hyper-spectral image de-noising application are reported, which demonstrate the superiority of the proposed scheme over the state of the art in terms of rank-estimation success rate as well as computation time and rate of convergence.
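The model-selection idea described in the abstract, reading off the number of block terms and their individual ranks from the magnitudes of jointly penalized factor columns, admits a compact illustration. The following is a minimal sketch, not the paper's HIRLS algorithm: it constructs synthetic, jointly column-sparse BTD factors and estimates the ranks by counting columns of non-negligible magnitude. The variable names, the synthetic setup, and the relative threshold are all assumptions made for illustration.

```python
# Illustrative sketch of rank read-out from jointly column-sparse BTD
# factors (not the paper's algorithm). For R block terms with per-block
# ranks L_r, the stacked factors A and B have R * L_max columns each;
# columns beyond L_r within block r carry only negligible magnitude.
import numpy as np

rng = np.random.default_rng(0)

I, J, R, L_max = 30, 25, 3, 4
true_L = [2, 3, 1]  # true per-block ranks (to be recovered)

A = np.zeros((I, R * L_max))
B = np.zeros((J, R * L_max))
for r, L in enumerate(true_L):
    cols = slice(r * L_max, r * L_max + L)
    A[:, cols] = rng.standard_normal((I, L))
    B[:, cols] = rng.standard_normal((J, L))
# Small perturbation so that "inactive" columns are near-zero, not exactly zero.
A += 1e-6 * rng.standard_normal(A.shape)
B += 1e-6 * rng.standard_normal(B.shape)

# Joint column energy: a column counts as active only if it has
# non-negligible magnitude in A and B together.
col_energy = np.sqrt((A**2).sum(axis=0) + (B**2).sum(axis=0))

# Hierarchical read-out: per-block rank = number of non-negligible columns
# within that block; number of block terms = number of blocks with at
# least one active column. The relative threshold is an assumption.
tau = 1e-3 * col_energy.max()
active = (col_energy > tau).reshape(R, L_max)
L_hat = active.sum(axis=1)          # estimated L_r per block
R_hat = int((L_hat > 0).sum())      # estimated number of block terms

print("estimated per-block ranks:", L_hat)   # -> [2 3 1]
print("estimated number of blocks:", R_hat)  # -> 3
```

In the paper's scheme, such superfluous columns are driven toward zero by the hierarchical column-sparsity penalty over the course of the BSUM/HIRLS iterations rather than detected post hoc; the hard threshold above stands in for that shrinkage purely for illustration.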

Citations (2)
