Tensor Networks for Latent Variable Analysis. Part I: Algorithms for Tensor Train Decomposition (1609.09230v1)

Published 29 Sep 2016 in cs.NA and math.OC

Abstract: Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model, which represents data as an ordered network of sub-tensors of order-2 or order-3, has so far not been widely considered in these fields, although this so-called tensor network decomposition has long been studied in quantum physics and scientific computing. In this study, we present novel algorithms and applications of tensor network decompositions, with a particular focus on the tensor train decomposition and its variants. The novel algorithms developed for the tensor train decomposition update, in an alternating way, one or several core tensors at each iteration, and exhibit enhanced mathematical tractability and scalability to exceedingly large-scale data tensors. The proposed algorithms are tested in classic paradigms of blind source separation from a single mixture, denoising, and feature extraction, and achieve superior performance over the widely used truncated algorithms for tensor train decomposition.
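
For context, here is a minimal NumPy sketch of the widely used truncated (sequential-SVD) tensor train decomposition, i.e. the baseline against which the paper's alternating algorithms are compared. The function name tt_svd and the uniform max_rank cap are illustrative assumptions, not the authors' code.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Sequential truncated SVD ("TT-SVD") of an order-N tensor into a
    train of order-3 cores, with every TT-rank capped at max_rank.
    Illustrative baseline sketch; not the paper's alternating algorithm."""
    dims = tensor.shape
    cores = []
    rank = 1                                  # TT-rank on the left of the current mode
    mat = tensor.reshape(rank * dims[0], -1)  # first unfolding
    for k in range(tensor.ndim - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, s.size)             # truncate to the rank cap
        cores.append(u[:, :r].reshape(rank, dims[k], r))
        mat = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        rank = r
    cores.append(mat.reshape(rank, dims[-1], 1))  # last core absorbs the remainder
    return cores

# Usage: decompose a random 8x8x8x8 tensor and contract the train back.
X = np.random.randn(8, 8, 8, 8)
cores = tt_svd(X, max_rank=4)
approx = cores[0]
for core in cores[1:]:
    approx = np.tensordot(approx, core, axes=(-1, 0))
approx = approx.reshape(X.shape)  # squeeze the boundary rank-1 axes
```

Each core is an order-3 array whose outer rank indices link it to its neighbours; the paper's contribution is to refine such a train by alternating updates of one or several cores at a time rather than fixing it from a single sequential sweep.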

Authors (6)
  1. Anh-Huy Phan (18 papers)
  2. Andrzej Cichocki (73 papers)
  3. Petr Tichavsky (15 papers)
  4. George Luta (4 papers)
  5. Danilo Mandic (57 papers)
  6. Andre Uschmajew (1 paper)
Citations (14)
