
Randomized Algorithms for Computation of Tucker decomposition and Higher Order SVD (HOSVD) (2001.07124v5)

Published 20 Jan 2020 in math.NA and cs.NA

Abstract: Big data analysis has become a crucial part of new emerging technologies such as the Internet of Things, cyber-physical analysis, deep learning, anomaly detection, etc. Among many other techniques, dimensionality reduction plays a key role in such analyses and facilitates feature selection and feature extraction. Randomized algorithms are efficient tools for handling big data tensors. They accelerate the decomposition of large-scale data tensors by reducing the computational complexity of deterministic algorithms and the communication among different levels of the memory hierarchy, which is the main bottleneck in modern computing environments and architectures. In this paper, we review recent advances in randomization for the computation of the Tucker decomposition and the Higher Order SVD (HOSVD). We discuss random projection and sampling approaches, single-pass and multi-pass randomized algorithms, and how to utilize them in the computation of the Tucker decomposition and the HOSVD. Simulations on synthetic and real datasets are provided to compare the performance of some of the best and most promising algorithms.
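The random-projection approach the abstract refers to can be sketched in a few lines of NumPy: for each mode, the tensor unfolding is multiplied by a Gaussian test matrix, the result is orthonormalized to obtain a factor matrix, and the core tensor is then the multilinear product of the tensor with the transposed factors. This is a minimal illustrative sketch of a randomized HOSVD, not the specific algorithms benchmarked in the paper; the function names and the oversampling parameter are choices made here for illustration.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def randomized_hosvd(T, ranks, oversample=5, seed=0):
    """Sketch of a randomized HOSVD: per mode, project the unfolding with a
    Gaussian test matrix and orthonormalize to estimate a factor matrix."""
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        Tn = unfold(T, mode)                                  # (I_n, prod of other dims)
        G = rng.standard_normal((Tn.shape[1], r + oversample))
        Y = Tn @ G                                            # random projection (sketch)
        Q, _ = np.linalg.qr(Y)                                # orthonormal basis for range(Y)
        factors.append(Q[:, :r])
    # Core tensor: contract T with the transpose of each factor along its mode.
    core = T
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

# Usage: build a tensor with exact multilinear rank (3, 3, 3) and check
# that the randomized decomposition recovers it (up to floating point).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3))
B = rng.standard_normal((40, 3))
C = rng.standard_normal((20, 3))
T = np.einsum('ia,jb,kc,abc->ijk', A, B, C, rng.standard_normal((3, 3, 3)))
core, Us = randomized_hosvd(T, ranks=(3, 3, 3))
That = core
for mode, U in enumerate(Us):
    That = np.moveaxis(np.tensordot(U, That, axes=(1, mode)), 0, mode)
rel_err = np.linalg.norm(T - That) / np.linalg.norm(T)
```

Because the test tensor here has exact low multilinear rank, a single randomized pass recovers it almost exactly; for full-rank data the truncation ranks and oversampling trade accuracy for speed, which is the trade-off the paper's simulations compare.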

Authors (7)
  1. Salman Ahmadi-Asl (15 papers)
  2. Stanislav Abukhovich (2 papers)
  3. Maame G. Asante-Mensah (2 papers)
  4. Andrzej Cichocki (73 papers)
  5. Anh Huy Phan (13 papers)
  6. Toshihisa Tanaka (19 papers)
  7. Ivan Oseledets (187 papers)
Citations (53)
