
cuFasterTucker: A Stochastic Optimization Strategy for Parallel Sparse FastTucker Decomposition on GPU Platform (2210.06014v1)

Published 12 Oct 2022 in cs.DC

Abstract: Scientific data is currently growing at an unprecedented rate. Data in the form of tensors exhibits high-order, high-dimensional, and highly sparse features. Although tensor-based analysis methods are highly effective, the rapid growth in data size makes the original tensors impractical to process. Tensor decomposition factorizes a tensor into multiple low-rank matrices or tensors that tensor-based analysis methods can exploit. Tucker decomposition is one such algorithm: it decomposes an $n$-order tensor into $n$ low-rank factor matrices and a low-rank core tensor. However, most Tucker decomposition methods incur huge intermediate variables and a huge computational load, making them unable to process high-order, high-dimensional tensors. In this paper, we propose FasterTucker decomposition, based on FastTucker decomposition, a variant of Tucker decomposition, along with cuFasterTucker, an efficient parallel FasterTucker decomposition algorithm for the GPU platform. It has very low storage and computational requirements and effectively solves the problem of high-order, high-dimensional sparse tensor decomposition. Compared with the state-of-the-art algorithm, it achieves speedups of around $15\times$ and $7\times$ in updating the factor matrices and the core matrices, respectively.
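The abstract's description of Tucker decomposition (an $n$-order tensor expressed as a low-rank core tensor multiplied along each mode by a factor matrix) can be illustrated with a minimal dense 3-order sketch. This is not the paper's sparse GPU algorithm; the sizes and ranks below are hypothetical, and the reconstruction uses a plain `numpy.einsum` mode-product just to show the structure of the factorization.

```python
import numpy as np

# Hypothetical small sizes; the paper targets high-order, highly sparse
# tensors, but a dense 3-order example shows the shape of the factorization.
I, J, K = 6, 5, 4        # tensor dimensions
r1, r2, r3 = 2, 3, 2     # Tucker ranks (core tensor dimensions)

rng = np.random.default_rng(0)
G = rng.standard_normal((r1, r2, r3))   # low-rank core tensor
U = rng.standard_normal((I, r1))        # factor matrix for mode 1
V = rng.standard_normal((J, r2))        # factor matrix for mode 2
W = rng.standard_normal((K, r3))        # factor matrix for mode 3

# Tucker reconstruction: X = G x_1 U x_2 V x_3 W (mode-n products),
# i.e. X[i,j,k] = sum_{a,b,c} G[a,b,c] * U[i,a] * V[j,b] * W[k,c]
X = np.einsum('abc,ia,jb,kc->ijk', G, U, V, W)
print(X.shape)  # (6, 5, 4)
```

The storage point in the abstract follows directly from this layout: the factors hold $I r_1 + J r_2 + K r_3 + r_1 r_2 r_3$ numbers instead of the $IJK$ entries of the full tensor, which is what makes decomposition attractive for large sparse data.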

Authors (1)
