Abstract

Sparse Tucker Decomposition (STD) algorithms learn a core tensor and a set of factor matrices to obtain an optimal low-rank representation of a High-Order, High-Dimension, and Sparse Tensor (HOHDST). However, existing STD algorithms suffer from an explosion of intermediate variables, because forming these variables, i.e., Khatri-Rao products, Kronecker products, and matrix-matrix multiplications, involves all elements of the sparse tensor. This bottleneck prevents a deep fusion of efficient computation with big-data platforms. To overcome it, a novel stochastic optimization strategy (SGD_Tucker) is proposed for STD, which automatically divides the high-dimensional intermediate variables into small batches of intermediate matrices. Specifically, SGD_Tucker operates only on randomly selected small samples rather than on all elements, while maintaining the overall accuracy and convergence rate. In practice, SGD_Tucker offers two distinct advances over the state of the art. First, SGD_Tucker can prune the communication overhead for the core tensor in distributed settings. Second, the low data dependence of SGD_Tucker enables fine-grained parallelization, which allows SGD_Tucker to achieve lower computational overhead at the same accuracy. Experimental results show that SGD_Tucker runs at least 2X faster than the state of the art.
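To make the sampling idea concrete, below is a minimal sketch (not the authors' implementation) of stochastic-gradient sparse Tucker factorization for a third-order tensor: each update touches only one randomly sampled observed entry and the corresponding factor rows, so no full Khatri-Rao or Kronecker intermediate is ever materialized. The function name sgd_sparse_tucker and all parameters (lr, reg, epochs, and so on) are illustrative assumptions, and the element-wise updates here are a simplification of the batched scheme described for SGD_Tucker.

```python
# Rough sketch of element-wise SGD for sparse Tucker decomposition of a
# third-order tensor X ~ G x_1 A x_2 B x_3 C (assumes only NumPy).
import numpy as np

def sgd_sparse_tucker(entries, shape, ranks, lr=0.01, reg=0.01,
                      epochs=20, seed=0):
    """entries: iterable of (i, j, k, value) tuples for the observed elements."""
    rng = np.random.default_rng(seed)
    I, J, K = shape
    R1, R2, R3 = ranks
    # Factor matrices and core tensor with small random initialization.
    A = 0.1 * rng.standard_normal((I, R1))
    B = 0.1 * rng.standard_normal((J, R2))
    C = 0.1 * rng.standard_normal((K, R3))
    G = 0.1 * rng.standard_normal((R1, R2, R3))

    entries = list(entries)
    for _ in range(epochs):
        # Visit the observed entries in a fresh random order each epoch.
        for t in rng.permutation(len(entries)):
            i, j, k, x = entries[t]
            # Copy the rows so all gradients use the pre-update values.
            a, b, c = A[i].copy(), B[j].copy(), C[k].copy()
            # Prediction for this single entry: x_hat = G x_1 a x_2 b x_3 c.
            Gc = np.einsum('pqr,r->pq', G, c)    # contract core with the C-row
            x_hat = a @ Gc @ b
            e = x_hat - x                        # residual for the sampled entry
            # Gradient steps on 0.5*e^2 plus L2 regularization, per block.
            A[i] -= lr * (e * (Gc @ b) + reg * a)
            B[j] -= lr * (e * (Gc.T @ a) + reg * b)
            C[k] -= lr * (e * np.einsum('pqr,p,q->r', G, a, b) + reg * c)
            G    -= lr * (e * np.einsum('p,q,r->pqr', a, b, c) + reg * G)
    return G, A, B, C
```

Each epoch touches only the observed entries; a mini-batch variant in the spirit of SGD_Tucker would average such per-entry gradients over a small sampled batch before applying them.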
