
A Fast Parallel Tensor Decomposition with Optimal Stochastic Gradient Descent: an Application in Structural Damage Identification (2111.02632v1)

Published 4 Nov 2021 in cs.LG

Abstract: Structural Health Monitoring (SHM) provides an economical approach to understanding the behavior of structures by continuously collecting data through multiple networked sensors attached to the structure. This data is then used to gain insight into the health of a structure and to make timely and economical decisions about its maintenance. The generated SHM sensing data is non-stationary and exists in a correlated multi-way form, which makes batch/offline learning and standard two-way matrix analysis unable to capture all of these correlations and relationships. In this sense, online tensor data analysis has become an essential tool for capturing the underlying structure of higher-order datasets stored in a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_N}$. The CANDECOMP/PARAFAC (CP) decomposition has been extensively studied and applied to approximate $\mathcal{X}$ by $N$ loading matrices $A^{(1)}, \dots, A^{(N)}$, where $N$ is the order of the tensor. We propose a novel algorithm, FP-CPD, to parallelize the CANDECOMP/PARAFAC (CP) decomposition of a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \dots \times I_N}$. Our approach is based on the stochastic gradient descent (SGD) algorithm, which allows us to parallelize the learning process; it is also well suited to the online setting since it updates $\mathcal{X}^{t+1}$ in one single step. Our SGD algorithm is augmented with Nesterov's Accelerated Gradient (NAG) and perturbation methods to accelerate and guarantee convergence. The experimental results using laboratory-based and real-life structural datasets indicate fast convergence and good scalability.
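The abstract describes the core idea behind FP-CPD: updating the CP factor matrices with stochastic gradient descent, accelerated by Nesterov momentum. As an illustration only, here is a minimal NumPy sketch of that idea for a dense 3-way tensor. The function name `fp_cpd_sgd`, the hyperparameters, and the entry-wise sampling scheme are assumptions made for exposition; the sketch omits the paper's perturbation term and the parallelization of updates, so it is not the authors' FP-CPD implementation.

```python
import numpy as np

def fp_cpd_sgd(X, rank, lr=0.05, momentum=0.9, epochs=100, seed=0):
    """Illustrative rank-R CP decomposition of a 3-way tensor via
    entry-wise SGD with Nesterov-style momentum (not the paper's code).
    Returns A, B, C with X[i,j,k] ~ sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank)) * 0.1
    B = rng.standard_normal((J, rank)) * 0.1
    C = rng.standard_normal((K, rank)) * 0.1
    vA, vB, vC = np.zeros_like(A), np.zeros_like(B), np.zeros_like(C)

    # All (i, j, k) index triples of the dense tensor, shuffled each epoch.
    entries = np.array(np.unravel_index(np.arange(X.size), X.shape)).T
    for _ in range(epochs):
        rng.shuffle(entries)
        for i, j, k in entries:
            # Nesterov look-ahead: evaluate the gradient at the
            # momentum-shifted rows rather than at the current rows.
            a = A[i] + momentum * vA[i]
            b = B[j] + momentum * vB[j]
            c = C[k] + momentum * vC[k]
            err = X[i, j, k] - np.sum(a * b * c)
            # Gradients of the squared entry error w.r.t. each factor row.
            gA, gB, gC = -err * b * c, -err * a * c, -err * a * b
            vA[i] = momentum * vA[i] - lr * gA
            vB[j] = momentum * vB[j] - lr * gB
            vC[k] = momentum * vC[k] - lr * gC
            A[i] += vA[i]
            B[j] += vB[j]
            C[k] += vC[k]
    return A, B, C
```

A quick way to sanity-check the sketch is to reconstruct a synthetic low-rank tensor:

```python
# Recover a synthetic rank-2 tensor from its own entries.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.random((5, 2)), rng.random((6, 2)), rng.random((4, 2))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = fp_cpd_sgd(X, rank=2, lr=0.05, epochs=200)
approx = np.einsum('ir,jr,kr->ijk', A, B, C)
# Relative reconstruction error; should shrink toward zero as SGD converges.
print(np.linalg.norm(X - approx) / np.linalg.norm(X))
```

Because each sampled entry touches only one row of each factor matrix, entries whose index triples are disjoint can be processed concurrently, which is what makes an SGD formulation like this amenable to the parallelization the paper targets.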
