Abstract

We consider solving the low-rank matrix sensing problem with the Factorized Gradient Descent (FGD) method when the true rank is unknown and over-specified, which we refer to as over-parameterized matrix sensing. If the ground truth signal $\mathbf{X}^* \in \mathbb{R}^{d \times d}$ is of rank $r$, but we try to recover it using $\mathbf{F} \mathbf{F}^\top$ where $\mathbf{F} \in \mathbb{R}^{d \times k}$ and $k > r$, the existing statistical analysis falls short, due to a flat local curvature of the loss function around the global minima. By decomposing the factorized matrix $\mathbf{F}$ into separate column spaces to capture the effect of the extra ranks, we show that $\|\mathbf{F}_t \mathbf{F}_t^\top - \mathbf{X}^*\|_{F}^2$ converges to a statistical error of $\tilde{\mathcal{O}}(k d \sigma^2 / n)$ after $\tilde{\mathcal{O}}(\frac{\sigma_{r}}{\sigma}\sqrt{\frac{n}{d}})$ iterations, where $\mathbf{F}_t$ is the output of FGD after $t$ iterations, $\sigma^2$ is the variance of the observation noise, $\sigma_{r}$ is the $r$-th largest eigenvalue of $\mathbf{X}^*$, and $n$ is the number of samples. Our results therefore offer a comprehensive picture of the statistical and computational complexity of FGD for the over-parameterized matrix sensing problem.
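To make the setup concrete, here is a minimal NumPy sketch (not the paper's code) of over-parameterized FGD on synthetic data: a rank-$r$ ground truth $\mathbf{X}^* = \mathbf{U}\mathbf{U}^\top$, Gaussian sensing matrices $\mathbf{A}_i$, noisy measurements $y_i = \langle \mathbf{A}_i, \mathbf{X}^* \rangle + \xi_i$, and gradient descent on the factor $\mathbf{F} \in \mathbb{R}^{d \times k}$ with $k > r$. All dimensions, the initialization scale, the step size, and the iteration count are illustrative choices, not the paper's prescriptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k, n = 20, 2, 4, 2000   # dimension, true rank, over-specified rank k > r, samples
sigma = 0.01                   # standard deviation of the observation noise

# Rank-r ground truth X* = U U^T (symmetric PSD)
U = rng.standard_normal((d, r))
X_star = U @ U.T

# Gaussian sensing matrices A_i and noisy linear measurements y_i = <A_i, X*> + noise
A = rng.standard_normal((n, d, d))
y = np.einsum('nij,ij->n', A, X_star) + sigma * rng.standard_normal(n)

# FGD on the over-parameterized factor F in R^{d x k}, minimizing
# f(F) = (1/2n) * sum_i (<A_i, F F^T> - y_i)^2
F = 0.01 * rng.standard_normal((d, k))   # small random initialization
eta = 0.1 / np.linalg.norm(X_star, 2)    # step size scaled by 1 / sigma_1(X*)
for t in range(1000):
    resid = np.einsum('nij,ij->n', A, F @ F.T) - y   # residuals <A_i, F F^T> - y_i
    G = np.einsum('n,nij->ij', resid, A) / n          # (1/n) * sum_i resid_i * A_i
    F -= eta * (G + G.T) @ F                          # gradient of f with respect to F

err = np.linalg.norm(F @ F.T - X_star, 'fro') ** 2
print(err)
```

Even though $k > r$, the iterates drive $\|\mathbf{F}_t \mathbf{F}_t^\top - \mathbf{X}^*\|_F^2$ down to a small error on this toy instance; the flat curvature in the extra $k - r$ directions is what makes the final phase of convergence slow in the regime the paper analyzes.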
