Abstract

We study the following lesser-known low rank (LR) recovery problem: recover an $n \times q$ rank-$r$ matrix, $X^* = [x^*_1, x^*_2, \dots, x^*_q]$, with $r \ll \min(n,q)$, from $m$ independent linear projections of each of its $q$ columns, i.e., from $y_k := A_k x^*_k$, $k \in [q]$, where each $y_k$ is an $m$-length vector with $m < n$. The matrices $A_k$ are known and mutually independent for different $k$. We introduce a novel gradient descent (GD) based solution called AltGD-Min. We show that, if the $A_k$'s are i.i.d. with i.i.d. Gaussian entries, and if the right singular vectors of $X^*$ satisfy the incoherence assumption, then $\epsilon$-accurate recovery of $X^*$ is possible with order $(n+q) r^2 \log(1/\epsilon)$ total samples and order $mqnr \log(1/\epsilon)$ time. Compared with existing work, this is the fastest solution. For $\epsilon < 1/r^{1/4}$, it also has the best sample complexity. A simple extension of AltGD-Min also provably solves LR Phase Retrieval, which is a magnitude-only generalization of the above problem. AltGD-Min factorizes the unknown $X$ as $X = UB$, where $U$ and $B$ are matrices with $r$ columns and $r$ rows, respectively. It alternates between a (projected) GD step for updating $U$ and a minimization step for updating $B$. Each of its iterations is as fast as an iteration of regular projected GD because the minimization over $B$ decouples column-wise. At the same time, we can prove exponential error decay for it, which we are unable to do for projected GD. Finally, it can also be efficiently federated, with a communication cost of only $nr$ per node instead of $nq$ for projected GD.
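
To make the alternation concrete, below is a minimal NumPy sketch of the AltGD-Min iteration the abstract describes: a least-squares minimization over $B$ (which decouples column-wise) alternating with a projected gradient step on $U$. The function name `altgdmin`, the untruncated spectral initialization, and the step-size heuristic `eta` are our illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def altgdmin(ys, As, r, n_iters=50, eta=None):
    """Minimal sketch of AltGD-Min (names and defaults are illustrative).

    ys : list of q measurement vectors, each of length m  (y_k = A_k x*_k)
    As : list of q known m-by-n measurement matrices A_k
    r  : target rank of the n-by-q matrix X*
    """
    q = len(ys)
    m, n = As[0].shape

    # Spectral initialization: the paper initializes U from a truncated
    # version of this averaged estimator; the untruncated form is shown here.
    X0 = np.column_stack([As[k].T @ ys[k] for k in range(q)]) / m
    U = np.linalg.svd(X0, full_matrices=False)[0][:, :r]  # n x r, orthonormal

    if eta is None:
        # Heuristic step size (an assumption, not the paper's constant).
        eta = 0.5 / (m * np.linalg.norm(X0, 2) ** 2)

    for _ in range(n_iters):
        # Min step for B: q independent r-dimensional least-squares
        # problems, one per column -- this is the column-wise decoupling.
        B = np.column_stack(
            [np.linalg.lstsq(As[k] @ U, ys[k], rcond=None)[0] for k in range(q)]
        )  # r x q

        # GD step for U: gradient of sum_k ||y_k - A_k U b_k||^2
        # (the factor of 2 is absorbed into eta).
        grad = np.zeros((n, r))
        for k in range(q):
            resid = As[k] @ (U @ B[:, k]) - ys[k]  # m-vector
            grad += np.outer(As[k].T @ resid, B[:, k])

        # Projection: re-orthonormalize the columns of U via QR.
        U = np.linalg.qr(U - eta * grad)[0]

    # Recompute B once more so the returned factors are consistent.
    B = np.column_stack(
        [np.linalg.lstsq(As[k] @ U, ys[k], rcond=None)[0] for k in range(q)]
    )
    return U @ B  # estimate of X*
```

Because the $B$ step reduces to $q$ separate $r$-dimensional least-squares problems, an iteration costs roughly the same as one projected-GD step; and in a federated setting only the $n \times r$ factor $U$ (or its gradient) needs to be exchanged per node, which is the $nr$ versus $nq$ communication saving the abstract mentions.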
