
Restricted Low-Rank Approximation via ADMM

(1512.01748)
Published Dec 6, 2015 in cs.NA and cs.DS

Abstract

The matrix low-rank approximation problem with additional convex constraints arises in many applications and has been studied extensively. However, the problem is nonconvex and NP-hard, and most existing solutions are heuristic and application-dependent. In this paper, we show that, beyond its many applications in the current literature, this problem can be used to recover a feasible solution from an SDP relaxation. Through a suitable reformulation, it can be posed in a form amenable to the Alternating Direction Method of Multipliers (ADMM). The two ADMM updates consist of a basic matrix low-rank approximation and a projection onto a convex set. Unlike general non-convex problems, the sub-problems at each ADMM step can be solved exactly and efficiently despite their non-convexity. Moreover, the algorithm converges exponentially under appropriate conditions. Simulation results confirm its superiority over existing solutions. We believe the results in this paper provide a useful tool for this important problem and will help extend the application of ADMM to the non-convex regime.
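As a rough illustration of the two-step structure the abstract describes, here is a minimal ADMM sketch for a restricted low-rank approximation of the form min_X ||A - X||_F^2 subject to rank(X) ≤ r and X in a convex set C. The specific objective, the penalty parameter rho, the stopping rule, and the choice of C (an entrywise box, for illustration) are assumptions, not details taken from the paper; only the overall splitting into an exact low-rank sub-problem (truncated SVD) and a convex projection follows the abstract.

```python
import numpy as np

def truncated_svd(M, r):
    """Best rank-r approximation of M (Eckart-Young): the non-convex
    low-rank sub-problem is solved exactly via the SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

def project_box(M, lo=0.0, hi=1.0):
    """Euclidean projection onto an example convex set C:
    the entrywise box [lo, hi]. Any other convex projection could be used."""
    return np.clip(M, lo, hi)

def admm_restricted_lowrank(A, r, rho=1.0, n_iter=200, project=project_box):
    """ADMM sketch (assumed formulation, not the paper's exact one) for
        min_X ||A - X||_F^2   s.t.  rank(X) <= r,  X in C,
    using the splitting X = Z with scaled dual variable U."""
    X = A.copy()
    Z = project(A)
    U = np.zeros_like(A)
    for _ in range(n_iter):
        # X-update: rank-r approximation of a weighted combination of A and Z - U,
        # since ||A - X||^2 + (rho/2)||X - Z + U||^2 = (1 + rho/2)||X - M||^2 + const.
        M = (2.0 * A + rho * (Z - U)) / (2.0 + rho)
        X = truncated_svd(M, r)
        # Z-update: projection of X + U onto the convex set C
        Z = project(X + U)
        # Dual (scaled multiplier) update
        U = U + X - Z
    return X, Z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = np.clip(rng.random((30, 20)) @ rng.random((20, 20)), 0.0, 1.0)
    X, Z = admm_restricted_lowrank(A, r=5)
    print("rank(X) =", np.linalg.matrix_rank(X),
          " residual =", np.linalg.norm(A - Z))
```

In this sketch both sub-problems are solved in closed form, which mirrors the abstract's point that the non-convex low-rank step admits an exact solution (via the SVD) even though the overall problem is non-convex.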
