Abstract

We describe how the low-rank structure in an SDP can be exploited to reduce the per-iteration cost of a convex primal-dual interior-point method down to $O(n^{3})$ time and $O(n^{2})$ memory, even at very high accuracies. A traditional difficulty is the dense Newton subproblem at each iteration, which becomes progressively ill-conditioned as progress is made towards the solution. Preconditioners have previously been proposed to improve conditioning, but these can be expensive to set up, and become ineffective as the preconditioner itself becomes increasingly ill-conditioned at high accuracies. Instead, we present a \emph{well-conditioned reformulation} of the Newton subproblem that is cheap to set up, and whose condition number is guaranteed to remain bounded over all iterations of the interior-point method. In theory, applying an inner iterative method to the reformulation reduces the per-iteration cost of the outer interior-point method to $O(n^{3})$ time and $O(n^{2})$ memory. We also present a \emph{well-conditioned preconditioner} that theoretically increases the outer per-iteration cost to $O(n^{3}r^{3})$ time and $O(n^{2}r^{2})$ memory, where $r$ is an upper bound on the solution rank, but in practice greatly improves the convergence of the inner iterations.
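To make the inner-outer structure concrete, here is a minimal Python sketch of the general technique the abstract describes: replacing a direct factorization of a Newton-type system with a matrix-free preconditioned conjugate-gradient inner solve. The stand-in operator $H = \mathrm{diag}(d) + UU^{T}$, the diagonal preconditioner, and all names below are illustrative assumptions, not the paper's actual reformulation or preconditioner; the point is only that each inner iteration costs one cheap matrix-vector product, so a bounded condition number translates into a bounded inner iteration count across outer interior-point iterations.

```python
# Minimal sketch (illustrative, not the paper's method): solve H x = g with
# conjugate gradients, applying H matrix-free so the dense n-by-n matrix is
# never formed or factored. H = diag(d) + U @ U.T is a stand-in for a
# well-conditioned system with rank-r structure.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, r = 500, 5                        # problem size; r bounds the solution rank
U = rng.standard_normal((n, r))      # low-rank factor (assumed structure)
d = 1.0 + rng.random(n)              # diagonal part, bounded away from zero

def apply_H(x):
    # Matrix-vector product with H = diag(d) + U U^T in O(n r) time.
    return d * x + U @ (U.T @ x)

H = LinearOperator((n, n), matvec=apply_H)
M = LinearOperator((n, n), matvec=lambda x: x / d)  # cheap stand-in preconditioner
g = rng.standard_normal(n)

# Each CG iteration costs one matvec; if H stays well-conditioned over the
# outer iterations, the inner iteration count stays bounded as well.
x, info = cg(H, g, M=M)
print("converged" if info == 0 else "failed",
      "| residual:", np.linalg.norm(apply_H(x) - g))
```

Under these assumptions the trade-off mirrors the abstract: forming and factoring the dense Newton matrix would cost $O(n^{3})$ time and $O(n^{2})$ memory per solve and degrade as conditioning worsens, whereas the matrix-free inner solve keeps per-iteration cost proportional to the matvec cost, provided the reformulated system's condition number remains bounded.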
