Flows in Almost Linear Time via Adaptive Preconditioning

(arXiv:1906.10340)
Published Jun 25, 2019 in cs.DS

Abstract

We present algorithms for solving a large class of flow and regression problems on unit-weighted graphs to $(1 + 1/\mathrm{poly}(n))$ accuracy in almost-linear time. These problems include $\ell_p$-norm minimizing flow for large $p$ ($p \in [\omega(1), o(\log^{2/3} n)]$), and their duals, $\ell_p$-norm semi-supervised learning for $p$ close to $1$. As $p$ tends to infinity, $\ell_p$-norm flow and its dual tend to max-flow and min-cut, respectively. Using this connection and our algorithms, we give an alternate approach for approximating undirected max-flow, and the first almost-linear time approximations of discretizations of total variation minimization objectives. This algorithm demonstrates that many tools previously viewed as limited to linear systems are in fact applicable to a much wider range of convex objectives. It is based on the routing-based solver for Laplacian linear systems by Spielman and Teng (STOC '04, SIMAX '14), but requires several new tools: adaptive non-linear preconditioning, tree-routing based ultra-sparsification for mixed $\ell_2$ and $\ell_p$ norm objectives, and decomposing graphs into uniform expanders.
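
For concreteness, the primal $\ell_p$-norm flow problem referenced above can be written as follows. This is a sketch in standard notation rather than the paper's own formulation: $B$ denotes the edge-vertex incidence matrix of the graph and $d$ a demand vector, labels chosen here for illustration.

$$\min_{f \in \mathbb{R}^{E}} \; \|f\|_p = \Big( \sum_{e \in E} |f_e|^p \Big)^{1/p} \quad \text{subject to} \quad B^{\top} f = d.$$

As $p \to \infty$ the objective approaches $\|f\|_\infty$, the maximum congestion placed on any edge, so minimizing it yields a minimum-congestion routing of the demands; for a unit $s$-$t$ demand this recovers undirected max-flow after rescaling, and the dual problem correspondingly tends to min-cut.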
