Emergent Mind

Improved Iteration Complexities for Overconstrained $p$-Norm Regression

(arXiv:2111.01848)
Published Nov 2, 2021 in cs.DS and math.OC

Abstract

In this paper we obtain improved iteration complexities for solving $\ell_p$ regression. We provide methods which, given any full-rank $\mathbf{A} \in \mathbb{R}^{n \times d}$ with $n \geq d$, $b \in \mathbb{R}^n$, and $p \geq 2$, solve $\min_{x \in \mathbb{R}^d} \left\|\mathbf{A} x - b\right\|_p$ to high precision in time dominated by that of solving $\widetilde{O}_p(d^{\frac{p-2}{3p-2}})$ linear systems in $\mathbf{A}^\top \mathbf{D} \mathbf{A}$ for positive diagonal matrices $\mathbf{D}$. This improves upon the previous best iteration complexity of $\widetilde{O}_p(n^{\frac{p-2}{3p-2}})$ (Adil, Kyng, Peng, Sachdeva 2019). As a corollary, we obtain an $\widetilde{O}(d^{1/3}\epsilon^{-2/3})$ iteration complexity for approximate $\ell_\infty$ regression. Further, for $q \in (1, 2]$ and dual norm $p = q/(q-1)$ we provide an algorithm that solves $\ell_q$ regression in $\widetilde{O}(d^{\frac{p-2}{2p-2}})$ iterations. To obtain this result we analyze row reweightings (closely inspired by $\ell_p$-norm Lewis weights) which allow a closer connection between $\ell_2$ and $\ell_p$ regression. We provide adaptations of two different iterative optimization frameworks which leverage this connection and yield our results. The first framework is based on iterative refinement and multiplicative-weights-based width reduction, and the second is based on highly smooth acceleration. Both approaches yield $\widetilde{O}_p(d^{\frac{p-2}{3p-2}})$-iteration methods, but the second has a polynomial dependence on $p$ (as opposed to the exponential dependence of the first algorithm) and provides a new alternative to the previous state-of-the-art methods for $\ell_p$ regression for large $p$.
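The complexities above count iterations by the number of linear-system solves in $\mathbf{A}^\top \mathbf{D} \mathbf{A}$ for positive diagonal $\mathbf{D}$. A minimal iteratively reweighted least squares (IRLS) sketch in Python/NumPy illustrates that primitive; the function name, the damping factor, and the fixed iteration count are illustrative assumptions, and this simple loop is not the paper's accelerated method and carries none of its guarantees.

```python
import numpy as np

def irls_lp_regression(A, b, p, iters=200, eps=1e-12):
    """Illustrative sketch (not the paper's algorithm): iteratively
    reweighted least squares for min_x ||Ax - b||_p with p >= 2.
    Each pass solves one linear system in A^T D A, the primitive
    whose count the paper's iteration complexities bound."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # ell_2 warm start
    alpha = 1.0 / (p - 1)                     # damping, assumed for stability
    for _ in range(iters):
        r = A @ x - b
        d = np.maximum(np.abs(r), eps) ** (p - 2)  # diagonal of D
        AtDA = A.T @ (d[:, None] * A)              # A^T D A
        x_new = np.linalg.solve(AtDA, A.T @ (d * b))
        x = (1 - alpha) * x + alpha * x_new
    return x
```

Each pass costs one solve in $\mathbf{A}^\top \mathbf{D} \mathbf{A}$; the paper's contribution is not cheaper individual solves but a bound of $\widetilde{O}_p(d^{\frac{p-2}{3p-2}})$ on how many such solves suffice.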
