
Leverage Score Sampling for Faster Accelerated Regression and ERM

Published 22 Nov 2017 in stat.ML, cs.LG, and math.OC | (1711.08426v1)

Abstract: Given a matrix $\mathbf{A}\in\mathbb{R}^{n\times d}$ and a vector $b \in\mathbb{R}^{n}$, we show how to compute an $\epsilon$-approximate solution to the regression problem $\min_{x\in\mathbb{R}^{d}}\frac{1}{2}\|\mathbf{A} x - b\|_{2}^{2}$ in time $\tilde{O}((n+\sqrt{d\cdot\kappa_{\text{sum}}})\cdot s\cdot\log\epsilon^{-1})$, where $\kappa_{\text{sum}}=\mathrm{tr}(\mathbf{A}^{\top}\mathbf{A})/\lambda_{\min}(\mathbf{A}^{\top}\mathbf{A})$ and $s$ is the maximum number of non-zero entries in a row of $\mathbf{A}$. Our algorithm improves upon the previous best running time of $\tilde{O}((n+\sqrt{n\cdot\kappa_{\text{sum}}})\cdot s\cdot\log\epsilon^{-1})$. We achieve our result through a careful combination of leverage score sampling techniques, proximal point methods, and accelerated coordinate descent. Our method not only matches the performance of previous methods, but further improves whenever leverage scores of rows are small (up to polylogarithmic factors). We also provide a non-linear generalization of these results that improves the running time for solving a broader class of ERM problems.
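The leverage score of row $i$ of $\mathbf{A}$ is $\tau_i = a_i^{\top}(\mathbf{A}^{\top}\mathbf{A})^{-1}a_i$, and sampling rows proportional to these scores yields a smaller reweighted regression instance that preserves the least-squares objective well. The sketch below illustrates this sampling primitive only, not the paper's full algorithm (which combines fast approximate scores with proximal point methods and accelerated coordinate descent); the function names and the exact QR-based score computation are illustrative choices, not the paper's.

```python
import numpy as np

def leverage_scores(A):
    # Exact leverage scores via a thin QR factorization A = QR:
    # tau_i = ||Q[i, :]||^2. (The paper relies on fast approximate
    # scores; this exact O(nd^2) version is for illustration only.)
    Q, _ = np.linalg.qr(A)
    return np.sum(Q**2, axis=1)

def sample_rows(A, b, m, seed=None):
    # Sample m rows of (A, b) with probability proportional to their
    # leverage scores, reweighting rows so that the sampled problem
    # approximates the original least-squares objective in expectation.
    rng = np.random.default_rng(seed)
    tau = leverage_scores(A)
    p = tau / tau.sum()                 # note: sum(tau) = rank(A)
    idx = rng.choice(len(p), size=m, p=p)
    w = 1.0 / np.sqrt(m * p[idx])       # standard importance reweighting
    return A[idx] * w[:, None], b[idx] * w

# Example: solve a tall least-squares problem on a sampled instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20))
b = rng.standard_normal(1000)
As, bs = sample_rows(A, b, m=200, seed=1)
x_samp, *_ = np.linalg.lstsq(As, bs, rcond=None)
```

A useful sanity check is that the leverage scores sum to the rank of $\mathbf{A}$ (here $d=20$), so rows with score near $d/n$ carry "average" importance and rows with larger score are sampled more often.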

Citations (21)
