
Subspace Embedding and Linear Regression with Orlicz Norm

(arXiv:1806.06430)
Published Jun 17, 2018 in cs.DS and cs.LG

Abstract

We consider a generalization of the classic linear regression problem to the case when the loss is an Orlicz norm. An Orlicz norm is parameterized by a non-negative convex function $G:\mathbb{R}_+\rightarrow\mathbb{R}_+$ with $G(0)=0$: the Orlicz norm of a vector $x\in\mathbb{R}^n$ is defined as $\|x\|_G=\inf\left\{\alpha>0 \mid \sum_{i=1}^{n} G(|x_i|/\alpha)\leq 1\right\}$. We consider the cases where the function $G(\cdot)$ grows subquadratically. Our main result is based on a new oblivious embedding which embeds the column space of a given matrix $A\in\mathbb{R}^{n\times d}$ with Orlicz norm into a lower dimensional space with $\ell_2$ norm. Specifically, we show how to efficiently find an embedding matrix $S\in\mathbb{R}^{m\times n}$, $m<n$, such that for all $x\in\mathbb{R}^{d}$, $\Omega(1/(d\log n)) \cdot \|Ax\|_G \leq \|SAx\|_2 \leq O(d^2\log n) \cdot \|Ax\|_G$. By applying this subspace embedding technique, we give an approximation algorithm for the regression problem $\min_{x\in\mathbb{R}^d} \|Ax-b\|_G$, up to an $O(d\log^2 n)$ factor. As a further application of our techniques, we show how they can be used to improve the algorithm for the $\ell_p$ low rank matrix approximation problem for $1\leq p<2$.
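
To make the definition concrete, here is a minimal Python sketch (ours, not from the paper) that evaluates $\|x\|_G$ by bisection on $\alpha$, using the fact that $F(\alpha)=\sum_{i=1}^{n} G(|x_i|/\alpha)$ is non-increasing in $\alpha$. The Huber-like choice of $G$ below is a hypothetical example of a subquadratically growing Orlicz function, not the paper's.

```python
import numpy as np

def G(t):
    """Example Orlicz function (Huber-like): quadratic near zero, linear
    beyond 1. A hypothetical subquadratic choice, not the paper's."""
    return np.where(t <= 1.0, t ** 2, 2.0 * t - 1.0)

def orlicz_norm(x, G, tol=1e-9, max_iter=200):
    """Evaluate ||x||_G = inf{alpha > 0 : sum_i G(|x_i|/alpha) <= 1}
    by bisection; F(alpha) = sum_i G(|x_i|/alpha) decreases in alpha."""
    a = np.abs(np.asarray(x, dtype=float))
    if not np.any(a):
        return 0.0
    lo, hi = 1e-12, float(np.sum(a)) + 1.0
    while np.sum(G(a / hi)) > 1.0:   # grow hi until F(hi) <= 1
        hi *= 2.0
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        if np.sum(G(a / mid)) > 1.0:
            lo = mid                 # F(mid) > 1: infimum lies above mid
        else:
            hi = mid                 # F(mid) <= 1: mid is feasible
        if hi - lo < tol:
            break
    return hi

print(orlicz_norm(np.array([3.0, -1.0, 0.5]), G))
```

Note that with $G(t)=t^p$ for $1\leq p<2$ this recovers the $\ell_p$ norm, which is why Orlicz norms generalize the usual regression losses.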
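
For intuition on how such an embedding is used, the following sketch shows the generic reduction from Orlicz-norm regression to $\ell_2$ least squares: embed with $S$, then solve $\min_x \|SAx - Sb\|_2$ in closed form. The dense Gaussian $S$ below is only a stand-in, since the abstract does not spell out the paper's oblivious construction; the distortion bounds above apply only to that construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 1000, 10, 60              # m < n: sketch dimension (assumed)
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Placeholder oblivious embedding: a Gaussian sketch standing in for the
# paper's Orlicz-norm embedding S in R^{m x n}.
S = rng.standard_normal((m, n)) / np.sqrt(m)

# The reduction: instead of min_x ||Ax - b||_G, solve the small
# least-squares problem min_x ||SAx - Sb||_2.
x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)

print("sketched l2 residual:", np.linalg.norm(S @ (A @ x_hat - b)))
```

Per the abstract, this style of reduction yields an $O(d\log^2 n)$-approximation for the Orlicz-norm regression objective when the paper's embedding is used in place of the Gaussian stand-in.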
