On Sketching the $q$ to $p$ norms

(1806.06429)
Published Jun 17, 2018 in cs.DS and cs.CC

Abstract

We initiate the study of data dimensionality reduction, or sketching, for the $q \to p$ norms. Given an $n \times d$ matrix $A$, the $q \to p$ norm, denoted $\|A\|_{q \to p} = \sup_{x \in \mathbb{R}^d \setminus \{\vec{0}\}} \frac{\|Ax\|_p}{\|x\|_q}$, is a natural generalization of several matrix and vector norms studied in the data stream and sketching models, with applications to data mining, hardness of approximation, and oblivious routing. We say a distribution $S$ on random matrices $L : \mathbb{R}^{nd} \rightarrow \mathbb{R}^k$ is a $(k,\alpha)$-sketching family if from $L(A)$, one can approximate $\|A\|_{q \to p}$ up to a factor $\alpha$ with constant probability. We provide upper and lower bounds on the sketching dimension $k$ for every $p, q \in [1, \infty]$, and in a number of cases our bounds are tight. While we mostly focus on constant $\alpha$, we also consider large approximation factors $\alpha$, as well as other variants of the problem, such as when $A$ has low rank.
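
To make the central definition concrete, here is a minimal numeric sketch (not the paper's sketching algorithm) that estimates $\|A\|_{q \to p}$ by sampling random directions. The function name `q_to_p_norm_estimate` and all parameter choices are illustrative assumptions; since the supremum ranges over all nonzero $x$ and exact computation is intractable for general $(q, p)$, sampling only yields a lower bound.

```python
import numpy as np

def q_to_p_norm_estimate(A, q, p, n_samples=100_000, seed=0):
    """Monte Carlo lower bound on ||A||_{q->p} = sup_{x != 0} ||Ax||_p / ||x||_q.

    Illustrative only: sampling random directions lower-bounds the
    supremum; exact computation is hard for general (q, p).
    """
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    X = rng.standard_normal((n_samples, d))        # random nonzero x in R^d
    num = np.linalg.norm(X @ A.T, ord=p, axis=1)   # ||A x||_p per sample
    den = np.linalg.norm(X, ord=q, axis=1)         # ||x||_q per sample
    return float(np.max(num / den))

# Sanity check: for q = p = 2 the q->p norm is the ordinary spectral norm.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(q_to_p_norm_estimate(A, q=2, p=2))  # close to the exact value below
print(np.linalg.norm(A, ord=2))           # exact spectral norm, ~5.4650
```

Random directions suffice as a sanity check on tiny matrices; the quality of the lower bound degrades as the dimension grows, so this is not a substitute for the sketching guarantees studied in the paper.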
