On Sketching the $q$ to $p$ norms (1806.06429v1)

Published 17 Jun 2018 in cs.DS and cs.CC

Abstract: We initiate the study of data dimensionality reduction, or sketching, for the $q \to p$ norms. Given an $n \times d$ matrix $A$, the $q \to p$ norm, denoted $\|A\|_{q \to p} = \sup_{x \in \mathbb{R}^d \setminus \vec{0}} \frac{\|Ax\|_p}{\|x\|_q}$, is a natural generalization of several matrix and vector norms studied in the data stream and sketching models, with applications to data mining, hardness of approximation, and oblivious routing. We say a distribution $S$ on random matrices $L : \mathbb{R}^{nd} \rightarrow \mathbb{R}^k$ is a $(k,\alpha)$-sketching family if, from $L(A)$, one can approximate $\|A\|_{q \to p}$ up to a factor $\alpha$ with constant probability. We provide upper and lower bounds on the sketching dimension $k$ for every $p, q \in [1, \infty]$, and in a number of cases our bounds are tight. While we mostly focus on constant $\alpha$, we also consider large approximation factors $\alpha$, as well as other variants of the problem, such as the case when $A$ has low rank.
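
For intuition about the definition, a minimal numerical sketch is shown below (assuming NumPy; the helper `q_to_p_norm_lower_bound` is illustrative and not from the paper). Random search over directions gives a lower bound on $\|A\|_{q \to p}$, and two special cases with closed forms serve as sanity checks: $\|A\|_{2 \to 2}$ is the largest singular value of $A$, and $\|A\|_{1 \to p}$ is the largest $\ell_p$ norm of a column of $A$.

```python
import numpy as np

def q_to_p_norm_lower_bound(A, q, p, trials=20000, seed=0):
    """Random-search lower bound on ||A||_{q->p} = sup_x ||Ax||_p / ||x||_q.

    The ratio is invariant under scaling of x, so sampling random
    directions suffices. This is for intuition only: the supremum is
    hard to compute exactly for general (q, p).
    """
    rng = np.random.default_rng(seed)
    _, d = A.shape
    best = 0.0
    for _ in range(trials):
        x = rng.standard_normal(d)
        best = max(best, np.linalg.norm(A @ x, ord=p) / np.linalg.norm(x, ord=q))
    return best

A = np.array([[3.0, 0.0],
              [4.0, 0.0]])

# Sanity checks against the known special cases:
print(np.linalg.norm(A, ord=2))          # ||A||_{2->2} = largest singular value = 5.0
print(max(np.linalg.norm(A[:, j], 2)     # ||A||_{1->2} = largest l2 column norm = 5.0
          for j in range(A.shape[1])))
print(q_to_p_norm_lower_bound(A, 2, 2))  # approaches 5.0 from below
```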

Citations (7)
