
A unified framework for linear dimensionality reduction in L1 (1405.1332v5)

Published 6 May 2014 in cs.DS, cs.NA, math.MG, and math.PR

Abstract: For a family of interpolation norms $\| \cdot \|_{1,2,s}$ on $\mathbb{R}^n$, we provide a distribution over random matrices $\Phi_s \in \mathbb{R}^{m \times n}$ parametrized by sparsity level $s$ such that for a fixed set $X$ of $K$ points in $\mathbb{R}^n$, if $m \geq C s \log(K)$ then with high probability, $\frac{1}{2} \| x \|_{1,2,s} \leq \| \Phi_s(x) \|_1 \leq 2 \| x \|_{1,2,s}$ for all $x \in X$. Several existing results in the literature reduce to special cases of this result at different values of $s$: for $s = n$, $\| x \|_{1,2,n} \equiv \| x \|_1$ and we recover that dimension-reducing linear maps can preserve the $\ell_1$-norm up to a distortion proportional to the dimension reduction factor, which is known to be the best possible such result. For $s = 1$, $\| x \|_{1,2,1} \equiv \| x \|_2$, and we recover an $\ell_2 / \ell_1$ variant of the Johnson-Lindenstrauss Lemma for Gaussian random matrices. Finally, if $x$ is $s$-sparse, then $\| x \|_{1,2,s} = \| x \|_1$ and we recover that $s$-sparse vectors in $\ell_1^n$ embed into $\ell_1^{\mathcal{O}(s \log(n))}$ via sparse random matrix constructions.
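The $s = 1$ (Gaussian) case can be checked empirically: if $\Phi$ has i.i.d. $N(0,1)$ entries scaled by $1/(m\sqrt{2/\pi})$, then each coordinate of $\Phi x$ is Gaussian with standard deviation proportional to $\| x \|_2$, so $\mathbb{E}\| \Phi x \|_1 = \| x \|_2$ and $\| \Phi x \|_1$ concentrates around $\| x \|_2$. The sketch below is illustrative only; the dimensions $n$, $m$, and $K$ are arbitrary choices, not the paper's constants.

```python
import numpy as np

rng = np.random.default_rng(0)

n, K = 1000, 50   # ambient dimension, number of points (illustrative values)
m = 200           # embedding dimension; the theorem needs m >= C * s * log(K)

# Gaussian case (s = 1): for g ~ N(0, 1), E|g| = sqrt(2/pi), so scaling by
# 1/(m * sqrt(2/pi)) makes E||Phi x||_1 equal to ||x||_2.
Phi = rng.standard_normal((m, n)) / (m * np.sqrt(2 / np.pi))

X = rng.standard_normal((n, K))  # K random points in R^n, as columns

# Ratio ||Phi x||_1 / ||x||_2 for each point; should lie in [1/2, 2].
ratios = np.abs(Phi @ X).sum(axis=0) / np.linalg.norm(X, axis=0)

print(f"min ratio: {ratios.min():.3f}, max ratio: {ratios.max():.3f}")
assert np.all(ratios >= 0.5) and np.all(ratios <= 2.0)
```

With these parameters the observed ratios cluster tightly around 1, comfortably inside the distortion-2 window guaranteed by the theorem.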

Authors (2)
  1. Felix Krahmer (68 papers)
  2. Rachel Ward (80 papers)
Citations (13)