Almost Ramanujan Expanders from Arbitrary Expanders via Operator Amplification (2209.07024v2)

Published 15 Sep 2022 in cs.DS, cs.DM, and math.CO

Abstract: We give an efficient algorithm that transforms any bounded degree expander graph into another that achieves almost optimal (namely, near-quadratic, $d \leq 1/\lambda^{2+o(1)}$) trade-off between (any desired) spectral expansion $\lambda$ and degree $d$. Furthermore, the algorithm is local: every vertex can compute its new neighbors as a subset of its original neighborhood of radius $O(\log(1/\lambda))$. The optimal quadratic trade-off is known as the Ramanujan bound, so our construction gives almost Ramanujan expanders from arbitrary expanders. The locality of the transformation preserves structural properties of the original graph, and thus has many consequences. Applied to Cayley graphs, our transformation shows that any expanding finite group has almost Ramanujan expanding generators. Similarly, one can obtain almost optimal explicit constructions of quantum expanders, dimension expanders, monotone expanders, etc., from existing (suboptimal) constructions of such objects. Another consequence is a "derandomized" random walk on the original (suboptimal) expander with almost optimal convergence rate. Our transformation also applies when the degree is not bounded or the expansion is not constant. We obtain our results by a generalization of Ta-Shma's technique in his breakthrough paper [STOC 2017], used to obtain explicit almost optimal binary codes. Specifically, our spectral amplification extends Ta-Shma's analysis of bias amplification from scalars to matrices of arbitrary dimension in a very natural way. Curiously, while Ta-Shma's explicit bias amplification derandomizes a well-known probabilistic argument (underlying the Gilbert--Varshamov bound), there seems to be no known probabilistic (or other existential) way of achieving our explicit ("high-dimensional") spectral amplification.
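To make the degree/expansion trade-off in the abstract concrete, the sketch below (not from the paper; the graph size, degree, and use of networkx are illustrative choices) numerically estimates the spectral expansion $\lambda$ of a random $d$-regular graph and compares it with the Ramanujan bound $2\sqrt{d-1}/d$, the quadratic trade-off $d \approx 1/\lambda^2$ that the paper's transformation nearly achieves starting from an arbitrary expander.

```python
# Minimal sketch: estimate the spectral expansion of a random d-regular graph
# and compare it to the Ramanujan bound 2*sqrt(d-1)/d. Parameters n and d are
# arbitrary illustrative choices, not values from the paper.
import numpy as np
import networkx as nx

n, d = 512, 8                              # n*d must be even for a d-regular graph
G = nx.random_regular_graph(d, n, seed=0)  # simple d-regular graph on n vertices

A = nx.to_numpy_array(G) / d               # normalized (random-walk) adjacency matrix
eigs = np.linalg.eigvalsh(A)               # real symmetric, eigenvalues in ascending order

# Spectral expansion: largest nontrivial eigenvalue in absolute value
# (eigs[-1] = 1 is the trivial eigenvalue of the normalized adjacency matrix).
lam = max(abs(eigs[0]), abs(eigs[-2]))

ramanujan = 2 * np.sqrt(d - 1) / d         # Ramanujan bound on lambda at degree d
print(f"spectral expansion lambda ~ {lam:.4f}")
print(f"Ramanujan bound 2*sqrt(d-1)/d = {ramanujan:.4f}")
print(f"degree at the quadratic trade-off for this lambda: ~{1 / lam**2:.1f}")
```

Random regular graphs are known to be nearly Ramanujan with high probability, so the measured $\lambda$ should land close to the bound; the paper's contribution is an explicit, local transformation that gets any given expander close to this trade-off, which the sketch does not attempt to implement.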
