
A strong direct product theorem in terms of the smooth rectangle bound (1209.0263v3)

Published 3 Sep 2012 in cs.CC

Abstract: A strong direct product theorem states that, in order to solve k instances of a problem, if we provide less than k times the resource required to compute one instance, then the probability of overall success is exponentially small in k. In this paper, we consider the model of two-way public-coin communication complexity and show a strong direct product theorem for all relations in terms of the smooth rectangle bound, introduced by Jain and Klauck as a generic lower bound method in this model. Our result therefore uniformly implies a strong direct product theorem for all relations for which an (asymptotically) optimal lower bound can be provided using the smooth rectangle bound, for example Inner Product, Greater-Than, Set-Disjointness, Gap-Hamming Distance, etc. Our result also implies near-optimal direct product results for several important functions and relations used to show exponential separations between classical and quantum communication complexity, for which near-optimal lower bounds are provided using the rectangle bound, for example by Raz [1999], Gavinsky [2008] and Klartag and Regev [2011]. In fact, we are not aware of any relation for which it is known that the smooth rectangle bound does not provide an optimal lower bound. The smooth rectangle bound subsumes many of the other lower bound methods, for example the rectangle bound (a.k.a. the corruption bound) and the smooth discrepancy bound (a.k.a. the \gamma_2 bound), which in turn subsumes the discrepancy bound, the subdistribution bound and the conditional min-entropy bound. We show our result using information-theoretic arguments. A key tool we use is a sampling protocol due to Braverman [2012], in fact a modification of it used by Kerenidis, Laplante, Lerays, Roland and Xiao [2012].
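The quantitative shape of the theorem described in the abstract can be sketched as follows. This is an illustrative form only: the notation (suc for success probability, srec for the smooth rectangle bound) and the asymptotic constants are placeholders, not the paper's exact statement.

```latex
% Illustrative shape of a strong direct product theorem.
% f      : the relation under consideration
% mu     : an input distribution; mu^{\otimes k} is k independent copies
% f^k    : the problem of solving k independent instances of f
% srec(f): the smooth rectangle bound, a communication lower bound for f
% suc(mu, f, c): the best success probability under mu of any
%                two-way public-coin protocol for f using c bits
%
% If the total communication budget is asymptotically less than
% k times the single-copy bound, the overall success probability
% is exponentially small in k:
\[
  \mathrm{suc}\bigl(\mu^{\otimes k},\, f^{k},\,
      o\!\left(k \cdot \mathrm{srec}(f)\right)\bigr)
  \;\le\; 2^{-\Omega(k)} .
\]
```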

Citations (29)


Authors (2)
