Non-signalling parallel repetition using de Finetti reductions (1411.1582v1)

Published 6 Nov 2014 in quant-ph and cs.CC

Abstract: In the context of multiplayer games, the parallel repetition problem can be phrased as follows: given a game $G$ with optimal winning probability $1-\alpha$ and its repeated version $G^n$ (in which $n$ games are played together, in parallel), can the players use strategies that are substantially better than ones in which each game is played independently? This question is relevant in physics for the study of correlations and plays an important role in computer science in the context of complexity and cryptography. In this work the case of multiplayer non-signalling games is considered, i.e., the only restriction on the players is that they are not allowed to communicate during the game. For complete-support games (games where all possible combinations of questions have non-zero probability to be asked) with any number of players we prove a threshold theorem stating that the probability that non-signalling players win more than a fraction $1-\alpha+\beta$ of the $n$ games is exponentially small in $n\beta^2$, for every $0\leq \beta \leq \alpha$. For games with incomplete support we derive a similar statement, for a slightly modified form of repetition. The result is proved using a new technique, based on a recent de Finetti theorem, which allows us to avoid central technical difficulties that arise in standard proofs of parallel repetition theorems.
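
In the notation of the abstract, the threshold theorem for complete-support games can be written schematically as the bound below. The constant $c>0$ in the exponent is a placeholder: the abstract only asserts an exponential dependence on $n\beta^2$ and does not specify the constant.

$$\Pr\bigl[\text{non-signalling players win at least } (1-\alpha+\beta)\,n \text{ of the } n \text{ games in } G^n\bigr] \;\le\; \exp\!\bigl(-c\, n\, \beta^{2}\bigr), \qquad 0 \le \beta \le \alpha.$$

Since playing each instance independently with an optimal single-game strategy wins $(1-\alpha)n$ games in expectation, the statement says that non-signalling players cannot exceed this baseline by any constant margin $\beta$ except with probability exponentially small in $n$.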

Citations (26)
