Randomization can be as helpful as a glimpse of the future in online computation (1511.05886v2)

Published 18 Nov 2015 in cs.DS and cs.CC

Abstract: We provide simple but surprisingly useful direct product theorems for proving lower bounds on online algorithms with a limited amount of advice about the future. As a consequence, we are able to translate decades of research on randomized online algorithms to the advice complexity model. Doing so improves significantly on the previous best advice complexity lower bounds for many online problems, or provides the first known lower bounds. For example, if $n$ is the number of requests, we show that:

(1) A paging algorithm needs $\Omega(n)$ bits of advice to achieve a competitive ratio better than $H_k=\Omega(\log k)$, where $k$ is the cache size. Previously, it was only known that $\Omega(n)$ bits of advice were necessary to achieve a constant competitive ratio smaller than $5/4$.

(2) Every $O(n^{1-\varepsilon})$-competitive vertex coloring algorithm must use $\Omega(n\log n)$ bits of advice. Previously, it was only known that $\Omega(n\log n)$ bits of advice were necessary to be optimal.

For certain online problems, including the MTS, $k$-server, paging, list update, and dynamic binary search tree problems, our results imply that randomization and sublinear advice are equally powerful (if the underlying metric space or node set is finite). This means that several long-standing open questions regarding randomized online algorithms can be equivalently stated as questions regarding online algorithms with sublinear advice. For example, we show that there exists a deterministic $O(\log k)$-competitive $k$-server algorithm with advice complexity $o(n)$ if and only if there exists a randomized $O(\log k)$-competitive $k$-server algorithm without advice. Technically, our main direct product theorem is obtained by extending an information-theoretic lower bound technique due to Emek, Fraigniaud, Korman, and Rosén [ICALP'09].
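
For context (a standard fact about harmonic numbers, not something introduced by the paper): $H_k$ in result (1) denotes the $k$-th harmonic number, which satisfies

$$H_k = \sum_{i=1}^{k} \frac{1}{i}, \qquad \ln(k+1) \le H_k \le 1 + \ln k,$$

so $H_k = \Theta(\log k)$. This is the competitive ratio that, per result (1), cannot be improved upon with $o(n)$ bits of advice.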

Citations (37)

Authors (1)