
Pure entropic regularization for metrical task systems (1906.04270v3)

Published 10 Jun 2019 in cs.DS and math.MG

Abstract: We show that on every $n$-point HST metric, there is a randomized online algorithm for metrical task systems (MTS) that is $1$-competitive for service costs and $O(\log n)$-competitive for movement costs. In general, these refined guarantees are optimal up to the implicit constant. While an $O(\log n)$-competitive algorithm for MTS on HST metrics was developed by Bubeck et al. (SODA 2019), that approach could only establish an $O((\log n)^2)$-competitive ratio when the service costs are required to be $O(1)$-competitive. Our algorithm can be viewed as an instantiation of online mirror descent with the regularizer derived from a multiscale conditional entropy. In fact, our algorithm satisfies a set of even more refined guarantees; we are able to exploit this property to combine it with known random embedding theorems and obtain, for any $n$-point metric space, a randomized algorithm that is $1$-competitive for service costs and $O((\log n)^2)$-competitive for movement costs.
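For orientation, the display below is a schematic sketch (not the paper's exact construction) of online mirror descent driven by a multiscale conditional entropy on an HST. Here $x$ denotes a fractional state with $x_u$ the probability mass in the subtree rooted at $u$, $p(u)$ the parent of $u$, and the edge weights $w_u$ and shift parameters $\theta_u$ are placeholders standing in for the quantities specified in the paper.

% Schematic only: the precise choice of w_u and theta_u is the paper's contribution.
\[
  \Phi(x) \;=\; \sum_{u \neq \mathrm{root}} w_u\,\bigl(x_u + \theta_u\, x_{p(u)}\bigr)\,
  \log\!\Bigl(\tfrac{x_u}{x_{p(u)}} + \theta_u\Bigr)
\]
% A discretized mirror-descent step against the service cost vector c_t,
% where K is the polytope of valid fractional states on the tree and
% D_Phi is the Bregman divergence induced by Phi:
\[
  x_t \;=\; \operatorname*{arg\,min}_{x \in K}\;
  \Bigl\{ \langle c_t, x\rangle \;+\; D_{\Phi}(x,\, x_{t-1}) \Bigr\}
\]

The paper itself analyzes the continuous-time mirror descent dynamics; the sketch above is only meant to indicate where the "pure entropic" regularizer enters the update.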

Citations (20)
