On the Adaptivity Gap of Stochastic Orienteering (1311.3623v2)

Published 14 Nov 2013 in cs.DS

Abstract: The input to the stochastic orienteering problem consists of a budget $B$ and a metric $(V,d)$ where each vertex $v$ has a job with a deterministic reward and a random processing time (drawn from a known distribution). The processing times are independent across vertices. The goal is to obtain a non-anticipatory policy for running jobs at different vertices that maximizes expected reward, subject to the total distance traveled plus processing times being at most $B$. An adaptive policy is one that can choose the next vertex to visit based on observed random instantiations, whereas a non-adaptive policy is given by a fixed ordering of vertices. The adaptivity gap is the worst-case ratio of the expected rewards of the optimal adaptive and non-adaptive policies. We prove an $\Omega((\log\log B)^{1/2})$ lower bound on the adaptivity gap of stochastic orienteering. This provides a negative answer to the $O(1)$ adaptivity gap conjectured earlier, and comes close to the $O(\log\log B)$ upper bound. This result holds even on a line metric. We also show an $O(\log\log B)$ upper bound on the adaptivity gap for the correlated stochastic orienteering problem, where the reward of each job is random and possibly correlated with its processing time. Using this, we obtain an improved quasi-polynomial time approximation algorithm for correlated stochastic orienteering.
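The adaptive/non-adaptive distinction in the abstract can be made concrete with a small Monte-Carlo sketch. The instance below (a line metric with two vertices, a toy processing-time distribution, and the two example policies) is entirely hypothetical and is not taken from the paper; it only illustrates the model: a policy spends budget on travel plus observed processing times, collects a job's reward only if it finishes within the budget, and an adaptive policy may consult the remaining budget before choosing its next move, while a non-adaptive policy follows a fixed vertex order.

```python
import random

random.seed(0)

# Hypothetical toy instance on a line metric (start at position 0).
B = 4                       # budget on travel + processing time
POS = {1: 1, 2: 2}          # vertex -> position on the line
REWARD = {1: 1.0, 2: 1.0}   # deterministic rewards

def sample_time():
    # Random processing time: 0 or 3 with equal probability,
    # independent across vertices (a toy distribution).
    return random.choice([0, 3])

def simulate(policy, trials=20000):
    """Estimate the expected reward of a policy by simulation.

    policy(pos, budget, unvisited) returns the next vertex to visit,
    or None to stop. A job's reward is collected only if the travel
    plus its (observed) processing time fits in the remaining budget."""
    total = 0.0
    for _ in range(trials):
        pos, budget, unvisited = 0, B, {1, 2}
        while unvisited:
            v = policy(pos, budget, unvisited)
            if v is None:
                break
            budget -= abs(POS[v] - pos) + sample_time()
            pos = POS[v]
            unvisited.discard(v)
            if budget < 0:          # ran out of budget: job not completed
                break
            total += REWARD[v]
    return total / trials

# Non-adaptive policy: a fixed ordering (1, then 2), chosen in advance.
def non_adaptive(pos, budget, unvisited):
    for v in (1, 2):
        if v in unvisited:
            return v
    return None

# Adaptive policy: may react to observed outcomes; here it stops once
# the remaining budget cannot even cover the travel to the next vertex.
def adaptive(pos, budget, unvisited):
    for v in (1, 2):
        if v in unvisited and budget >= abs(POS[v] - pos):
            return v
    return None

print("non-adaptive:", simulate(non_adaptive))
print("adaptive:    ", simulate(adaptive))
```

On this tiny instance the two estimates are essentially equal (roughly 1.25); the paper's lower bound shows that on carefully constructed instances, even on a line metric, the ratio between the optimal adaptive and non-adaptive values must grow as $(\log\log B)^{1/2}$.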

Authors (2)
  1. Nikhil Bansal (61 papers)
  2. Viswanath Nagarajan (47 papers)
Citations (37)
