Improved No-Regret Algorithms for Stochastic Shortest Path with Linear MDP
(arXiv: 2112.09859)

Abstract
We introduce two new no-regret algorithms for the stochastic shortest path (SSP) problem with a linear MDP that significantly improve over the only existing results of (Vial et al., 2021). Our first algorithm is computationally efficient and achieves a regret bound $\widetilde{O}\left(\sqrt{d^3 B_{\star}^2 T_{\star} K}\right)$, where $d$ is the dimension of the feature space, $B_{\star}$ and $T_{\star}$ are upper bounds of the expected costs and hitting time of the optimal policy respectively, and $K$ is the number of episodes. The same algorithm with a slight modification also achieves logarithmic regret of order $O\left(\frac{d^3 B_{\star}^4}{c_{\min}^2 \text{gap}_{\min}} \ln^5\frac{d B_{\star} K}{c_{\min}} \right)$, where $\text{gap}_{\min}$ is the minimum sub-optimality gap and $c_{\min}$ is the minimum cost over all state-action pairs. Our result is obtained by developing a simpler and improved analysis for the finite-horizon approximation of (Cohen et al., 2021) with a smaller approximation error, which might be of independent interest. On the other hand, using variance-aware confidence sets in a global optimization problem, our second algorithm is computationally inefficient but achieves the first "horizon-free" regret bound $\widetilde{O}(d^{3.5} B_{\star}\sqrt{K})$ with no polynomial dependency on $T_{\star}$ or $1/c_{\min}$, almost matching the $\Omega(d B_{\star}\sqrt{K})$ lower bound from (Min et al., 2021).