
On the Finite-Time Performance of the Knowledge Gradient Algorithm

(arXiv:2206.06847)
Published Jun 14, 2022 in stat.ML and cs.LG

Abstract

The knowledge gradient (KG) algorithm is a popular and effective algorithm for the best arm identification (BAI) problem. Because the KG computation is complex, theoretical analysis of the algorithm is difficult, and existing results mostly concern its asymptotic performance, e.g., consistency and asymptotic sample allocation. In this research, we present new theoretical results on the finite-time performance of the KG algorithm. Under independent and normally distributed rewards, we derive bounds on the algorithm's sample allocation; with these bounds, existing asymptotic results become simple corollaries. Furthermore, we derive upper and lower bounds on the probability of error and the simple regret of the algorithm, and characterize its performance for the multi-armed bandit (MAB) problem. These developments not only extend the existing analysis of the KG algorithm but can also be used to analyze other improvement-based algorithms. Finally, we use numerical experiments to compare the bounds we derive with the observed performance of the KG algorithm.
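The abstract does not reproduce the allocation rule itself, so as a reference point, below is a minimal sketch of the standard KG allocation step for best arm identification with independent normal rewards and known sampling variances, as commonly stated in the KG literature. The function names, prior values, and structure are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (assumed formulation, not from the paper) of the one-step
# knowledge gradient allocation rule for normal rewards with known variances.
import numpy as np
from scipy.stats import norm


def kg_factors(post_mean, post_var, sample_var):
    """One-step KG factor for each arm.

    post_mean, post_var : posterior mean/variance of each arm's unknown mean
    sample_var          : known observation (sampling) variance of each arm
    """
    post_mean = np.asarray(post_mean, dtype=float)
    post_var = np.asarray(post_var, dtype=float)
    sample_var = np.asarray(sample_var, dtype=float)

    # Standard deviation of the change in the posterior mean after one sample:
    # sigma_tilde_i = post_var_i / sqrt(post_var_i + sample_var_i).
    sigma_tilde = post_var / np.sqrt(post_var + sample_var)

    # Gap between each arm's posterior mean and the best of the other arms.
    k = len(post_mean)
    gaps = np.empty(k)
    for i in range(k):
        others = np.delete(post_mean, i)
        gaps[i] = abs(post_mean[i] - others.max())

    # KG factor: sigma_tilde * f(-gap / sigma_tilde), with f(z) = z*Phi(z) + phi(z).
    z = -gaps / sigma_tilde
    return sigma_tilde * (z * norm.cdf(z) + norm.pdf(z))


def kg_choose_arm(post_mean, post_var, sample_var):
    """Sample next the arm with the largest expected one-step improvement."""
    return int(np.argmax(kg_factors(post_mean, post_var, sample_var)))


if __name__ == "__main__":
    # Toy numbers: three arms with posterior beliefs formed from a few samples.
    mu = [0.20, 0.50, 0.45]
    var = [0.30, 0.10, 0.25]
    lam = [1.0, 1.0, 1.0]
    print("next arm to sample:", kg_choose_arm(mu, var, lam))
```

In this sketch the algorithm repeatedly samples the arm whose KG factor (the expected improvement in the maximum posterior mean from one more observation) is largest, which is the quantity whose induced sample allocation the paper bounds in finite time.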
