From PAC to Instance-Optimal Sample Complexity in the Plackett-Luce Model

(1903.00558)
Published Mar 1, 2019 in cs.LG and stat.ML

Abstract

We consider PAC-learning a good item from $k$-subsetwise feedback information sampled from a Plackett-Luce probability model, with instance-dependent sample complexity performance. In the setting where subsets of a fixed size can be tested and top-ranked feedback is made available to the learner, we give an algorithm with optimal instance-dependent sample complexity, for PAC best arm identification, of $O\bigg(\frac{\theta_{[k]}}{k}\sum_{i = 2}^{n}\max\Big(1,\frac{1}{\Delta_i^2}\Big) \ln\frac{k}{\delta}\Big(\ln \frac{1}{\Delta_i}\Big)\bigg)$, where $\Delta_i$ is the Plackett-Luce parameter gap between the best and the $i^{th}$ best item, and $\theta_{[k]}$ is the sum of the Plackett-Luce parameters for the top-$k$ items. The algorithm is based on a wrapper around a PAC winner-finding algorithm with weaker performance guarantees to adapt to the hardness of the input instance. The sample complexity is also shown to be multiplicatively better depending on the length of rank-ordered feedback available in each subset-wise play. We show optimality of our algorithms with matching sample complexity lower bounds. We next address the winner-finding problem in Plackett-Luce models in the fixed-budget setting with instance-dependent upper and lower bounds on the misidentification probability, of $\Omega\left(\exp(-2 \tilde \Delta Q) \right)$ for a given budget $Q$, where $\tilde \Delta$ is an explicit instance-dependent problem complexity parameter. Numerical performance results are also reported.
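
For concreteness, here is a minimal Python sketch (not the authors' code; the function names `pl_topm_feedback` and `sample_complexity` are our own, for illustration only). It simulates the top-$m$ ranked feedback a learner receives when playing a $k$-subset under a Plackett-Luce model, and evaluates, up to constant factors, the instance-dependent sample-complexity expression quoted in the abstract.

```python
import numpy as np

def pl_topm_feedback(theta, subset, m, rng):
    """Draw top-m ranked feedback for `subset` under PL parameters `theta`.

    Items are revealed sequentially: at each step, item i in the remaining
    pool is chosen with probability theta[i] / sum(theta over the pool).
    """
    pool = list(subset)
    ranking = []
    for _ in range(min(m, len(pool))):
        weights = np.array([theta[i] for i in pool])
        pick = rng.choice(len(pool), p=weights / weights.sum())
        ranking.append(pool.pop(pick))
    return ranking

def sample_complexity(theta, k, delta):
    """Instance-dependent bound from the abstract, up to constants:
    (theta_[k]/k) * sum_{i=2}^n max(1, 1/Delta_i^2) * ln(k/delta) * ln(1/Delta_i),
    with Delta_i = theta_1 - theta_i (gaps assumed to lie in (0, 1]).
    """
    theta = np.sort(np.asarray(theta, dtype=float))[::-1]  # theta_1 >= theta_2 >= ...
    theta_topk = theta[:k].sum()                            # theta_[k]
    gaps = np.maximum(theta[0] - theta[1:], 1e-12)          # Delta_i, i = 2..n
    terms = np.maximum(1.0, 1.0 / gaps**2) * np.log(1.0 / gaps)
    return (theta_topk / k) * terms.sum() * np.log(k / delta)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta = np.array([1.0, 0.8, 0.6, 0.4, 0.2])  # hypothetical PL parameters
    print(pl_topm_feedback(theta, subset=[0, 1, 2, 3], m=2, rng=rng))
    print(f"{sample_complexity(theta, k=4, delta=0.1):.1f}")
```

The sketch only illustrates the two objects the abstract refers to (the subset-wise top-ranked feedback channel and the sample-complexity quantity); the paper's wrapper algorithm and its analysis are not reproduced here.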
