
Data-Driven Online Model Selection With Regret Guarantees (2306.02869v3)

Published 5 Jun 2023 in cs.LG, cs.AI, and stat.ML

Abstract: We consider model selection for sequential decision making in stochastic environments with bandit feedback, where a meta-learner has at its disposal a pool of base learners, and decides on the fly which action to take based on the policies recommended by each base learner. Model selection is performed by regret balancing but, unlike the recent literature on this subject, we do not assume any prior knowledge about the base learners like candidate regret guarantees; instead, we uncover these quantities in a data-driven manner. The meta-learner is therefore able to leverage the realized regret incurred by each base learner for the learning environment at hand (as opposed to the expected regret), and single out the best such regret. We design two model selection algorithms operating with this more ambitious notion of regret and, besides proving model selection guarantees via regret balancing, we experimentally demonstrate the compelling practical benefits of dealing with actual regrets instead of candidate regret bounds.
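The abstract's central idea is regret-balancing model selection in which the meta-learner estimates each base learner's regret from the data it observes, rather than relying on known candidate regret bounds. As a rough, hypothetical sketch of that idea (not the paper's actual algorithms), the Python snippet below runs a few epsilon-greedy base learners, allocates each round to the learner whose putative regret is currently smallest, and refits a per-learner regret coefficient from realized rewards; the class and function names, the coeff * sqrt(n) regret form, and the coefficient update rule are all simplifying assumptions made here for illustration.

```python
import numpy as np


class EpsGreedyBase:
    """A simple base learner (epsilon-greedy) that recommends an arm each round."""

    def __init__(self, n_arms, eps, rng):
        self.n_arms, self.eps, self.rng = n_arms, eps, rng
        self.counts = np.zeros(n_arms)
        self.sums = np.zeros(n_arms)

    def recommend(self):
        # Explore with probability eps (or while some arm is unplayed), else exploit.
        if self.rng.random() < self.eps or self.counts.min() == 0:
            return int(self.rng.integers(self.n_arms))
        return int(np.argmax(self.sums / self.counts))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.sums[arm] += reward


def regret_balancing(base_learners, env_reward, horizon):
    """Each round, play the base learner with the smallest putative regret so far,
    then refresh a data-driven estimate of its regret coefficient (assumed coeff * sqrt(n))."""
    k = len(base_learners)
    plays = np.zeros(k)    # rounds allocated to each base learner
    rewards = np.zeros(k)  # cumulative reward collected by each base learner
    coeff = np.ones(k)     # data-driven regret coefficients (assumption: regret ~ coeff * sqrt(n))
    for _ in range(horizon):
        putative = coeff * np.sqrt(plays)  # unplayed learners score 0, so each gets tried once
        i = int(np.argmin(putative))       # balancing step: pick the smallest putative regret
        arm = base_learners[i].recommend()
        r = env_reward(arm)
        base_learners[i].update(arm, r)
        plays[i] += 1
        rewards[i] += r
        # Crude proxy for realized regret: gap to the best average reward seen by any learner.
        best_avg = max(rewards[j] / plays[j] for j in range(k) if plays[j] > 0)
        realized = best_avg * plays[i] - rewards[i]
        coeff[i] = max(1.0, realized / np.sqrt(plays[i]))
    return rewards, plays


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = np.array([0.2, 0.5, 0.8])  # hypothetical Bernoulli arm means
    env = lambda arm: float(rng.random() < means[arm])
    learners = [EpsGreedyBase(len(means), eps, rng) for eps in (0.01, 0.1, 0.3)]
    rewards, plays = regret_balancing(learners, env, horizon=5000)
    print("rounds per base learner:", plays, " total reward:", rewards.sum())
```

In this toy setup the balancing rule steers most rounds toward whichever base learner performs best on the realized rewards, which mirrors the abstract's point that the meta-learner can single out the best realized regret rather than the best expected-regret bound.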
