Boosting Simple Learners (2001.11704v8)
Abstract: Boosting is a celebrated machine learning approach based on the idea of combining weak and moderately inaccurate hypotheses into a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules-of-thumb" from an "easy-to-learn class" (Schapire and Freund '12, Shalev-Shwartz and Ben-David '14). Formally, we assume the class of weak hypotheses has a bounded VC dimension. We focus on two main questions: (i) Oracle Complexity: How many weak hypotheses are needed to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega(1/\gamma^2)$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}(1/\gamma)$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are "far away" from the class be learned? Towards answering the first question we introduce combinatorial-geometric parameters which capture expressivity in boosting. As a corollary we provide an affirmative answer to the second question for well-studied classes, including half-spaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.
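For context, the classical baseline the paper improves on is majority-vote boosting in the style of AdaBoost, whose oracle complexity scales as $O(1/\gamma^2)$ in the margin $\gamma$. The sketch below, with decision stumps as the "easy-to-learn" weak class, is illustrative only: the toy dataset, function names, and round count are my own choices, not from the paper, and the paper's $\tilde{O}(1/\gamma)$ algorithm uses deeper aggregation rules than the weighted majority vote shown here.

```python
import numpy as np

def adaboost(X, y, rounds):
    """Classic AdaBoost with exhaustive decision-stump weak learners.

    Aggregates weak hypotheses by a weighted majority vote -- the
    aggregation style covered by the Omega(1/gamma^2) lower bound
    that the paper's deeper aggregation rules circumvent.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)              # distribution over examples
    stumps, alphas = [], []
    for _ in range(rounds):
        best = None
        # Weak learner: pick the threshold stump with least weighted error.
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for s in (1, -1):
                    pred = s * np.sign(X[:, f] - t + 1e-12)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, s, pred)
        err, f, t, s, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)   # reweight: emphasize mistakes
        w /= w.sum()
        stumps.append((f, t, s))
        alphas.append(alpha)

    def predict(Xq):
        # Weighted majority vote over the collected stumps.
        votes = sum(a * s * np.sign(Xq[:, f] - t + 1e-12)
                    for a, (f, t, s) in zip(alphas, stumps))
        return np.sign(votes)
    return predict
```

On a 1-D interval concept (positive inside, negative outside), no single stump is consistent, but a weighted majority of a few stumps is, so boosting drives the training error to zero; the number of rounds needed grows with $1/\gamma^2$ under this aggregation scheme.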
- R. Alexander “Geometric methods in the study of irregularities of distribution” In Combinatorica 10.2, 1990, pp. 115–136 DOI: 10.1007/BF02123006
- P. Assouad “Densité et dimension” In Annales de l’Institut Fourier 33.3, 1983, pp. 233–282
- Peter L. Bartlett and Mikhail Traskin “AdaBoost is Consistent” In J. Mach. Learn. Res. 8, 2007, pp. 2347–2368 URL: http://dl.acm.org/citation.cfm?id=1314574
- Gilles Blanchard, Gábor Lugosi and Nicolas Vayatis “On the Rate of Convergence of Regularized Boosting Classifiers” In J. Mach. Learn. Res. 4, 2003, pp. 861–894 URL: http://jmlr.org/papers/v4/blanchard03a.html
- Anselm Blumer, Andrzej Ehrenfeucht, David Haussler and Manfred K. Warmuth “Learnability and the Vapnik-Chervonenkis dimension” In J. Assoc. Comput. Mach. 36.4 Association for Computing Machinery (ACM), New York, NY, 1989, pp. 929–965 DOI: 10.1145/76359.76371
- Leo Breiman “Arcing the edge” Technical Report 486, Statistics Department, University of California, Berkeley, 1997
- Leo Breiman “Some Infinite Theory for Predictor Ensembles” Technical report, Statistics Department, University of California, Berkeley, 2000
- Peter Bühlmann and Bin Yu “Boosting With the L2 Loss: Regression and Classification” In Journal of the American Statistical Association 98, 2003, pp. 324–339
- “Provable Regret Bounds for Deep Online Learning and Control” In CoRR abs/2110.07807, 2021 arXiv: https://arxiv.org/abs/2110.07807
- Mónika Csikós, Nabil H. Mustafa and Andrey Kupavskii “Tight Lower Bounds on the VC-dimension of Geometric Set Systems” In J. Mach. Learn. Res. 20, 2019, pp. 81:1–81:8 URL: http://jmlr.org/papers/v20/18-719.html
- David Eisenstat and Dana Angluin “The VC dimension of k-fold union” In Information Processing Letters 101.5, 2007, pp. 181–184 DOI: 10.1016/j.ipl.2006.10.004
- Yoav Freund “Boosting a Weak Learning Algorithm by Majority” In Proceedings of the Third Annual Workshop on Computational Learning Theory, COLT 1990, University of Rochester, Rochester, NY, USA, August 6-8, 1990 Morgan Kaufmann, 1990, pp. 202–216 URL: http://dl.acm.org/citation.cfm?id=92640
- Jerome H. Friedman “Greedy Function Approximation: A Gradient Boosting Machine” In Annals of Statistics 29.5, 2001, pp. 1189–1232
- Jerome H. Friedman “Stochastic gradient boosting” Nonlinear Methods and Data Mining In Computational Statistics & Data Analysis 38.4, 2002, pp. 367–378 DOI: 10.1016/S0167-9473(01)00065-2
- David Galvin “Three tutorial lectures on entropy and counting” In CoRR abs/1406.7872, 2014 URL: https://arxiv.org/pdf/1406.7872.pdf
- Servane Gey “Vapnik–Chervonenkis dimension of axis-parallel cuts” In Communications in Statistics - Theory and Methods 47.9 Taylor & Francis, 2018, pp. 2291–2296 DOI: 10.1080/03610926.2017.1339088
- A.A. Giannopoulos “A Note on the Banach-Mazur Distance to the Cube” In Geometric Aspects of Functional Analysis Basel: Birkhäuser Basel, 1995, pp. 67–73
- D. Haussler “Sphere packing numbers for subsets of the Boolean n-cube with bounded Vapnik-Chervonenkis dimension” In J. Comb. Theory, Ser. A 69.2 Elsevier Science (Academic Press), San Diego, CA, 1995, pp. 217–232 DOI: 10.1016/0097-3165(95)90052-7
- Wenxin Jiang “Process consistency for AdaBoost” In Ann. Statist. 32.1 The Institute of Mathematical Statistics, 2004, pp. 13–29 DOI: 10.1214/aos/1079120128
- M. Kearns “Thoughts on Hypothesis Boosting” Unpublished, 1988
- Gábor Lugosi and Nicolas Vayatis “On the Bayes-risk consistency of regularized boosting methods” In Ann. Statist. 32.1 The Institute of Mathematical Statistics, 2004, pp. 30–55 DOI: 10.1214/aos/1079120129
- Shie Mannor and Ron Meir “Weak Learners and Improved Rates of Convergence in Boosting” In Advances in Neural Information Processing Systems 13, Papers from Neural Information Processing Systems (NIPS) 2000, Denver, CO, USA MIT Press, 2000, pp. 280–286 URL: http://papers.nips.cc/paper/1906-weak-learners-and-improved-rates-of-convergence-in-boosting
- Shie Mannor, Ron Meir and Tong Zhang “The Consistency of Greedy Algorithms for Classification” In Computational Learning Theory, 15th Annual Conference on Computational Learning Theory, COLT 2002, Sydney, Australia, July 8-10, 2002, Proceedings 2375, Lecture Notes in Computer Science Springer, 2002, pp. 319–333 DOI: 10.1007/3-540-45435-7_22
- Llew Mason, Jonathan Baxter, Peter L. Bartlett and Marcus Frean “Boosting Algorithms as Gradient Descent” In Advances in Neural Information Processing Systems 12 MIT Press, 2000, pp. 512–518
- Jiří Matoušek “Geometric Discrepancy: An Illustrated Guide”, Algorithms and Combinatorics 18 Springer, 1999
- Jiří Matoušek “Tight Upper Bounds for the Discrepancy of Half-Spaces” In Discrete & Computational Geometry 13, 1995, pp. 593–601
- Jiří Matoušek, Emo Welzl and Lorenz Wernisch “Discrepancy and approximations for bounded VC-dimension” In Combinatorica 13.4, 1993, pp. 455–466
- Indraneel Mukherjee and Robert E. Schapire “A theory of multiclass boosting” In J. Mach. Learn. Res. 14.1, 2013, pp. 437–497 URL: http://dl.acm.org/citation.cfm?id=2502596
- J. von Neumann “Zur Theorie der Gesellschaftsspiele” In Mathematische Annalen 100, 1928, pp. 295–320 URL: http://eudml.org/doc/159291
- Norbert Sauer “On the Density of Families of Sets” In J. Comb. Theory, Ser. A 13.1, 1972, pp. 145–147
- Robert E. Schapire “The strength of weak learnability” In Machine Learning 5.2, 1990, pp. 197–227 DOI: 10.1007/BF00116037
- Robert E. Schapire and Yoav Freund “Boosting: Foundations and Algorithms” Cambridge University Press, 2012 DOI: 10.7551/mitpress/8291.001.0001
- Shai Shalev-Shwartz and Shai Ben-David “Understanding Machine Learning: From Theory to Algorithms” Cambridge University Press, 2014 DOI: 10.1017/CBO9781107298019
- L.G. Valiant “A Theory of the Learnable” In Commun. ACM 27.11 New York, NY, USA: Association for Computing Machinery, 1984, pp. 1134–1142 DOI: 10.1145/1968.1972
- Paul A. Viola and Michael J. Jones “Rapid Object Detection using a Boosted Cascade of Simple Features” In 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), with CD-ROM, 8-14 December 2001, Kauai, HI, USA IEEE Computer Society, 2001, pp. 511–518 DOI: 10.1109/CVPR.2001.990517
- Tong Zhang “Statistical behavior and consistency of classification methods based on convex risk minimization” In The Annals of Statistics 32, 2004, pp. 56–134