Gradient Boosting Neural Networks: GrowNet (2002.07971v2)
Published 19 Feb 2020 in cs.LG and stat.ML
Abstract: A novel gradient boosting framework is proposed in which shallow neural networks are employed as "weak learners". General loss functions are considered under this unified framework, with specific examples presented for classification, regression, and learning to rank. A fully corrective step is incorporated to remedy the pitfall of greedy function approximation in classic gradient boosting decision trees. The proposed model outperforms state-of-the-art boosting methods on all three tasks across multiple datasets. An ablation study is performed to shed light on the effect of each model component and hyperparameter.
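To make the framework concrete, below is a minimal, hypothetical PyTorch sketch of the two steps the abstract describes: a stage-wise step that fits a shallow network to the current residuals, followed by a fully corrective step that jointly fine-tunes all learners added so far. The names `WeakLearner` and `grow_net` are illustrative, the example assumes squared loss for regression, and it omits architectural details of the paper (such as how learners share features), so it should be read as a conceptual sketch rather than the authors' implementation.

```python
# Illustrative sketch only: gradient boosting with shallow neural networks
# as weak learners, plus a fully corrective step. Not the authors' code.
import torch
import torch.nn as nn

class WeakLearner(nn.Module):
    """A shallow (one-hidden-layer) network used as a boosting weak learner."""
    def __init__(self, in_dim, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def grow_net(X, y, n_stages=5, boost_rate=0.1, epochs=200):
    learners = []
    ensemble = torch.zeros_like(y)
    for _ in range(n_stages):
        # Stage-wise step: fit a new weak learner to the negative gradient
        # of the squared loss, i.e. the current residual y - F(x).
        residual = y - ensemble
        f = WeakLearner(X.shape[1])
        opt = torch.optim.Adam(f.parameters(), lr=1e-2)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(f(X), residual)
            loss.backward()
            opt.step()
        learners.append(f)

        # Fully corrective step: jointly fine-tune all learners added so far
        # on the original loss, countering the greediness of purely
        # stage-wise fitting.
        params = [p for g in learners for p in g.parameters()]
        opt = torch.optim.Adam(params, lr=1e-3)
        for _ in range(epochs // 4):
            opt.zero_grad()
            pred = sum(boost_rate * g(X) for g in learners)
            loss = nn.functional.mse_loss(pred, y)
            loss.backward()
            opt.step()
        with torch.no_grad():
            ensemble = sum(boost_rate * g(X) for g in learners)
    return learners

# Usage on synthetic data:
# learners = grow_net(torch.randn(256, 4), torch.randn(256))
```

In this simplified view, the corrective step is what distinguishes the approach from classic stage-wise boosting: earlier learners are revisited and updated once a new learner has been added, rather than being frozen after their own stage.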
- Sarkhan Badirli
- Xuanqing Liu
- Zhengming Xing
- Avradeep Bhowmik
- Khoa Doan
- Sathiya S. Keerthi