Implicitly Maximizing Margins with the Hinge Loss (2006.14286v1)
Published 25 Jun 2020 in cs.LG and stat.ML
Abstract: A new loss function is proposed for neural networks on classification tasks which extends the hinge loss by assigning gradients to its critical points. We show that for a linear classifier on linearly separable data with a fixed step size, the margin of this modified hinge loss converges to the $\ell_2$ max-margin at a rate of $\mathcal{O}(1/t)$. This is fast compared with the $\mathcal{O}(1/\log t)$ rate of exponential losses such as the logistic loss. Furthermore, empirical results suggest that this increased convergence speed carries over to ReLU networks.
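The abstract does not spell out the modified loss, but its description, a hinge loss whose flat region is assigned a (sub)gradient, suggests a piecewise-linear form. Below is a minimal NumPy sketch under that assumption: the formerly flat part of the hinge (margin $u \geq 1$) receives a small constant slope `eps` (a hypothetical parameter, not taken from the paper), so gradient descent with a fixed step size keeps enlarging the margin. This is an illustrative sketch, not the authors' exact definition.

```python
import numpy as np

def modified_hinge(u, eps=0.1):
    """Hinge-style loss on the margin u = y * <w, x>.

    Standard hinge: max(0, 1 - u), whose gradient vanishes for u >= 1.
    This sketch assigns a small slope `eps` on that flat region so the
    classifier keeps receiving a signal to grow its margin. The exact
    piecewise form and `eps` are assumptions for illustration.
    """
    loss = np.where(u < 1.0, 1.0 - u, eps * (1.0 - u))
    dloss_du = np.where(u < 1.0, -1.0, -eps)
    return loss, dloss_du

# Toy run: fixed-step gradient descent on linearly separable 2-D data,
# tracking the normalized l2 margin over time.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(200, 2)) + 2.0,
               rng.normal(size=(200, 2)) - 2.0])
y = np.hstack([np.ones(200), -np.ones(200)])

w = np.zeros(2)
lr = 0.1
for t in range(1, 2001):
    u = y * (X @ w)
    _, g_u = modified_hinge(u)
    grad_w = (g_u * y) @ X / len(y)  # chain rule: du/dw = y * x
    w -= lr * grad_w
    if t % 500 == 0:
        margin = np.min(y * (X @ w)) / np.linalg.norm(w)
        print(f"t={t:5d}  normalized margin={margin:.4f}")
```

Because the slope in the region $u \geq 1$ never vanishes, $\|w\|$ grows without bound while the direction $w/\|w\|$ stabilizes; the quantity printed above, $\min_i y_i \langle w, x_i\rangle / \|w\|_2$, is the normalized margin whose convergence to the $\ell_2$ max-margin the abstract claims occurs at rate $\mathcal{O}(1/t)$.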