A Novel Sparse Regularizer

(2301.07285)
Published Jan 18, 2023 in cs.LG

Abstract

$L_p$-norm regularization schemes such as $L_0$-, $L_1$-, and $L_2$-norm regularization, and $L_p$-norm-based techniques such as weight decay, LASSO, and elastic net, compute a quantity that depends on model weights considered in isolation from one another. This paper introduces a regularizer based on minimizing a novel measure of entropy applied to the model during optimization. In contrast with $L_p$-norm-based regularization, this regularizer is concerned with the spatial arrangement of weights within a weight matrix. The regularizer is an additive term for the loss function; it is differentiable, simple and fast to compute, and scale-invariant, requires only a trivial amount of additional memory, and can easily be parallelized. Empirically, the method yields approximately an order-of-magnitude improvement in the number of nonzero model parameters required to achieve a given level of test accuracy when training LeNet300 on MNIST.
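The abstract does not state the entropy measure itself, so the sketch below is only a generic illustration of the idea it describes: a differentiable, scale-invariant, entropy-style penalty added to the loss, shown alongside a standard $L_1$ penalty for contrast. The Shannon entropy of the normalized weight magnitudes used here is an assumed stand-in, not the paper's regularizer (which additionally depends on the spatial arrangement of weights), and all names below are hypothetical.

```python
import torch

def l1_penalty(weight: torch.Tensor) -> torch.Tensor:
    # Standard L1-norm penalty: sums |w| over entries, so each weight
    # contributes in isolation from the others.
    return weight.abs().sum()

def entropy_penalty(weight: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    # Illustrative stand-in (NOT the paper's measure): Shannon entropy of
    # the normalized magnitude distribution. Minimizing it concentrates
    # mass on few weights, encouraging sparsity, and the normalization
    # makes the term invariant to rescaling the matrix by a constant.
    p = weight.abs() / (weight.abs().sum() + eps)
    return -(p * (p + eps).log()).sum()

# Hypothetical usage as an additive loss term:
#   loss = task_loss + lam * entropy_penalty(layer.weight)
w = torch.randn(300, 784, requires_grad=True)
reg = entropy_penalty(w)
reg.backward()  # differentiable, so it trains with ordinary SGD
print(float(entropy_penalty(w)), float(entropy_penalty(3.0 * w)))  # ~equal: scale-invariant
```

Because the penalty is a single reduction over one weight matrix, it adds only a few element-wise kernels per step and no persistent state, consistent with the abstract's claims of trivial memory overhead and easy parallelization.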
