A Novel Sparse Regularizer (2301.07285v5)

Published 18 Jan 2023 in cs.LG

Abstract: $L_p$-norm regularization schemes such as $L_0$, $L_1$, and $L_2$-norm regularization, and $L_p$-norm-based regularization techniques such as weight decay, LASSO, and elastic net, compute a quantity that depends on model weights considered in isolation from one another. This paper introduces a regularizer based on minimizing a novel measure of entropy applied to the model during optimization. In contrast with $L_p$-norm-based regularization, this regularizer is concerned with the spatial arrangement of weights within a weight matrix. The regularizer is an additive term for the loss function; it is differentiable, simple and fast to compute, and scale-invariant, requires a trivial amount of additional memory, and can easily be parallelized. Empirically, this method yields approximately an order-of-magnitude reduction in the number of nonzero model parameters required to achieve a given level of test accuracy when training LeNet300 on MNIST.
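
The abstract does not spell out the entropy measure itself, but the general shape of an entropy-based additive regularizer can be sketched. The snippet below is a minimal PyTorch sketch, assuming a Shannon entropy over weight magnitudes normalized to a probability distribution, which yields the scale-invariance the abstract claims. Note that the paper's actual measure additionally depends on the spatial arrangement of weights within the matrix, which this simplified, permutation-invariant sketch does not capture; the function name `entropy_regularizer` and the coefficient `lam` are illustrative, not from the paper.

```python
import torch

def entropy_regularizer(weight: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Shannon entropy of normalized weight magnitudes.

    An illustrative sketch of an entropy-style sparsity term, not the
    paper's spatially-aware measure.
    """
    # Normalizing by the total magnitude makes the term scale-invariant:
    # multiplying all weights by a constant leaves p unchanged.
    p = weight.abs().flatten()
    p = p / (p.sum() + eps)
    # Differentiable almost everywhere; eps guards against log(0).
    return -(p * (p + eps).log()).sum()

# Hypothetical usage: add the term to the task loss for each weight matrix,
# with an illustrative coefficient lam.
# loss = cross_entropy(model(x), y) + lam * sum(
#     entropy_regularizer(m.weight)
#     for m in model.modules() if isinstance(m, torch.nn.Linear)
# )
```

Minimizing this entropy drives the normalized magnitude distribution toward concentrating mass on a few weights, which is the sparsifying effect the abstract reports.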
