Log-Scale Shrinkage Priors and Adaptive Bayesian Global-Local Shrinkage Estimation (1801.02321v2)

Published 8 Jan 2018 in math.ST, cs.LG, and stat.TH

Abstract: Global-local shrinkage hierarchies are an important innovation in Bayesian estimation. We propose the use of log-scale distributions as a novel basis for generating families of prior distributions for local shrinkage hyperparameters. By varying the scale parameter one may vary the degree to which the prior distribution promotes sparsity in the coefficient estimates. By examining the class of distributions over the logarithm of the local shrinkage parameter that have log-linear, or sub-log-linear tails, we show that many standard prior distributions for local shrinkage parameters can be unified in terms of the tail behaviour and concentration properties of their corresponding marginal distributions over the coefficients $\beta_j$. We derive upper bounds on the rate of concentration around $|\beta_j|=0$, and the tail decay as $|\beta_j| \to \infty$, achievable by this wide class of prior distributions. We then propose a new type of ultra-heavy tailed prior, called the log-$t$ prior, with the property that, irrespective of the choice of associated scale parameter, the marginal distribution always diverges at $\beta_j = 0$, and always possesses super-Cauchy tails. We develop results demonstrating when prior distributions with (sub)-log-linear tails attain Kullback-Leibler super-efficiency and prove that the log-$t$ prior distribution is always super-efficient. We show that the log-$t$ prior is less sensitive to misspecification of the global shrinkage parameter than the horseshoe or lasso priors. By incorporating the scale parameter of the log-scale prior distributions into the Bayesian hierarchy we derive novel adaptive shrinkage procedures. Simulations show that the adaptive log-$t$ procedure appears to always perform well, irrespective of the level of sparsity or signal-to-noise ratio of the underlying model.
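The abstract describes a global-local hierarchy of the form $\beta_j \mid \lambda_j, \tau \sim N(0, \tau^2 \lambda_j^2)$, with the proposed log-$t$ prior placing a Student-$t$ distribution on $\log \lambda_j$. The following is a minimal Monte Carlo sketch (not the authors' code) that draws coefficients under this assumed hierarchy and, for comparison, under a horseshoe-style half-Cauchy local prior, then checks how much prior mass each places very near zero and far out in the tails. The scale, degrees of freedom, and thresholds are illustrative assumptions only.

```python
# Sketch: compare marginal coefficient draws under an (assumed) log-t local
# prior versus a horseshoe (half-Cauchy) local prior in the hierarchy
#   beta_j | lambda_j, tau ~ N(0, tau^2 * lambda_j^2).
# Assumption: "log-t" is read as log(lambda_j) ~ scale * t_df; the values of
# scale, df, tau, and the probe thresholds are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

def draw_betas_log_t(n, tau=1.0, scale=1.0, df=2.0):
    """Draw coefficients with a Student-t prior on log(lambda_j)."""
    log_lam = scale * rng.standard_t(df, size=n)   # log lambda_j ~ scale * t_df
    lam = np.exp(log_lam)
    return rng.normal(0.0, tau * lam)              # beta_j ~ N(0, tau^2 lam^2)

def draw_betas_horseshoe(n, tau=1.0):
    """Draw coefficients with a half-Cauchy prior on lambda_j (horseshoe)."""
    lam = np.abs(rng.standard_cauchy(size=n))      # lambda_j ~ C+(0, 1)
    return rng.normal(0.0, tau * lam)

n = 1_000_000
for name, beta in [("log-t", draw_betas_log_t(n)),
                   ("horseshoe", draw_betas_horseshoe(n))]:
    near_zero = np.mean(np.abs(beta) < 1e-3)       # prior mass very near zero
    heavy_tail = np.mean(np.abs(beta) > 50)        # prior mass far in the tails
    print(f"{name:10s}  P(|beta|<1e-3)={near_zero:.4f}  P(|beta|>50)={heavy_tail:.5f}")
```

Under this reading, shrinking or enlarging the scale on $\log \lambda_j$ changes how strongly sparsity is promoted, while the abstract's claim is that the log-$t$ marginal keeps both its spike at $\beta_j = 0$ and its super-Cauchy tails regardless of that scale; the sketch only probes those two properties empirically and is not a substitute for the paper's derivations.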

