Generative models and Bayesian inversion using Laplace approximation (2203.07755v2)

Published 15 Mar 2022 in stat.ML, cs.LG, math.ST, and stat.TH

Abstract: The Bayesian approach to solving inverse problems relies on the choice of a prior. This critical ingredient allows the formulation of expert knowledge or physical constraints in a probabilistic fashion and plays an important role in the success of the inference. Recently, Bayesian inverse problems have been solved using generative models as highly informative priors. Generative models are a popular tool in machine learning for generating data whose properties closely resemble those of a given database. Typically, the generated distribution of data is embedded in a low-dimensional manifold. For the inverse problem, a generative model is trained on a database that reflects the properties of the sought solution, such as typical structures of the tissue in the human brain in magnetic resonance (MR) imaging. The inference is carried out in the low-dimensional manifold determined by the generative model, which greatly reduces the dimensionality of the inverse problem. However, this procedure produces a posterior that admits no Lebesgue density in the original variables, and the accuracy achieved can depend strongly on the quality of the generative model. For linear Gaussian models, we explore an alternative Bayesian inference based on probabilistic generative models that is carried out in the original high-dimensional space. A Laplace approximation is employed to analytically derive the required prior probability density function induced by the generative model. Properties of the resulting inference are investigated. Specifically, we show that the derived Bayes estimates are consistent, in contrast to the approach employing the low-dimensional manifold of the generative model. The MNIST data set is used to construct numerical experiments that confirm our theoretical findings.

Citations (1)