Incorporating Posterior-Informed Approximation Errors into a Hierarchical Framework to Facilitate Out-of-the-Box MCMC Sampling for Geothermal Inverse Problems and Uncertainty Quantification (1810.04350v3)

Published 10 Oct 2018 in stat.CO, cs.NA, and math.NA

Abstract: We consider geothermal inverse problems and uncertainty quantification from a Bayesian perspective. Our main goal is to make standard, 'out-of-the-box' Markov chain Monte Carlo (MCMC) sampling more feasible for complex simulation models by using suitable approximations. To do this, we first show how to pose both the inverse and prediction problems in a hierarchical Bayesian framework. We then show how to incorporate so-called posterior-informed model approximation error into this hierarchical framework, using a modified form of the Bayesian approximation error (BAE) approach. This enables the use of a 'coarse', approximate model in place of a finer, more expensive model, while accounting for the additional uncertainty and potential bias that this can introduce. Our method requires only simple probability modelling, a relatively small number of fine model simulations, and only modifies the target posterior -- any standard MCMC sampling algorithm can be used to sample the new posterior. These corrections can also be used in methods that are not based on MCMC sampling. We show that our approach can achieve significant computational speed-ups on two geothermal test problems. We also demonstrate the dangers of naively using coarse, approximate models in place of finer models, without accounting for the induced approximation errors. The naive approach tends to give overly confident and biased posteriors, while incorporating BAE into our hierarchical framework corrects for this while maintaining computational efficiency and ease-of-use.
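
To make the abstract's central idea concrete (replacing an expensive fine model with a cheap coarse model and absorbing the induced approximation error into the likelihood), the following is a minimal, hypothetical Python sketch of the standard, prior-based BAE correction. It does not reproduce the paper's posterior-informed, hierarchical variant, and all names (fine_model, coarse_model, rwm) are illustrative toy stand-ins rather than the paper's geothermal simulators.

```python
# Hypothetical sketch of the prior-based Bayesian approximation error (BAE) idea:
# use a cheap "coarse" forward model in place of an expensive "fine" one, and
# correct the likelihood with Gaussian statistics of the approximation error.
# Toy linear models stand in for the paper's geothermal simulators.
import numpy as np

rng = np.random.default_rng(0)

# Toy inverse problem: recover parameters x from data y = fine_model(x) + noise.
n_params, n_data = 5, 20
A_fine = rng.normal(size=(n_data, n_params))              # "fine" forward map
A_coarse = A_fine + 0.1 * rng.normal(size=A_fine.shape)   # cheaper, biased surrogate

def fine_model(x):   return A_fine @ x     # expensive model (stand-in)
def coarse_model(x): return A_coarse @ x   # cheap approximate model

x_true = rng.normal(size=n_params)
noise_std = 0.05
y_obs = fine_model(x_true) + noise_std * rng.normal(size=n_data)

# BAE step: estimate approximation-error statistics from a relatively small
# number of fine-model runs at prior samples (prior here is standard normal).
n_train = 200
prior_draws = rng.normal(size=(n_train, n_params))
errors = np.array([fine_model(x) - coarse_model(x) for x in prior_draws])
eps_mean = errors.mean(axis=0)
eps_cov = np.cov(errors, rowvar=False)

# Combined covariance: measurement noise plus approximation error.
cov_total = eps_cov + noise_std**2 * np.eye(n_data)
cov_inv = np.linalg.inv(cov_total)

def log_post_bae(x):
    """Log-posterior using the coarse model with the BAE-corrected likelihood."""
    r = y_obs - coarse_model(x) - eps_mean
    return -0.5 * r @ cov_inv @ r - 0.5 * x @ x   # Gaussian prior N(0, I)

# Any standard MCMC sampler can target this modified posterior;
# here, a plain random-walk Metropolis sampler.
def rwm(log_post, x0, n_steps=20000, step=0.1):
    x, lp = x0.copy(), log_post(x0)
    samples = np.empty((n_steps, x0.size))
    for i in range(n_steps):
        prop = x + step * rng.normal(size=x.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

samples = rwm(log_post_bae, np.zeros(n_params))
print("posterior mean:", samples[5000:].mean(axis=0))
print("true params:   ", x_true)
```

With the approximation-error mean and covariance folded into the likelihood, the coarse model can be sampled with any off-the-shelf MCMC algorithm, which is the 'out-of-the-box' property the abstract emphasises; dropping the correction (using the coarse model with only the measurement-noise covariance) is the naive approach the paper warns against.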

Citations (9)
