
Error-guided likelihood-free MCMC (2010.06735v3)

Published 13 Oct 2020 in stat.ML and cs.LG

Abstract: This work presents a novel posterior inference method for models with intractable evidence and likelihood functions. Error-guided likelihood-free MCMC, or EG-LF-MCMC for short, has been developed for scientific applications in which a researcher wants to obtain approximate posterior densities over model parameters while avoiding the expensive training of component estimators on full observational data or the tedious design of expressive summary statistics, as required by related approaches. Our technique proceeds in two phases. In the first phase, we draw samples from the prior, simulate the corresponding observations, and record their errors $\epsilon$ with respect to the true observation. We train a classifier to distinguish between corresponding and non-corresponding $(\epsilon, \boldsymbol{\theta})$-tuples. In the second phase, the classifier is conditioned on the smallest recorded $\epsilon$ value from the training set and used to compute transition probabilities in a Markov chain Monte Carlo sampling procedure. By conditioning the MCMC on specific $\epsilon$ values, our method can also be used in an amortized fashion to infer posterior densities for observations that lie a given distance away from the observed data. We evaluate the proposed method on benchmark problems with semantically and structurally different data and compare its performance against state-of-the-art approximate Bayesian computation (ABC).
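
The abstract outlines a two-phase algorithm: train a classifier on $(\epsilon, \boldsymbol{\theta})$-tuples obtained from prior simulations, then reuse that classifier, conditioned on the smallest recorded $\epsilon$, inside an MCMC sampler. The sketch below illustrates one possible wiring of that idea; the 1-D Gaussian toy simulator, the Euclidean error, the logistic-regression classifier, and the random-walk Metropolis proposal are all illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of a two-phase EG-LF-MCMC-style procedure.
# All modeling choices (toy simulator, prior, error metric, classifier,
# proposal) are placeholder assumptions, not the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def prior_sample(n):            # assumed prior: theta ~ N(0, 3^2)
    return rng.normal(0.0, 3.0, size=(n, 1))

def simulate(theta):            # assumed simulator: x ~ N(theta, 1)
    return theta + rng.normal(0.0, 1.0, size=theta.shape)

x_obs = np.array([[1.5]])       # toy "true" observation
def error(x):                   # epsilon: Euclidean distance to x_obs
    return np.linalg.norm(x - x_obs, axis=1, keepdims=True)

# ---- Phase 1: record (epsilon, theta) tuples and train a classifier ----
theta_train = prior_sample(2000)
eps_train = error(simulate(theta_train))

# Positive class: corresponding (epsilon, theta) pairs.
# Negative class: the same epsilons paired with shuffled (non-corresponding) thetas.
theta_shuffled = rng.permutation(theta_train)
X = np.vstack([np.hstack([eps_train, theta_train]),
               np.hstack([eps_train, theta_shuffled])])
y = np.concatenate([np.ones(len(theta_train)), np.zeros(len(theta_train))])
clf = LogisticRegression().fit(X, y)

# ---- Phase 2: MCMC with the classifier conditioned on the smallest epsilon ----
eps_star = eps_train.min()      # smallest recorded error in the training set

def log_ratio(theta):           # classifier logit used as an unnormalized log-weight
    p = clf.predict_proba([[eps_star, float(theta)]])[0, 1]
    p = np.clip(p, 1e-6, 1.0 - 1e-6)
    return np.log(p) - np.log(1.0 - p)

def log_prior(theta):           # log-density of the assumed N(0, 3^2) prior (up to a constant)
    return -0.5 * (theta / 3.0) ** 2

samples, theta_cur = [], 0.0
for _ in range(5000):
    theta_prop = theta_cur + rng.normal(0.0, 0.5)          # random-walk proposal
    log_alpha = (log_ratio(theta_prop) + log_prior(theta_prop)
                 - log_ratio(theta_cur) - log_prior(theta_cur))
    if np.log(rng.uniform()) < log_alpha:                   # Metropolis acceptance step
        theta_cur = theta_prop
    samples.append(theta_cur)

print("approximate posterior mean:", np.mean(samples[1000:]))
```

Because the classifier takes $\epsilon$ as an input, the same trained model can be conditioned on values other than the smallest recorded error, which is how the amortized use mentioned in the abstract would enter this sketch.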

Citations (3)
