
Point Process-based Monte Carlo estimation (1412.6368v5)

Published 19 Dec 2014 in cs.CE and stat.CO

Abstract: This paper addresses the issue of estimating the expectation of a real-valued random variable of the form $X = g(\mathbf{U})$ where $g$ is a deterministic function and $\mathbf{U}$ can be a random finite- or infinite-dimensional vector. Using recent results on rare event simulation, we propose a unified framework for dealing with both probability and mean estimation for such random variables, \emph{i.e.} linking algorithms such as the Tootsie Pop Algorithm (TPA) or the Last Particle Algorithm with nested sampling. In particular, it extends nested sampling as follows: first, the random variable $X$ no longer needs to be bounded; second, it gives the principle of an ideal estimator with an infinite number of terms that is unbiased and always better than a classical Monte Carlo estimator; notably, it has a finite variance as soon as there exists $k > 1$ such that $\operatorname{E}[X^k] < \infty$. Moreover, we address the issue of nested sampling termination and show that a random truncation of the sum can preserve unbiasedness while increasing the variance by a factor of at most 2 compared to the ideal case. We also build an unbiased estimator with a fixed computational budget which supports a Central Limit Theorem, and we discuss a parallel implementation of nested sampling, which can dramatically reduce its computational cost. Finally, we extensively study the case where $X$ is heavy-tailed.
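The idea behind linking the Last Particle Algorithm with mean estimation is easier to see on a small example. The sketch below is not the paper's implementation, only a minimal illustration under strong assumptions: it estimates $\operatorname{E}[X]$ for a nonnegative $X$ via the identity $\operatorname{E}[X] = \int_0^\infty \operatorname{P}(X > x)\,dx$, repeatedly recording the smallest of $N$ particles as a level and resampling it conditionally on exceeding that level, so that $\operatorname{P}(X > L_m) \approx (1 - 1/N)^m$. The target is a toy $X \sim \mathrm{Exp}(1)$, whose conditional resampling is exact by memorylessness; for a general $X = g(\mathbf{U})$ this step would require an MCMC move. The deterministic stop after a fixed number of iterations is the naive termination rule that the paper's random truncation improves on, and the function name `last_particle_mean_estimate` is ours, not the paper's.

```python
import random

def last_particle_mean_estimate(n_particles=100, n_iters=2000, seed=0):
    """Toy Last-Particle-style sketch of nested-sampling mean estimation.

    Estimates E[X] for a nonnegative X through the identity
    E[X] = integral_0^inf P(X > x) dx, with P(X > L_m) ~ (1 - 1/N)^m
    at the m-th recorded level L_m.

    Assumptions: X ~ Exp(1) (true mean 1), so the conditional draw
    X | X > L is exactly L + Exp(1) by memorylessness.
    """
    rng = random.Random(seed)
    particles = [rng.expovariate(1.0) for _ in range(n_particles)]
    shrink = 1.0 - 1.0 / n_particles   # one-level survival-probability factor
    estimate = 0.0
    prev_level = 0.0
    weight = 1.0                       # current estimate of P(X > prev_level)
    for _ in range(n_iters):
        # the lowest particle defines the next level
        i = min(range(n_particles), key=particles.__getitem__)
        level = particles[i]
        # Riemann-sum increment of integral P(X > x) dx on [prev_level, level)
        estimate += weight * (level - prev_level)
        prev_level = level
        weight *= shrink
        # resample the lowest particle conditionally on exceeding the level
        particles[i] = level + rng.expovariate(1.0)
    return estimate

print(last_particle_mean_estimate())   # close to 1.0 for Exp(1)
```

With $N = 100$ particles, the mass remaining after 2000 levels carries weight $(1 - 1/N)^{2000} \approx 2 \times 10^{-9}$, so deterministic truncation is harmless for this light-tailed toy target; the heavy-tailed case studied at length in the paper is precisely where such a fixed cutoff breaks down and random truncation becomes valuable.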
