
Multilevel Quasi-Monte Carlo for Optimization under Uncertainty

(arXiv:2109.14367)

Published Sep 29, 2021 in math.NA, cs.NA, and math.OC

Abstract

This paper considers the problem of optimizing the average tracking error for an elliptic partial differential equation with an uncertain lognormal diffusion coefficient. In particular, the application of the multilevel quasi-Monte Carlo (MLQMC) method to the estimation of the gradient is investigated, with a circulant embedding (CE) method used to sample the stochastic field. A novel regularity analysis of the adjoint variable is essential for the MLQMC estimation of the gradient in combination with the samples generated using the CE method. A rigorous cost and error analysis shows that a randomly shifted quasi-Monte Carlo method leads to a faster rate of decay in the root mean square error of the gradient than the ordinary Monte Carlo method, while considering multiple levels substantially reduces the computational effort. Numerical experiments confirm the improved rate of convergence and show that the MLQMC method outperforms the multilevel Monte Carlo method and the single-level quasi-Monte Carlo method.
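To make the abstract's two main ingredients concrete, the sketch below combines a multilevel telescoping sum with a randomly shifted rank-1 lattice rule, the basic structure of an MLQMC estimator. This is a minimal illustration on a toy integrand, not the paper's PDE setting: the function `g`, the generating vector `z`, and the level counts are all illustrative assumptions, and the random shifts make each per-level lattice estimate unbiased.

```python
import numpy as np

def shifted_lattice_points(n, z, shift):
    # Randomly shifted rank-1 lattice rule: points {k*z/n + shift} mod 1,
    # for k = 0, ..., n-1 (n points in [0,1)^d).
    k = np.arange(n)[:, None]
    return np.mod(k * z[None, :] / n + shift[None, :], 1.0)

def mlqmc_estimate(L, g, n_per_level, z, n_shifts=8, seed=0):
    """Multilevel QMC estimator of E[g_L] via the telescoping sum
    E[g_L] = E[g_0] + sum_{l=1}^{L} E[g_l - g_{l-1}],
    estimating each term with a randomly shifted lattice rule
    averaged over n_shifts independent shifts."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for l in range(L + 1):
        shift_means = []
        for _ in range(n_shifts):
            shift = rng.random(z.shape[0])
            pts = shifted_lattice_points(n_per_level[l], z, shift)
            # Level-l correction: g_l - g_{l-1} (just g_0 on the coarsest level).
            diff = g(l, pts) - (g(l - 1, pts) if l > 0 else 0.0)
            shift_means.append(diff.mean())
        total += np.mean(shift_means)
    return total

# Toy example: g_l(x) = x1*x2*(1 + 2^{-l}) mimics a level-l discretization,
# so E[g_L] = 0.25*(1 + 2^{-L}). z = (1, 55) with n = 89 is the classical
# two-dimensional Fibonacci lattice (an illustrative choice).
def g(l, x):
    return np.prod(x, axis=1) * (1.0 + 2.0 ** (-l))

est = mlqmc_estimate(L=2, g=g, n_per_level=[89, 89, 89], z=np.array([1, 55]))
```

In the paper's setting, `g_l` would be the gradient contribution computed from the state and adjoint solutions on mesh level `l`, with the lognormal coefficient sampled via circulant embedding; the telescoping structure and the random shifting are what deliver the cost reduction and the improved error decay, respectively.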
