
Abstract

Bayesian optimization (BO) is a powerful paradigm for derivative-free global optimization of a black-box objective function (BOF) that is expensive to evaluate. However, the overhead of BO can still be prohibitive for problems with highly expensive function evaluations. In this paper, we investigate how to reduce the required number of function evaluations for BO without compromising solution quality. We explore the idea of posterior regularization to harness low-fidelity (LF) data within the Gaussian process upper confidence bound (GP-UCB) framework. The LF data can arise from previous evaluations of an LF approximation of the BOF or of a related optimization task. An extra GP model, called LF-GP, is trained to fit the LF data. We develop an operator, termed dynamic weighted product of experts (DW-POE) fusion, which induces the regularization on the posterior of the BOF. The impact of the LF-GP model on the resulting regularized posterior is adaptively adjusted via a Bayesian formalism. Extensive experimental results on benchmark BOF optimization tasks demonstrate the superior performance of the proposed algorithm over state-of-the-art methods.
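For intuition, here is a minimal Python sketch (not the authors' implementation) of the core idea: fuse a high-fidelity GP posterior with an LF-GP posterior via a weighted product of experts, then score candidates with a UCB rule. The fixed weight `w_lf`, the `beta` parameter, and the helper names `poe_fuse` and `ucb_scores` are illustrative assumptions; the paper's DW-POE instead adapts the LF weight via a Bayesian formalism.

```python
# Hedged sketch of weighted product-of-experts fusion + GP-UCB selection.
# The constant LF weight below is a hypothetical stand-in for the paper's
# dynamically (Bayesian-)adjusted DW-POE weights.
import numpy as np

def poe_fuse(mu_hf, var_hf, mu_lf, var_lf, w_hf=1.0, w_lf=0.5):
    """Weighted product of two Gaussian experts, pointwise per candidate.

    Raising a Gaussian N(mu, var) to a power w scales its precision by w,
    so the (renormalized) fused posterior is again Gaussian.
    """
    prec = w_hf / var_hf + w_lf / var_lf            # fused precision
    var = 1.0 / prec
    mu = var * (w_hf * mu_hf / var_hf + w_lf * mu_lf / var_lf)
    return mu, var

def ucb_scores(mu, var, beta=2.0):
    """GP-UCB acquisition: posterior mean plus an exploration bonus."""
    return mu + beta * np.sqrt(var)

# Toy usage: two GP posteriors evaluated on a grid of candidate points.
x = np.linspace(0.0, 1.0, 5)
mu_hf, var_hf = np.sin(3 * x), np.full_like(x, 0.20)        # BOF GP
mu_lf, var_lf = np.sin(3 * x + 0.3), np.full_like(x, 0.05)  # LF-GP
mu, var = poe_fuse(mu_hf, var_hf, mu_lf, var_lf)
x_next = x[np.argmax(ucb_scores(mu, var))]   # next point to evaluate
```

Because a product of Gaussians is Gaussian, the fused posterior stays in closed form and slots directly into the standard GP-UCB loop; the LF expert's low variance pulls the fused mean toward the cheap data wherever the two models agree.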
