Local Nonstationarity for Efficient Bayesian Optimization (1506.02080v1)
Abstract: Bayesian optimization has been shown to be a fundamental global optimization algorithm in many applications, ranging from automatic machine learning and robotics to reinforcement learning, experimental design, and simulations. The most popular and effective Bayesian optimization methods rely on a surrogate model in the form of a Gaussian process, due to its flexibility in representing a prior over functions. However, many algorithms and setups rely on the stationarity assumption of the Gaussian process. In this paper, we present a novel nonstationary strategy for Bayesian optimization that outperforms the state of the art in Bayesian optimization on both stationary and nonstationary problems.
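To make the setup concrete, below is a minimal sketch of a generic Gaussian-process-based Bayesian optimization loop with a stationary Matérn kernel and an expected-improvement acquisition. This is the standard baseline the abstract refers to, not the paper's nonstationary strategy; the 1-D test objective, bounds, and iteration counts are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def expected_improvement(X_cand, gp, y_best, xi=0.01):
    """EI acquisition for minimization under a GP surrogate."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)  # avoid division by zero
    improvement = y_best - mu - xi
    z = improvement / sigma
    return improvement * norm.cdf(z) + sigma * norm.pdf(z)


def objective(x):
    # Illustrative 1-D test function (assumption, not from the paper).
    return np.sin(3.0 * x) + 0.3 * x ** 2


rng = np.random.default_rng(0)
bounds = (-2.0, 2.0)

# Initial design: a few random evaluations of the objective.
X = rng.uniform(*bounds, size=(4, 1))
y = objective(X).ravel()

for _ in range(20):  # Bayesian optimization iterations
    # Fit a stationary GP surrogate to the observations so far.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)

    # Maximize EI over a dense candidate grid (simple 1-D strategy).
    X_cand = np.linspace(*bounds, 1000).reshape(-1, 1)
    ei = expected_improvement(X_cand, gp, y.min())
    x_next = X_cand[np.argmax(ei)].reshape(1, -1)

    # Evaluate the objective at the chosen point and update the data.
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print("best x:", X[np.argmin(y)].ravel(), "best f:", y.min())
```

A nonstationary approach, as proposed in the paper, would replace the single stationary kernel above with a surrogate whose behavior can vary across the input space, which is the limitation of this baseline that the abstract highlights.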