
Abstract

In this paper, we present a methodology for deploying the deterministic policy gradient method, using actor-critic techniques, when the optimal policy is approximated by a parametric optimization problem in which safety is enforced via hard constraints. For continuous input spaces, imposing safety restrictions on the exploration required by the deterministic policy gradient method poses technical difficulties, which we address here. In particular, we investigate policy approximations based on robust Nonlinear Model Predictive Control (NMPC), where safety can be treated explicitly. For the sake of brevity, we detail the construction of the safe scheme in the robust linear MPC context only; the extension to the nonlinear case is possible but more complex. We additionally present a technique to maintain system safety throughout the learning process in the context of robust linear MPC. This paper has a companion paper treating the stochastic policy gradient case.
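To make the setup concrete, the sketch below (not from the paper; every model, bound, and parameter here is an illustrative assumption) casts a linear MPC problem with hard state and input constraints as a parametric policy pi_theta, and estimates the policy sensitivity d pi_theta / d theta that the deterministic policy gradient, nabla_theta J = E[ nabla_theta pi_theta(s) * nabla_a Q(s, a) |_{a = pi_theta(s)} ], needs for its actor update. It uses the cvxpy modeling library and crude finite differences as a stand-in for the exact parametric-sensitivity computation such a scheme requires.

```python
import numpy as np
import cvxpy as cp

# Hypothetical double-integrator dynamics and horizon (illustrative, not from the paper).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
N = 10  # prediction horizon

def mpc_policy(x0, theta):
    """Parametric linear-MPC policy pi_theta(x0).

    theta = (state weight, input weight), assumed positive so the QP stays convex.
    Safety enters as hard constraints on states and inputs; the policy returns
    only the first input of the optimal sequence (receding horizon)."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    constr = [x[:, 0] == x0]
    for k in range(N):
        cost += theta[0] * cp.sum_squares(x[:, k]) + theta[1] * cp.sum_squares(u[:, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                   cp.abs(u[:, k]) <= 1.0,        # hard input bound (safety)
                   cp.abs(x[0, k + 1]) <= 2.0]    # hard state bound (safety)
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u.value[:, 0]

def policy_sensitivity(x0, theta, eps=1e-4):
    """Finite-difference stand-in for the exact sensitivity d pi_theta / d theta
    of the parametric QP solution."""
    u0 = mpc_policy(x0, theta)
    J = np.zeros((u0.size, theta.size))
    for i in range(theta.size):
        t = theta.copy()
        t[i] += eps
        J[:, i] = (mpc_policy(x0, t) - u0) / eps
    return J

# Usage: one actor step; grad_a_Q would come from a learned critic (dummy value here).
theta = np.array([1.0, 0.1])
x0 = np.array([0.5, 0.0])
grad_a_Q = np.array([-0.3])  # placeholder critic gradient
theta += 1e-2 * policy_sensitivity(x0, theta).T @ grad_a_Q
```

A full implementation would also have to keep the exploratory perturbations applied during learning inside the constraint set, which is precisely the technical difficulty the abstract refers to for continuous input spaces.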
