Abstract

We present differentiable predictive control (DPC), a deep learning-based alternative to explicit model predictive control (MPC) for unknown nonlinear systems. In the DPC framework, a neural state-space model is first learned from time-series measurements of the system dynamics. A neural control policy is then optimized via stochastic gradient descent by differentiating the MPC loss function through the closed-loop system dynamics model. The proposed DPC method learns model-based control policies subject to state and input constraints, while supporting time-varying references and constraints. In an embedded implementation on a Raspberry Pi platform, we experimentally demonstrate that constrained control policies can be trained purely from measurements of an unknown nonlinear system. We compare the control performance of DPC against explicit MPC and report efficiency gains in online computational demands, memory requirements, policy complexity, and construction time. In particular, we show that our method scales linearly, in contrast to the exponential scaling of explicit MPC solved via multiparametric programming.
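The core idea above can be illustrated with a minimal sketch: roll a parametric policy through a learned dynamics model over a prediction horizon, accumulate an MPC-style loss (tracking cost, input cost, and soft penalties for state/input constraint violations), and update the policy parameters by gradient descent on that loss. This is not the paper's implementation: a fixed linear model stands in for the learned neural state-space model, a linear gain stands in for the neural policy, finite differences stand in for automatic differentiation, and all names (`A`, `B`, `K`, `rollout_loss`, the bounds and weights) are illustrative assumptions.

```python
import numpy as np

# Assumed identified dynamics x+ = A x + B u (stand-in for a neural state-space model).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
N = 20                     # prediction horizon (assumed)
x_max, u_max = 1.5, 2.0    # state / input constraint bounds (assumed)

def rollout_loss(K, x0):
    """MPC-style loss: state cost + input cost + soft constraint penalties,
    evaluated by rolling the policy u = -K x through the model for N steps."""
    x, loss = x0, 0.0
    for _ in range(N):
        u = -K @ x
        loss += x @ x + 0.01 * float(u @ u)
        loss += 10.0 * np.sum(np.maximum(np.abs(x) - x_max, 0.0) ** 2)  # state penalty
        loss += 10.0 * np.sum(np.maximum(np.abs(u) - u_max, 0.0) ** 2)  # input penalty
        x = A @ x + B @ u
    return loss

def grad(K, x0, eps=1e-5):
    """Central finite-difference gradient of the rollout loss w.r.t. the policy
    parameters (a stand-in for differentiating through the closed-loop model)."""
    g = np.zeros_like(K)
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            Kp, Km = K.copy(), K.copy()
            Kp[i, j] += eps
            Km[i, j] -= eps
            g[i, j] = (rollout_loss(Kp, x0) - rollout_loss(Km, x0)) / (2 * eps)
    return g

# "Train" the policy by normalized stochastic gradient descent over sampled
# initial conditions, mimicking the offline DPC policy-optimization step.
rng = np.random.default_rng(0)
K = np.zeros((1, 2))
for _ in range(300):
    x0 = rng.uniform(-1.0, 1.0, size=2)
    g = grad(K, x0)
    K -= 0.05 * g / (np.linalg.norm(g) + 1e-8)
```

After training, the learned gain should achieve a lower rollout loss than the untrained (zero) policy on held-out initial states, which is the closed-loop objective the abstract describes being optimized.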
