
Scalable Synthesis of Verified Controllers in Deep Reinforcement Learning

(arXiv:2104.10219)
Published Apr 20, 2021 in eess.SY, cs.LG, and cs.SY

Abstract

There has been significant recent interest in devising verification techniques for learning-enabled controllers (LECs) that manage safety-critical systems. Given the opacity and lack of interpretability of the neural policies that govern the behavior of such controllers, many existing approaches enforce safety properties through a shield, a dynamic monitoring-and-repairing mechanism that ensures an LEC does not emit actions that would violate desired safety conditions. These methods, however, have been shown to have significant scalability limitations because verification costs grow as problem dimensionality and objective complexity increase. In this paper, we propose a new automated verification pipeline capable of synthesizing high-quality safe controllers even when the problem domain involves hundreds of dimensions, or when the desired objective involves stochastic perturbations, liveness considerations, and other complex non-functional properties. Our key insight is to separate safety verification from neural controller training and to use pre-computed verified safety shields to constrain the training process. Experimental results over a range of high-dimensional benchmarks demonstrate the effectiveness of our approach on stochastic linear time-invariant and time-varying systems.
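To make the shielding idea concrete, below is a minimal sketch of a monitor-and-repair shield for a discrete-time linear system x' = Ax + Bu with a box-shaped safe set and a verified backup controller. This is an illustration of the general technique the abstract describes, not the paper's actual implementation; the names (Shield, safe_low, safe_high, backup_action) are hypothetical.

```python
import numpy as np

class Shield:
    """Monitor-and-repair shield for a linear system x' = Ax + Bu.

    A sketch under simplifying assumptions: the verified safe region is
    an axis-aligned box, and `backup_action` is a pre-verified fallback
    controller that is known to keep the system inside that box.
    """

    def __init__(self, A, B, safe_low, safe_high, backup_action):
        self.A, self.B = A, B
        self.safe_low, self.safe_high = safe_low, safe_high
        self.backup_action = backup_action

    def is_safe(self, x, u):
        # Monitor: check that the one-step successor state stays
        # inside the pre-computed safe box.
        x_next = self.A @ x + self.B @ u
        return np.all(x_next >= self.safe_low) and np.all(x_next <= self.safe_high)

    def filter(self, x, u):
        # Repair: pass the policy's action through unchanged if it is
        # safe; otherwise substitute the verified backup action.
        return u if self.is_safe(x, u) else self.backup_action(x)
```

In a training loop, the shield sits between the neural policy and the environment (e.g., `u = shield.filter(x, policy(x))` before each environment step), so exploration never leaves the verified safe set; because the shield is pre-computed and verified offline, the RL training itself needs no further verification.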
