
Abstract

Learning-based control with safety guarantees usually requires real-time safety certification and modification of possibly unsafe learning-based policies. The control barrier function (CBF) method uses a safety filter containing a constrained optimization problem to produce safe policies. However, finding a valid CBF for a general nonlinear system requires a complex function parameterization, which, in general, makes the policy optimization problem difficult to solve in real time. For nonlinear systems with nonlinear state constraints, this paper proposes the novel concept of state-action CBFs, which not only characterize safety at each state but also evaluate the control inputs taken at each state. In contrast to CBFs, state-action CBFs enable a flexible parameterization, resulting in a safety filter that involves a convex quadratic optimization problem. This, in turn, significantly alleviates the online computational burden. To synthesize state-action CBFs, we propose a learning-based approach exploiting Hamilton-Jacobi reachability. The effect of learning errors on the effectiveness of state-action CBFs is addressed by constraint tightening and by introducing a new concept called contractive CBFs. These contributions ensure formal safety guarantees for learned CBFs and control policies, enhancing the applicability of learning-based control in real-time scenarios. Simulation results on an inverted pendulum with elastic walls validate the proposed CBFs in terms of constraint satisfaction and CPU time.
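
To make the safety-filter idea mentioned in the abstract concrete, the sketch below shows a generic CBF quadratic-program filter for a control-affine system, solved with cvxpy. It is an illustration of the standard CBF-QP concept only, not the paper's state-action formulation; the dynamics f, g, the barrier h, its gradient grad_h, the class-K gain alpha, and the input bound u_max are all assumed placeholders.

```python
# Minimal sketch of a CBF-QP safety filter for x_dot = f(x) + g(x) u.
# The filter minimally modifies a nominal (possibly unsafe) input u_nom so that
# the CBF condition  grad_h(x) @ (f(x) + g(x) u) >= -alpha * h(x)  holds.
# All symbols here are illustrative assumptions, not the paper's construction.
import numpy as np
import cvxpy as cp

def cbf_qp_filter(x, u_nom, f, g, h, grad_h, alpha=1.0, u_max=2.0):
    """Solve  min_u ||u - u_nom||^2
       s.t.   Lf h(x) + Lg h(x) u >= -alpha * h(x),  |u|_inf <= u_max.
       The problem is a convex QP because the dynamics are affine in u."""
    u = cp.Variable(u_nom.shape[0])
    lie_f = grad_h(x) @ f(x)      # Lf h(x), a scalar
    lie_g = grad_h(x) @ g(x)      # Lg h(x), a row vector
    constraints = [lie_f + lie_g @ u >= -alpha * h(x),
                   cp.norm(u, "inf") <= u_max]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)), constraints)
    prob.solve()
    return u.value if prob.status == cp.OPTIMAL else None

# Example (single integrator x_dot = u): keep x below 1, i.e. h(x) = 1 - x >= 0.
f = lambda x: np.array([0.0])
g = lambda x: np.array([[1.0]])
h = lambda x: 1.0 - x[0]
grad_h = lambda x: np.array([-1.0])
u_safe = cbf_qp_filter(np.array([0.9]), np.array([1.0]), f, g, h, grad_h)
# u_safe is roughly 0.1: the nominal input 1.0 is clipped so the state stays safe.
```

The paper's point is that a state-action CBF lets this kind of filter remain a convex QP even with flexible (e.g., learned) parameterizations, whereas a standard CBF with a complex parameterization generally does not.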
