Model-Based Reinforcement Learning for Approximate Optimal Control with Temporal Logic Specifications
(2101.07156)

Abstract
In this paper we study the problem of synthesizing optimal control policies for uncertain continuous-time nonlinear systems from syntactically co-safe linear temporal logic (scLTL) formulas. We formulate this problem as a sequence of reach-avoid optimal control sub-problems. We show that the resulting hybrid optimal control policy guarantees the satisfaction of a given scLTL formula by constructing a barrier certificate. Since solving each optimal control problem may be computationally intractable, we take a learning-based approach to approximately solve this sequence of optimal control problems online without requiring full knowledge of the system dynamics. Using Lyapunov-based tools, we develop sufficient conditions under which our approximate solution maintains correctness. Finally, we demonstrate the efficacy of the developed method with a numerical example.
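To make the decomposition concrete, the sketch below is a hypothetical toy illustration (not code from the paper, whose setting is continuous-time control): a co-safe LTL formula can be translated to a deterministic finite automaton (DFA), and each transition toward acceptance induces one reach-avoid sub-problem. The formula, state names, and helper functions here are illustrative assumptions.

```python
# Hypothetical sketch: DFA for the scLTL formula F(a & F b),
# i.e. "first reach a region where 'a' holds, then one where 'b' holds".
# Each DFA step toward the accepting state q2 corresponds to one
# reach-avoid optimal control sub-problem in the paper's framework.
DFA = {
    ("q0", frozenset()): "q0",
    ("q0", frozenset({"a"})): "q1",
    ("q1", frozenset()): "q1",
    ("q1", frozenset({"b"})): "q2",  # q2 is the accepting state
}

def run_dfa(word):
    """Advance the DFA over a finite word of label sets; a co-safe
    formula is satisfied by any good finite prefix reaching q2."""
    q = "q0"
    for labels in word:
        q = DFA.get((q, frozenset(labels)), q)  # stay put on other labels
        if q == "q2":
            return True
    return False

def reach_targets():
    """List, in order, the propositions the controller must reach:
    the induced sequence of reach-avoid sub-goals."""
    return ["a", "b"]

# A trajectory whose labels visit 'a' and then 'b' satisfies the formula;
# seeing 'b' before 'a' (and never 'b' again) does not.
print(run_dfa([set(), {"a"}, set(), {"b"}]))  # True
print(run_dfa([{"b"}, {"a"}]))               # False
```

In the paper's continuous setting, each sub-goal above would correspond to solving (or approximating online) one reach-avoid optimal control problem, with a barrier certificate ensuring the avoid constraints hold.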