
Deep Learning of Delay-Compensated Backstepping for Reaction-Diffusion PDEs

(arXiv:2308.10501)
Published Aug 21, 2023 in math.AP, cs.LG, and math.OC

Abstract

Deep neural networks that approximate nonlinear function-to-function mappings, i.e., operators, known as DeepONets, have been demonstrated in recent articles to be capable of encoding entire PDE control methodologies, such as backstepping, so that, for each new functional coefficient of a PDE plant, the backstepping gains are obtained through a simple function evaluation. These initial results have been limited to single PDEs from a given class, approximating the solutions of only single-PDE operators for the gain kernels. In this paper we expand this framework to the approximation of multiple (cascaded) nonlinear operators. Multiple operators arise in the control of PDE systems from distinct PDE classes, such as the system in this paper: a reaction-diffusion plant, which is a parabolic PDE, with input delay, which is a hyperbolic PDE. The DeepONet-approximated nonlinear operator is a cascade/composition of the operators defined by one hyperbolic PDE of the Goursat form and one parabolic PDE on a rectangle, both of which are bilinear in their input functions and not explicitly solvable. For the delay-compensated PDE backstepping controller, which employs the learned control operator, namely, the approximated gain kernel, we guarantee exponential stability in the $L^2$ norm of the plant state and the $H^1$ norm of the input-delay state. Simulations illustrate the contributed theory.
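
To make the setting concrete, here is a minimal sketch of a standard delay-compensated reaction-diffusion setup of the kind the abstract describes; the plant below, its boundary conditions, and the kernel equations are assumptions drawn from the classical backstepping literature, and the paper's exact formulation may differ:

$$
u_t(x,t) = u_{xx}(x,t) + \lambda(x)\,u(x,t), \quad x \in (0,1), \qquad u(0,t) = 0, \quad u(1,t) = U(t-D),
$$

where $\lambda(x)$ is the functional (reaction) coefficient and $D > 0$ is the input delay, modeled as a transport (hyperbolic) PDE cascaded with the parabolic plant. The first backstepping gain kernel $k(x,y)$ then satisfies a Goursat-form hyperbolic PDE on the triangle $0 \le y \le x \le 1$,

$$
k_{xx}(x,y) - k_{yy}(x,y) = \lambda(y)\,k(x,y), \qquad k(x,0) = 0, \qquad k(x,x) = -\tfrac{1}{2}\int_0^x \lambda(y)\,dy,
$$

which is bilinear in $(\lambda, k)$ and has no closed-form solution for general $\lambda$; a second, delay-compensating kernel solves a parabolic PDE on a rectangle driven by $k$. The composition of these two kernel maps is the cascaded operator that the DeepONet is trained to approximate.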

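The mapping from the functional coefficient to the gain kernel is a function-to-function operator, which is exactly what a DeepONet represents: a branch network encodes the sampled input function, a trunk network encodes the evaluation point, and their inner product gives the operator's output value. The sketch below is a generic DeepONet in PyTorch, not the authors' architecture; the network sizes, the grid of $m$ samples of $\lambda$, and the training setup are illustrative assumptions.

```python
# Minimal DeepONet sketch (illustrative, not the paper's architecture):
# maps a sampled coefficient lambda(.) and a point (x, y) to an
# approximate backstepping gain-kernel value k_hat(x, y).
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, m=100, p=64, width=128):
        super().__init__()
        # Branch net: encodes the input function lambda sampled at m grid points.
        self.branch = nn.Sequential(
            nn.Linear(m, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, p),
        )
        # Trunk net: encodes the kernel evaluation coordinates (x, y).
        self.trunk = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, p), nn.Tanh(),
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, lam_samples, xy):
        # lam_samples: (batch, m) samples of lambda on a fixed grid
        # xy:          (batch, 2) evaluation points of the kernel
        b = self.branch(lam_samples)   # (batch, p)
        t = self.trunk(xy)             # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias  # k_hat(x, y)

# Usage sketch: the model would be trained offline on pairs
# (lambda samples, (x, y)) -> k(x, y) obtained by numerically solving the
# kernel equations, e.g. with an MSE loss; at run time the learned kernel
# is evaluated and plugged into the delay-compensated backstepping controller.
model = DeepONet()
lam = torch.rand(32, 100)  # hypothetical batch of sampled coefficients
xy = torch.rand(32, 2)     # hypothetical kernel evaluation points
k_hat = model(lam, xy)     # (32, 1) predicted kernel values
```

This follows the standard branch-trunk DeepONet construction; the paper's stability guarantee concerns the closed-loop system when the exact gain kernel in the controller is replaced by such a learned approximation.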