Improved Initialization of State-Space Artificial Neural Networks

(2103.14516)
Published Mar 26, 2021 in cs.LG , cs.SY , and eess.SY

Abstract

The identification of black-box nonlinear state-space models requires a flexible representation of the state and output equations. Artificial neural networks have proven to provide such a representation. However, as in many identification problems, a nonlinear optimization problem needs to be solved to obtain the model parameters (layer weights and biases). A well-chosen initialization of these model parameters can often prevent the nonlinear optimization algorithm from converging to a poorly performing local minimum of the considered cost function. This paper introduces an improved initialization approach for nonlinear state-space models represented as a recurrent artificial neural network and emphasizes the importance of including an explicit linear term in the model structure. Some of the neural network weights are initialized starting from a linear approximation of the nonlinear system, while others are initialized using random values or zeros. The effectiveness of the proposed initialization approach over previously proposed methods is illustrated on two benchmark examples.
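The core idea, an explicit linear term initialized from a linear approximation of the system, with the nonlinear correction initialized to contribute nothing, can be sketched as follows. This is a minimal illustration, not the paper's exact parameterization: the function and variable names (`init_ss_ann`, `step`, `Wf`, `Wg`) are hypothetical, and the linear model (A, B, C, D) is assumed to come from some prior linear identification step.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_ss_ann(A, B, C, D, n_hidden=16):
    """Sketch of the initialization: hidden-layer weights are small
    random values; the output-layer weights of the nonlinear parts
    are zeros, so the model initially reduces to the linear model."""
    nx, nu = B.shape
    ny = C.shape[0]
    # Hidden layer acting on [x; u]: random initialization.
    Wx = rng.normal(scale=0.1, size=(n_hidden, nx + nu))
    bx = np.zeros(n_hidden)
    # Nonlinear corrections to the state and output equations:
    # zero-initialized, so they vanish at the start of training.
    Wf = np.zeros((nx, n_hidden))
    Wg = np.zeros((ny, n_hidden))
    return dict(A=A, B=B, C=C, D=D, Wx=Wx, bx=bx, Wf=Wf, Wg=Wg)

def step(p, x, u):
    """One step of the state-space ANN: explicit linear term
    plus a neural-network correction."""
    z = np.tanh(p["Wx"] @ np.concatenate([x, u]) + p["bx"])
    x_next = p["A"] @ x + p["B"] @ u + p["Wf"] @ z
    y = p["C"] @ x + p["D"] @ u + p["Wg"] @ z
    return x_next, y
```

At initialization, simulating this model reproduces the linear approximation exactly; gradient-based training then adjusts `Wf`, `Wg`, and the hidden-layer weights to capture the nonlinear residual.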
