
Abstract

The Echo State Network (ESN) is a distinctive kind of recurrent neural network, built upon a large, sparse, randomly connected hidden structure called the reservoir. ESNs have proven successful on several non-linear problems such as prediction and classification. Thanks to its rich dynamics, an ESN can be used as an Autoencoder (AE) to extract features from the original data representation. ESNs are used not only in their basic single-layer form but also in the recently proposed Multi-Layer (ML) architecture. Properly setting the architecture and training parameters of an ESN (basic or ML) is a crucial and labor-intensive task. Typically, a number of parameters (hidden neurons, sparsity rates, input scaling) are altered manually until the learning error is minimized. This hand-crafted, trial-and-error procedure, however, neither guarantees the best training results nor prevents the network's complexity from growing. In this paper, a hierarchical bi-level evolutionary optimization is proposed to address these issues. The first level performs a multi-objective architecture optimization that maximizes learning accuracy while keeping complexity low: Multi-objective Particle Swarm Optimization (MOPSO) searches the space of ESN structures for a trade-off between decreasing network complexity and increasing accuracy, producing a Pareto front of optimal solutions, i.e., the set of candidates that achieve a compromise between the objectives (learning error and network complexity). At the second level, each solution on this front undergoes a mono-objective weight optimization that further improves the obtained Pareto front. Empirical results confirm the effectiveness of the evolved ESN recurrent AEs (basic and ML) on both noisy and noise-free data.
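To make the described pipeline concrete, the sketch below illustrates the two ingredients the abstract names: building a reservoir from the architecture parameters MOPSO would search over (hidden-neuron count, sparsity rate, input scaling), and evaluating a bi-objective fitness (learning error vs. network complexity) for an ESN autoencoder whose readout reconstructs the input. This is a minimal sketch under stated assumptions, not the authors' implementation: all names and defaults (e.g., the spectral-radius rescaling, the ridge-regression readout, using reservoir size as the complexity measure) are illustrative choices, and the MOPSO loop and the second-level weight optimization are omitted.

```python
import numpy as np

class ESN:
    """Minimal ESN sketch; parameter names and defaults are assumptions."""

    def __init__(self, n_in, n_res, sparsity=0.1, input_scaling=1.0,
                 spectral_radius=0.9, seed=0):
        rng = np.random.default_rng(seed)
        # Input weights scaled by the input-scaling parameter.
        self.W_in = rng.uniform(-input_scaling, input_scaling, (n_res, n_in))
        # Sparse random reservoir: keep each connection with prob. `sparsity`.
        W = rng.uniform(-1.0, 1.0, (n_res, n_res))
        W[rng.random((n_res, n_res)) > sparsity] = 0.0
        # Rescale to the target spectral radius (common echo-state heuristic).
        W *= spectral_radius / max(1e-12, np.max(np.abs(np.linalg.eigvals(W))))
        self.W = W

    def states(self, U):
        """Collect reservoir states for an input sequence U of shape (T, n_in)."""
        x = np.zeros(self.W.shape[0])
        X = []
        for u in U:
            x = np.tanh(self.W_in @ u + self.W @ x)
            X.append(x.copy())
        return np.asarray(X)

def fitness(params, U, ridge=1e-6):
    """Bi-objective fitness a MOPSO candidate might receive:
    (reconstruction error, network complexity). For an autoencoder the
    readout target is the input itself. Hypothetical helper, for illustration."""
    n_res, sparsity, input_scaling = params
    esn = ESN(U.shape[1], int(n_res), sparsity, input_scaling)
    X = esn.states(U)
    # Ridge-regression readout mapping reservoir states back to the inputs.
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ U).T
    error = np.mean((X @ W_out.T - U) ** 2)   # objective 1: learning error
    complexity = int(n_res)                   # objective 2: network size
    return error, complexity

# Example: evaluate one candidate architecture on a toy signal.
U = np.sin(np.linspace(0, 8 * np.pi, 400))[:, None]
print(fitness((100, 0.1, 0.5), U))
```

In this reading, MOPSO would evolve tuples like `(n_res, sparsity, input_scaling)` against the two returned objectives, and the second level would then refine the weights of each non-dominated candidate with a mono-objective search.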
