
Abstract

The Extreme Learning Machine (ELM) is a learning algorithm for single-hidden-layer feedforward neural networks (SLFNs) that learns quickly and effectively. During training, the ELM assigns the input weights and biases randomly and never updates them. Although the network performs well, these random input-layer weights can reduce the algorithm's effectiveness and hurt its performance. We therefore propose a new approach, called RBM-ELM, that uses a restricted Boltzmann machine (RBM) to determine the input weights and biases for the ELM. We compare RBM-ELM with a well-known approach to improving the ELM and with a state-of-the-art algorithm for selecting the ELM's weights. The results show that RBM-ELM outperforms both methodologies and achieves better performance than the basic ELM.
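The basic ELM training step the abstract describes (random, fixed input weights and biases; output weights solved in closed form via the Moore-Penrose pseudoinverse) can be sketched as follows. This is a minimal toy illustration with assumed function names (`elm_fit`, `elm_predict`) and a sigmoid hidden layer, not the paper's RBM-ELM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, T, n_hidden=30):
    """Train a single-hidden-layer ELM: input weights and biases are
    drawn randomly and never updated; only the output weights are
    solved, via least squares (Moore-Penrose pseudoinverse)."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ T                     # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression: fit y = sin(x) on [-3, 3]
X = np.linspace(-3, 3, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_fit(X, T, n_hidden=30)
mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
```

Because only `beta` is learned, training reduces to one linear solve, which is why the ELM is fast; the paper's contribution is replacing the random `W` and `b` with values obtained from a pretrained RBM.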
