Abstract

Gradient Descent (GD) approximators often fail in solution spaces that contain convex regions at multiple scales, as in subspace learning and neural network training. One remedy is to run GD multiple times from different randomized initial states and keep the best solution across runs; however, this strategy has proven impractical in many cases. Even swarm-based optimizers such as Particle Swarm Optimization (PSO) and the Imperialistic Competitive Algorithm (ICA), which are commonly used to initialize GD, fail to find optimal solutions in some applications. In this paper, swarm-based optimizers such as ICA and PSO are modified within a new optimization framework to improve GD performance in applications whose objective landscapes contain a large number of convex local regions at multiple scales. The performance of the proposed method is analyzed on a nonlinear subspace-filtering objective function over EEG data. The proposed metaheuristic outperforms commonly used baseline optimizers as GD initializers in both EEG classification accuracy and EEG loss-function fitness. The optimizers are also compared on several CEC 2014 benchmark functions, where our method again outperforms the other algorithms.
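As a rough illustration of the baseline ideas the abstract mentions (not the paper's actual implementation), the Python sketch below contrasts multi-start GD from random initial states with GD started from a point proposed by a bare-bones PSO initializer. The objective (`rastrigin`), the numerical gradient, and all hyperparameters are stand-in assumptions; the paper's real objective is a nonlinear EEG subspace-filtering loss.

```python
import numpy as np

def rastrigin(x):
    # Stand-in multimodal objective with many convex localities.
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def numerical_grad(f, x, eps=1e-6):
    # Central-difference gradient, used only to keep the sketch self-contained.
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

def gradient_descent(f, x0, lr=0.01, steps=200):
    x = x0.copy()
    for _ in range(steps):
        x -= lr * numerical_grad(f, x)
    return x

def pso_initializer(f, dim, n_particles=20, iters=50, bounds=(-5.12, 5.12), seed=0):
    # Bare-bones PSO whose only job is to propose a promising GD starting point.
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

if __name__ == "__main__":
    dim = 5
    rng = np.random.default_rng(1)

    # Baseline 1: multi-start GD from random initial states, keep the best run.
    random_starts = rng.uniform(-5.12, 5.12, (10, dim))
    best_random = min(
        (gradient_descent(rastrigin, x0) for x0 in random_starts), key=rastrigin
    )

    # Baseline 2: GD initialized from a swarm-proposed point.
    best_pso = gradient_descent(rastrigin, pso_initializer(rastrigin, dim))

    print("multi-start GD loss   :", rastrigin(best_random))
    print("PSO-initialized GD loss:", rastrigin(best_pso))
```

On a multimodal surface such as this, both baselines can still stall in poor local basins, which is the failure mode the paper's modified swarm framework is designed to mitigate.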
