Evolutionary bagging for ensemble learning (2208.02400v3)

Published 4 Aug 2022 in cs.NE and cs.AI

Abstract: Ensemble learning has gained success in machine learning, with major advantages over other learning methods. Bagging is a prominent ensemble learning method that creates subgroups of data, known as bags, that are trained by individual machine learning methods such as decision trees. Random forest is a prominent example of bagging with additional features in the learning process. Evolutionary algorithms have been prominent for optimisation problems and have also been used for machine learning. They are gradient-free methods that work with a population of candidate solutions and maintain diversity when creating new solutions. In conventional bagged ensemble learning, the bags are created once and their content, in terms of training examples, is fixed over the learning process. In our paper, we propose evolutionary bagged ensemble learning, where we utilise evolutionary algorithms to evolve the content of the bags in order to iteratively enhance the ensemble by providing diversity across the bags. The results show that our evolutionary ensemble bagging method outperforms conventional ensemble methods (bagging and random forests) on several benchmark datasets under certain constraints. We find that evolutionary bagging can inherently sustain a diverse set of bags without a reduction in performance accuracy.
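The core idea, evolving which training examples each bag contains while using ensemble performance as a fitness signal, can be illustrated with a short sketch. The Python code below is a minimal, hypothetical illustration under stated assumptions, not the authors' exact algorithm: the mutation scheme (resampling 10% of one bag), the greedy selection rule, and held-out accuracy as the fitness function are all assumptions made for demonstration.

```python
# Minimal sketch of evolutionary bagging: bags start as bootstrap
# samples and are mutated over generations; a candidate population
# of bags is kept only if ensemble validation accuracy improves.
# This is an illustrative sketch, not the paper's exact method.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

n_bags, bag_size, n_gens = 10, len(X_tr), 20

def fit_ensemble(bags):
    # Train one decision tree per bag of training indices.
    return [DecisionTreeClassifier(random_state=0).fit(X_tr[b], y_tr[b])
            for b in bags]

def ensemble_accuracy(models):
    # Majority vote over binary labels (0/1) on the validation set.
    votes = np.stack([m.predict(X_val) for m in models])
    pred = (votes.mean(axis=0) >= 0.5).astype(int)
    return (pred == y_val).mean()

# Initial population of bags: standard bootstrap samples.
bags = [rng.integers(0, len(X_tr), bag_size) for _ in range(n_bags)]
best = ensemble_accuracy(fit_ensemble(bags))

for gen in range(n_gens):
    # Mutation: replace a random 10% of one bag's examples
    # with freshly sampled training indices.
    cand = [b.copy() for b in bags]
    i = rng.integers(n_bags)
    idx = rng.choice(bag_size, bag_size // 10, replace=False)
    cand[i][idx] = rng.integers(0, len(X_tr), len(idx))
    score = ensemble_accuracy(fit_ensemble(cand))
    if score >= best:  # greedy selection: keep non-worsening mutations
        bags, best = cand, score

print(f"validation accuracy after evolution: {best:.3f}")
```

Unlike conventional bagging, where the bootstrap samples are drawn once and frozen, this loop keeps re-drawing parts of the bags, which is how the evolutionary variant maintains diversity across bags during training.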

Authors (3)
  1. Giang Ngo (3 papers)
  2. Rodney Beard (3 papers)
  3. Rohitash Chandra (64 papers)
Citations (87)
