Large-scale Benchmarking of Metaphor-based Optimization Heuristics (2402.09800v1)

Published 15 Feb 2024 in cs.NE

Abstract: The number of proposed iterative optimization heuristics is growing steadily, and with this growth, there have been many points of discussion within the wider community. One particular criticism that is raised towards many new algorithms is their focus on metaphors used to present the method, rather than emphasizing their potential algorithmic contributions. Several studies into popular metaphor-based algorithms have highlighted these problems, even showcasing algorithms that are functionally equivalent to older existing methods. Unfortunately, this detailed approach is not scalable to the whole set of metaphor-based algorithms. Because of this, we investigate ways in which benchmarking can shed light on these algorithms. To this end, we run a set of 294 algorithm implementations on the BBOB function suite. We investigate how the choice of the budget, the performance measure, or other aspects of experimental design impact the comparison of these algorithms. Our results emphasize why benchmarking is a key step in expanding our understanding of the algorithm space, and what challenges still need to be overcome to fully gauge the potential improvements to the state-of-the-art hiding behind the metaphors.
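The abstract's central experimental question, how the choice of budget and performance measure shapes an algorithm comparison, can be illustrated with a small self-contained sketch. The sphere objective, the two toy heuristics, and all parameter values below are illustrative assumptions, not the 294 implementations or BBOB setup used in the paper; they only demonstrate the fixed-budget versus fixed-target views the study contrasts.

```python
# Minimal sketch (not from the paper): how budget and performance measure
# can change the outcome of a comparison between two simple heuristics.
# Objective, optimizers, and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)


def sphere(x):
    """Simple continuous test function (BBOB f1 is a shifted variant)."""
    return float(np.sum(x ** 2))


def random_search(budget, dim=5):
    """Best-so-far trajectory of pure random search in [-5, 5]^dim."""
    best, history = np.inf, []
    for _ in range(budget):
        best = min(best, sphere(rng.uniform(-5, 5, dim)))
        history.append(best)
    return history


def one_plus_one_es(budget, dim=5, sigma=1.0):
    """Best-so-far trajectory of a basic (1+1)-ES with crude step-size adaptation."""
    x = rng.uniform(-5, 5, dim)
    f_x, history = sphere(x), []
    for _ in range(budget):
        y = x + sigma * rng.normal(size=dim)
        f_y = sphere(y)
        if f_y <= f_x:
            x, f_x = y, f_y
            sigma *= 1.5  # expand step size on success
        else:
            sigma *= 0.9  # shrink step size on failure
        history.append(f_x)
    return history


budget = 2000
rs = random_search(budget)
es = one_plus_one_es(budget)

# Fixed-budget view: which algorithm is better after a given number of evaluations?
for b in (50, 500, 2000):
    print(f"budget {b:5d}: random search {rs[b - 1]:.3e}  (1+1)-ES {es[b - 1]:.3e}")

# Fixed-target view: how many evaluations does each need to reach a target precision?
target = 1e-2
hit = lambda hist: next((i + 1 for i, v in enumerate(hist) if v <= target), None)
print(f"evaluations to reach {target}: random search {hit(rs)}  (1+1)-ES {hit(es)}")
```

Depending on whether one reads off the fixed-budget values or the fixed-target hitting times, the ranking of the two toy methods can differ, which is the kind of experimental-design sensitivity the paper examines at scale.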
