
Tuning metaheuristics by sequential optimization of regression models (1809.03646v2)

Published 11 Sep 2018 in cs.NE

Abstract: Tuning parameters is an important step in applying metaheuristics to problem classes of interest. In this work we present a tuning framework based on the sequential optimization of perturbed regression models. Besides providing algorithm configurations with good expected performance, the proposed methodology can also provide insights into the relevance of each parameter and their interactions, as well as models of expected algorithm performance for a given problem class, conditional on the parameter values. A test case is presented for the tuning of six parameters of a decomposition-based multiobjective optimization algorithm, in which an instantiation of the proposed framework is compared against the results obtained by the most recent version of the Iterated Racing (Irace) procedure. The results suggest that the proposed approach returns solutions that are as good as those of Irace in terms of mean performance, with the advantage of providing more information on the relevance and effect of each parameter on the expected performance of the algorithm.
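
As a rough sketch of the sequential model-based tuning loop the abstract describes (not the authors' exact procedure: the degree-2 surrogate, the Gaussian perturbation of the predictions, and the placeholder evaluate_config objective are all illustrative assumptions), the idea can be written as:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
DIM = 6  # six tunable parameters, each scaled to [0, 1]

def evaluate_config(x):
    # Hypothetical stand-in for "run the metaheuristic on a sample of
    # problem instances and return the mean performance" (lower = better).
    return float(np.sum((x - 0.3) ** 2) + rng.normal(scale=0.05))

# Initial design: a small random sample of configurations.
X = rng.uniform(size=(20, DIM))
y = np.array([evaluate_config(x) for x in X])

for _ in range(30):
    # Fit a regression model of observed performance vs. parameter values.
    # The degree-2 polynomial is an assumed choice: its coefficients expose
    # main effects and pairwise parameter interactions.
    surrogate = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1e-3))
    surrogate.fit(X, y)

    # "Perturbed" optimization of the model: add noise to the predictions
    # so the search keeps exploring, then take the best candidate.
    candidates = rng.uniform(size=(500, DIM))
    pred = surrogate.predict(candidates)
    pred += rng.normal(scale=0.1 * pred.std(), size=pred.shape)
    x_next = candidates[np.argmin(pred)]

    # Evaluate the suggested configuration and grow the training set.
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate_config(x_next))

print("best configuration found:", np.round(X[np.argmin(y)], 3))
```

In a sketch like this, inspecting the fitted polynomial's coefficients is what would yield the parameter-relevance and interaction information the abstract highlights, since the degree-2 terms encode pairwise effects between parameters.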

Citations (14)
