Hyperparameter optimization is a critical but computationally expensive task for developing effective machine learning models. This report presents an empirical study comparing a Randomized Search (RS) baseline against two representative metaheuristics: a Genetic Algorithm (GA) and Particle Swarm Optimization (PSO). This selection contrasts two primary search strategies: global exploration (GA) and local exploitation (PSO). We evaluate the ability of these algorithms to optimize the hyperparameters of three distinct machine learning models (Decision Tree, k-Nearest Neighbors, and a Convolutional Neural Network) on the grayscale CIFAR-10 dataset~\cite{krizhevsky2009learning}. To ensure a fair and balanced assessment, we define a composite fitness function. We evaluate the optimizers across three quality attributes: effectiveness (solution quality), convergence (improvement over a fixed evaluation budget), and stability (consistency across runs). The empirical results are validated with statistical tests to ground the conclusions.
The performance of machine learning models relies heavily on hyperparameters: configuration variables like learning rate and batch size that control the training process. Identifying the optimal configuration is a significant bottleneck in model development due to the high computational cost of evaluation. To address this complexity, we frame Hyperparameter Optimization (HPO) as a software verification problem. In this context, the model functions as the ``software under test,'' where a suboptimal configuration is treated as a ``defect'' that causes the system to violate its performance specifications (e.g., high loss or instability). HPO therefore acts as an automated test driver, searching the configuration space to verify model performance against defined quality criteria.
\subsection{Evaluation Criteria}
We evaluate the optimizers across three quality attributes (see the sketch after the list for how each is computed):
\begin{itemize}
\item\textbf{Effectiveness}: The quality of the final solution found (i.e., the best fitness score achieved).
\item\textbf{Convergence}: The rate at which the algorithm improves its best-found solution over the course of the fixed evaluation budget.
\item\textbf{Stability}: The consistency and reliability of the algorithm's performance across multiple independent runs (measured by variance).
\end{itemize}
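
To make these attributes concrete, the sketch below (Python; an illustrative outline, not our implementation) assumes each independent run is logged as a list of fitness scores in evaluation order, with higher fitness being better, and computes the three attributes from those histories.

\begin{verbatim}
import statistics

def best_so_far(history):
    """Running best fitness after each evaluation (convergence curve)."""
    curve, best = [], float("-inf")
    for fitness in history:
        best = max(best, fitness)
        curve.append(best)
    return curve

def summarize_runs(runs):
    """Effectiveness, convergence, and stability for one optimizer.

    `runs` holds one fitness history per independent run.
    """
    finals = [max(run) for run in runs]              # effectiveness
    curves = [best_so_far(run) for run in runs]      # per-run convergence
    mean_curve = [statistics.mean(step) for step in zip(*curves)]
    return {
        "effectiveness_mean": statistics.mean(finals),
        "convergence_curve": mean_curve,             # improvement over the budget
        "stability_variance": statistics.variance(finals),  # across-run variance
    }

# Toy example: three runs with a budget of five evaluations each.
toy_runs = [[0.42, 0.55, 0.51, 0.60, 0.61],
            [0.40, 0.44, 0.58, 0.58, 0.63],
            [0.48, 0.50, 0.52, 0.59, 0.60]]
print(summarize_runs(toy_runs))
\end{verbatim}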
\subsection{Research Questions}
This report seeks to answer the following research questions:
\begin{itemize}
\item\textbf{RQ1}: How do representative metaheuristic algorithms compare against a randomized search baseline in terms of effectiveness and convergence rates given a fixed evaluation budget?
\item\textbf{RQ2}: What is the difference in performance stability between the selected metaheuristic algorithms and the randomized search baseline?
\end{itemize}
\subsection{Representation and Objective Function}
HPO is a black-box optimization problem. It is performed \textbf{prior to} the actual training loop. The search space is represented by arrays of candidate values for each hyperparameter, listed in Table~\ref{tab:hparam_space}. The objective function $f(\theta)$, which represents the model's performance for a given hyperparameter configuration $\theta$, presents several challenges: it is computationally expensive to evaluate, it is non-differentiable, and the search space $\Theta$ is complex and of mixed type (continuous, discrete, and categorical). These properties make HPO well suited to search-based metaheuristic techniques.
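
As an illustration of this representation, the sketch below (Python with scikit-learn) encodes a hypothetical Decision Tree sub-space as arrays of candidate values and wraps model evaluation as the black-box objective $f(\theta)$. The value grids and the digits dataset are placeholders for brevity; they are not the exact space of Table~\ref{tab:hparam_space} nor the grayscale CIFAR-10 setup.

\begin{verbatim}
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical mixed-type search space: one array of candidate values
# per hyperparameter (discrete and categorical entries shown here).
SEARCH_SPACE = {
    "max_depth":         [4, 8, 16, 32],        # discrete
    "min_samples_split": [2, 5, 10, 20],        # discrete
    "criterion":         ["gini", "entropy"],   # categorical
}

def objective(theta, X, y):
    """Black-box f(theta): train and score one configuration.

    Expensive and non-differentiable; the optimizer only sees the
    returned fitness value.
    """
    model = DecisionTreeClassifier(
        max_depth=theta["max_depth"],
        min_samples_split=theta["min_samples_split"],
        criterion=theta["criterion"],
        random_state=0,
    )
    return cross_val_score(model, X, y, cv=3).mean()

X, y = load_digits(return_X_y=True)
print(objective({"max_depth": 8, "min_samples_split": 5,
                 "criterion": "gini"}, X, y))
\end{verbatim}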
\subsection{Algorithm Selection}
\subsubsection{Baseline: Randomized Search}
Randomized Search (RS) is the standard scientific baseline for HPO. \citet{bergstra2012random} demonstrated empirically that RS is more efficient than Grid Search for HPO, as it does not waste evaluations on unimportant parameters. Therefore, any intelligent algorithm must demonstrate superiority over RS to be considered effective.
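
A minimal RS baseline over the array-based search space might look as follows (Python; it reuses the hypothetical \texttt{SEARCH\_SPACE} and \texttt{objective} from the earlier sketch and is not our exact implementation). Every optimizer in the study receives the same fixed budget of fitness evaluations.

\begin{verbatim}
import random

def random_search(space, objective, budget, seed=0):
    """Randomized Search: sample `budget` configurations uniformly at random."""
    rng = random.Random(seed)
    best_theta, best_fitness = None, float("-inf")
    for _ in range(budget):
        theta = {name: rng.choice(values) for name, values in space.items()}
        fitness = objective(theta)
        if fitness > best_fitness:
            best_theta, best_fitness = theta, fitness
    return best_theta, best_fitness

# Hypothetical usage, with the objective from the previous sketch:
# best_theta, best_f = random_search(
#     SEARCH_SPACE, lambda t: objective(t, X, y), budget=30)
\end{verbatim}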
\subsubsection{Evolutionary Genetic Algorithm}
Inspired by Darwinian evolution, the Genetic Algorithm (GA) searches for optimal solutions using \textit{selection}, \textit{crossover}, and \textit{mutation}. We implement a \textbf{Memetic Algorithm} variant, which includes a local search component to escape fitness plateaus. As described in \cite{metaheuristics-cookbook}, radius-based elitism is applied before crossover to refine the fittest individuals.
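
The sketch below shows a plain GA over the array-based search space, illustrating only the selection, crossover, and mutation operators; it deliberately omits the memetic local-search step and the radius-based elitism of our actual variant, and the parameter values are assumptions chosen for illustration.

\begin{verbatim}
import random

def genetic_algorithm(space, objective, budget, pop_size=10,
                      mutation_rate=0.2, seed=0):
    """Plain GA: tournament selection, uniform crossover, random-reset mutation."""
    rng = random.Random(seed)
    names = list(space)

    def random_individual():
        return {n: rng.choice(space[n]) for n in names}

    def crossover(a, b):  # uniform crossover: pick each gene from either parent
        return {n: (a[n] if rng.random() < 0.5 else b[n]) for n in names}

    def mutate(ind):      # reset each gene to a random candidate with small prob.
        return {n: (rng.choice(space[n]) if rng.random() < mutation_rate else v)
                for n, v in ind.items()}

    def tournament():     # binary tournament selection
        i, j = rng.sample(range(pop_size), 2)
        return population[i] if fitnesses[i] >= fitnesses[j] else population[j]

    population = [random_individual() for _ in range(pop_size)]
    fitnesses = [objective(ind) for ind in population]
    evals = pop_size

    while evals < budget:
        n_children = min(pop_size, budget - evals)
        children = [mutate(crossover(tournament(), tournament()))
                    for _ in range(n_children)]
        child_fitnesses = [objective(c) for c in children]
        evals += n_children
        # Elitist survivor selection: keep the best pop_size individuals overall.
        ranked = sorted(zip(fitnesses + child_fitnesses, population + children),
                        key=lambda pair: pair[0], reverse=True)[:pop_size]
        fitnesses = [f for f, _ in ranked]
        population = [ind for _, ind in ranked]

    best = max(range(pop_size), key=lambda k: fitnesses[k])
    return population[best], fitnesses[best]
\end{verbatim}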
\subsubsection{Particle Swarm Optimization}
PSO models a swarm where individuals are influenced by both their personal best (\texttt{p\_best}) and the global best (\texttt{g\_best}) solutions. The velocity of each particle is updated using an inertia weight ($w$) and acceleration coefficients ($c_1, c_2$), balancing exploration and exploitation.
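
For reference, the canonical update rules as commonly stated in the PSO literature are
\begin{align}
v_i^{t+1} &= w\, v_i^{t} + c_1 r_1 \bigl(p^{\text{best}}_i - x_i^{t}\bigr) + c_2 r_2 \bigl(g^{\text{best}} - x_i^{t}\bigr),\\
x_i^{t+1} &= x_i^{t} + v_i^{t+1},
\end{align}
where $x_i^{t}$ and $v_i^{t}$ are the position and velocity of particle $i$ at iteration $t$, and $r_1, r_2 \sim U(0,1)$ are drawn independently per update. Note that a discrete hyperparameter space additionally requires mapping positions back onto the candidate-value arrays, which is omitted here.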