% report/sections/3_experiment.tex

\subsubsection{Models}
\paragraph{Decision Tree (DT)} The decision tree is the simplest model we consider: a tree-based learner that makes predictions through a sequence of binary predicates. Its architecture is inexpensive to train and highly explainable to non-technical audiences, and it is widely used in real-world production-grade systems such as autonomous vehicles \cite{autonomous-vehicle-appl}. Given its simplicity and popularity, we start our analysis by exploring which parameters minimally have to be tuned for the simplest model, and how it performs during tuning with metaheuristics. For simplicity, a prebuilt implementation from \cite{dt-scikit} is used.
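As a minimal sketch (not the report's actual search space), the prebuilt scikit-learn classifier can be instantiated with two hyperparameters that are typical tuning targets; the four toy samples are assumptions for demonstration only:

```python
# Illustrative sketch: the prebuilt scikit-learn decision tree with two
# commonly tuned hyperparameters. The toy dataset is an assumption, not the
# experimental data.
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]  # toy feature vectors
y = [0, 0, 1, 1]                      # class follows the first feature

# max_depth and min_samples_split bound the binary predicates the tree may learn
clf = DecisionTreeClassifier(max_depth=3, min_samples_split=2, random_state=0)
clf.fit(X, y)
print(clf.predict([[1, 0]]))  # -> [1]
```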
\paragraph{K-Nearest Neighbors (KNN)} We utilize \textit{KNeighborsClassifier} from \cite{knn-scikit}, which predicts based on the classes of the nearest neighbors among existing data instances. From the perspective of explainability, prior work \cite{mygithub-drugconsumpML} suggests that, depending on the dataset, a KNN classifier can sometimes be handled by a linear explainer; in our experiment, however, KNN predicts from high-dimensional image arrays, so its predictions need to be generalized by a kernel-based explainer. The model's architecture itself is not complex, but the dataset involved makes training somewhat heavier. We use it as an alternative model type in our experiment.
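A minimal sketch of the classifier on flattened image-like arrays (the toy $4\times4$ "images" below are assumptions standing in for the real dataset):

```python
# Illustrative sketch: KNeighborsClassifier on flattened image-like arrays.
# The synthetic "dark" and "bright" images are assumptions for demonstration.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
dark = rng.uniform(0.0, 0.3, size=(20, 16))    # 20 flattened 4x4 "dark" images
bright = rng.uniform(0.7, 1.0, size=(20, 16))  # 20 flattened 4x4 "bright" images
X = np.vstack([dark, bright])
y = np.array([0] * 20 + [1] * 20)

# n_neighbors is the main hyperparameter exposed to the metaheuristic search
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict(bright[:1]))  # nearest neighbours are bright -> class 1
```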
\paragraph{Convolutional Neural Network (CNN)} While the models above have simpler architectures, the CNN is a common deep learning architecture in practical work that learns image recognition tasks more efficiently. Ours is a 3-layer convolutional network with batch normalization, built around operations known as ``convolutions'': each convolution slides a small weight matrix, known as a ``kernel'' or ``filter'', over subsets of pixels to learn local patterns iteratively. Custom neural networks in the real world generally involve far more hyper-parameters during training, and tuning them is therefore computationally much heavier than for the prebuilt models above. Moreover, bad hyper-parameters can increase the resulting error rate of the model \cite{metaheuristics-cookbook}. A suitable metaheuristic search therefore plays an important role in determining the best set of hyper-parameters with fewer resources than an exhaustive search. Figure~\ref{fig:cnn_arch} summarizes the CNN backbone used in our experiments.
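The kernel-sliding operation described above can be sketched in pure NumPy (this is a conceptual illustration of a single ``valid'' convolution, not the actual CNN implementation used in the experiments):

```python
# Conceptual sketch of the "convolution" operation: a small kernel (filter)
# slides over pixel subsets of an image. Not the experimental CNN itself.
import numpy as np

def conv2d_valid(image, kernel):
    """Valid 2-D cross-correlation: one output per full kernel placement."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # elementwise product of the kernel with one pixel subset, summed
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0]])  # horizontal difference filter
print(conv2d_valid(image, edge_kernel))  # constant -1 on this ramp image
```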
\begin{figure}[ht]
\centering
\subsubsection{Justification}
We identify an order of importance for the metrics according to their relative weights in the fitness function.
\paragraph{Macro F1}
Macro F1 is selected as the dominant term of the composite fitness because it enforces class-wise fairness during hyperparameter optimization. Unlike accuracy or micro-averaged metrics that can be inflated by majority-class performance, Macro F1 computes the unweighted mean of each class's F1 score, ensuring that every class contributes equally to the objective. This prevents the search process from converging to solutions that perform well only on frequent or easy classes while neglecting minority or harder classes. Since F1 already balances precision and recall through a harmonic mean, its macro-averaged form provides the strongest and most stable signal for steering the optimizer toward models that maintain consistent predictive quality across the entire label distribution.
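Formally, with per-class true positives ($TP_i$), false positives ($FP_i$), and false negatives ($FN_i$) for each of the $N$ classes:
\[
P_i = \frac{TP_i}{TP_i + FP_i}, \qquad
R_i = \frac{TP_i}{TP_i + FN_i}, \qquad
F1_i = \frac{2 P_i R_i}{P_i + R_i}, \qquad
\text{Macro F1} = \frac{\sum_{i=1}^{N} F1_i}{N}.
\]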
\paragraph{Precision-Recall} Precision and recall rank directly after the macro F1 score in importance. While precision tells us how many of the samples labeled positive are correct, it is more important here to know how many truly positive samples are recovered, so we assign a slightly higher weight to recall than to precision.
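In the spirit of the weighting described above, a composite fitness could look like the following sketch; the specific weight values (0.6 / 0.25 / 0.15) are assumptions for illustration, not the report's actual weights:

```python
# Hypothetical composite fitness: Macro F1 dominates, and recall is weighted
# slightly above precision. The weight values are assumptions, not the
# report's actual coefficients.
def composite_fitness(macro_f1, recall, precision,
                      w_f1=0.6, w_rec=0.25, w_prec=0.15):
    return w_f1 * macro_f1 + w_rec * recall + w_prec * precision

print(composite_fitness(0.8, 0.7, 0.6))  # 0.6*0.8 + 0.25*0.7 + 0.15*0.6
```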
\subsubsection{Search Budget} Each HPO run is allocated a fixed budget of 50 fitness evaluations.
\subsubsection{Stochasticity} To account for stochasticity, we perform $N = 10$ independent runs for each optimizer-model combination.
\subsubsection{Evaluation Metrics}
We will collect:
\begin{itemize}
\item \textbf{Effectiveness:} The distribution (mean, median, best, worst) of the final fitness score achieved across 10 runs.
\item \textbf{Convergence:} The convergence trace (improvement over evaluations) for each run.
\item \textbf{Stability:} The variance of the final fitness scores across the 10 runs.
\end{itemize}
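The three quantities above can be computed directly from the per-run final scores and per-run traces; a minimal sketch on toy numbers (the scores below are illustrative, not experimental results):

```python
# Minimal sketch of the three collected metrics on toy data. The ten final
# scores and the single evaluation trace are assumptions, not actual results.
from statistics import mean, median, pvariance

final_scores = [0.71, 0.74, 0.70, 0.73, 0.72, 0.75, 0.69, 0.74, 0.73, 0.72]

# Effectiveness: distribution of the final fitness across the 10 runs
effectiveness = {
    "mean": mean(final_scores), "median": median(final_scores),
    "best": max(final_scores), "worst": min(final_scores),
}

# Convergence: best-so-far trace over successive evaluations of one run
evals = [0.42, 0.55, 0.51, 0.60, 0.58, 0.66]
trace = []
for score in evals:
    trace.append(score if not trace else max(trace[-1], score))

# Stability: variance of the final scores across runs
stability = pvariance(final_scores)
print(effectiveness, trace, stability)
```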
% report/sections/4_results.tex
\subsection{RQ2: Stability}
\subsection{Statistical Significance}
Wilcoxon signed-rank tests showed no significant performance differences between any optimizer pairs across all models ($p > 0.05$; Table \ref{tab:wilcoxon}). The nearest to significance was PSO versus RS on KNN ($p = 0.094$). The memetic GA variant also showed no statistically significant improvement over the standard GA.
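The paired test can be sketched as follows; the two arrays of per-run final scores are toy numbers, not the experimental data:

```python
# Sketch of the paired optimizer comparison via the Wilcoxon signed-rank test.
# The per-run scores below are assumptions for demonstration only.
from scipy.stats import wilcoxon

pso_scores = [0.72, 0.74, 0.71, 0.75, 0.73, 0.70, 0.74, 0.72, 0.73, 0.71]
rs_scores  = [0.70, 0.73, 0.72, 0.74, 0.71, 0.69, 0.73, 0.71, 0.72, 0.70]

# Paired because run i of each optimizer shares the same experimental setting
stat, p = wilcoxon(pso_scores, rs_scores)
print(p)  # compare against the 0.05 significance threshold
```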
\paragraph{Insufficient Evolutionary Iterations.} The fixed evaluation budget ($50$ evaluations) relative to the population size ($30$ individuals) restricted the Genetic Algorithm to fewer than two full generations. Consequently, 60\% of the budget was consumed by initial random sampling, leaving insufficient evaluations for crossover, mutation, and selection to drive convergence.
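The arithmetic behind this limitation is direct:

```python
# Budget arithmetic from the paragraph above: 50 evaluations, population 30.
budget, population = 50, 30

full_generations = budget // population  # 1 -> fewer than two full generations
initial_fraction = population / budget   # 0.6 -> 60% spent on random init
remaining_evals = budget - population    # 20 evaluations left for evolution

print(full_generations, initial_fraction, remaining_evals)  # 1 0.6 20
```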
\paragraph{Statistical Power.} With $10$ independent runs per configuration, the Wilcoxon signed-rank test had limited power to detect small but consistent performance differences between algorithms.
\paragraph{Scope Validity.} Experiments were conducted only on grayscale CIFAR-10 with defined hyperparameter spaces. Results may not generalize to higher-dimensional spaces (e.g., RGB images or deeper architectures) where exploration-exploitation trade-offs could differ.