Although the contemporary Artificial Intelligence ecosystem is dominated by high-level tools such as Python, PyTorch and TensorFlow, this Neural Network was implemented in C with the aim of maximising performance, portability and a comprehensive understanding of the system.
The model currently uses a classic three-layer architecture, designed to strike a balance between computational complexity and learning capacity:
- Input Layer: 4 Neurons
- Hidden Layer: 5 Neurons
- Output Layer: 1 Neuron
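For reference, the 4-5-1 topology above can be represented in C roughly as follows. This is a sketch: the type and field names are illustrative, not taken from the NeuroFlap source.

```c
#include <stddef.h>

/* Illustrative representation of the 4-5-1 topology described above.
   Type and field names are assumptions, not NeuroFlap's actual code. */
#define N_INPUT  4
#define N_HIDDEN 5
#define N_OUTPUT 1

typedef struct {
    double w_ih[N_HIDDEN][N_INPUT];  /* input -> hidden weights  (4 x 5 = 20) */
    double b_h[N_HIDDEN];            /* hidden biases            (5)          */
    double w_ho[N_OUTPUT][N_HIDDEN]; /* hidden -> output weights (5)          */
    double b_o[N_OUTPUT];            /* output bias              (1)          */
} Network;
```

The totals in the comments add up to 31 trainable parameters, which matches the parameter count reported for the winning model below.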
The scarcity of documentation and the lack of practical implementations of Genetic Algorithms applied to Neural Networks in the C programming language were among the key motivations behind this project. Here, the GA is used as a weight-optimisation method, allowing the network's weights to evolve across generations.
The implementation is carried out in accordance with the fundamental pillars of Neuroevolution:
- Populations and Generations: Simultaneous management of multiple individuals (each individual represents a Neural Network)
- Fitness Function: An assessment of each individual’s performance to determine their survival
- Selection: Identifying the best individuals to pass on genetic material to the next generation
- Crossover: The combination of the genetic material of two parents to produce an individual with superior potential
- Mutation: Introducing random variations in weights to avoid local minima and promote genetic diversity
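The crossover and mutation operators above can be sketched over a flat weight vector as follows. The function names, the uniform-crossover strategy and the genome layout are assumptions for illustration, not the project's actual API.

```c
#include <stdlib.h>

/* Genome = all 31 network weights flattened into one array.
   Names and strategy are illustrative, not NeuroFlap's code. */
#define GENOME_LEN 31

/* Uniform crossover: each gene is copied from either parent
   with equal probability. */
void crossover(const double *a, const double *b, double *child) {
    for (int i = 0; i < GENOME_LEN; i++)
        child[i] = (rand() % 2) ? a[i] : b[i];
}

/* Mutation: perturb each gene with probability `rate` by a random
   offset in [-strength, strength] to preserve genetic diversity. */
void mutate(double *genome, double rate, double strength) {
    for (int i = 0; i < GENOME_LEN; i++)
        if ((double)rand() / RAND_MAX < rate)
            genome[i] += strength * (2.0 * rand() / RAND_MAX - 1.0);
}
```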
Defining the inputs — the variables that provide information to the model — is one of the most critical aspects of developing a Neural Network. In the case of NeuroFlap, the agent must be able to navigate a stochastic and dynamic environment.
To enable the agent to make decisions in real time, the network processes four key variables:
- Flappy's Height: The agent's current vertical position
- Vertical Speed: The agent's current vertical velocity, closely linked to the timing of each decision
- Position of the Gap Center: The vertical ‘target’ between the pipes through which the agent must pass
- Horizontal Distance: The distance to the next obstacle, helping the agent time the jump
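A minimal sketch of how these four variables might be packed and max-scaled into the input vector (Max Scaling Normalization is the method used by the winning model reported below). The screen and physics bounds here are assumed values, not NeuroFlap's actual constants.

```c
/* Max-scaling the four game variables into the network's input vector.
   SCREEN_H, MAX_SPEED and MAX_DIST are illustrative bounds. */
#define SCREEN_H  512.0
#define MAX_SPEED  10.0
#define MAX_DIST  288.0

void build_inputs(double bird_y, double v_speed,
                  double gap_center_y, double pipe_dist,
                  double out[4]) {
    out[0] = bird_y / SCREEN_H;        /* Flappy's height        */
    out[1] = v_speed / MAX_SPEED;      /* vertical speed         */
    out[2] = gap_center_y / SCREEN_H;  /* gap centre position    */
    out[3] = pipe_dist / MAX_DIST;     /* horizontal distance    */
}
```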
Activation Functions introduce the non-linearity required for the model to learn complex patterns and make rapid decisions.
As part of the NeuroFlap project, the four architectures most frequently cited in the literature were implemented, enabling tests across different convergence behaviours:
- Sigmoid Function
- ReLU
- Leaky ReLU
- Tanh
Performance Analysis of Activation Functions and Hidden Layer Topology in Neuroevolutionary Networks
The project is not limited to mere implementation: it also serves as an in-depth study of hyperparameter efficiency in Neuroevolutionary environments.
Two detailed experimental studies were conducted with a view to optimising the NeuroFlap architecture.
The activation-function combinations tested (hidden layer → output layer) were:
- ReLU → Sigmoid
- ReLU → Tanh
- Leaky ReLU → Sigmoid
- Leaky ReLU → Tanh
- Sigmoid → Tanh
- Tanh → Sigmoid
Hidden Layer Scaling
The finalist architectural designs were:
- ReLU → Sigmoid
- Leaky ReLU → Tanh
- Tanh → Sigmoid
> [!NOTE]
> A full technical report (PDF) detailing the analysis and discussion of the results is available in the repository (currently in Portuguese only).
The metrics assessed were Average Fitness, Execution Time and Number of Winning Individuals.
To ensure the consistency of the results, each combination was run five times independently. In each iteration, the model was trained for up to 50 generations, with a population of 250 individuals per generation.
The diagrams can be found in the /Architectures and /Architectures x10 folders.
Following the completion of the experimental tests and a thorough analysis of the data, one model clearly stood out for its consistency, speed of convergence and effectiveness in the Flappy Bird environment:
| Architecture | Weight Optimisation | Method for Normalising Inputs | Hidden Layer Activation Function | Output Layer Activation Function | Number of Parameters |
|---|---|---|---|---|---|
| Feedforward Neural Network / MultiLayer Perceptron | Neuroevolution – Genetic Algorithm | Max Scaling Normalization | Tanh | Sigmoid | 31 Parameters |
The portability of the C code has enabled compilation to WebAssembly, allowing the Neural Network to run directly in the browser via JavaScript with near-native performance.
The project is also available on the Hugging Face Spaces platform, where the winning model can be tested in an interactive environment accessible to all users.
A comprehensive guide to help both beginners and more experienced enthusiasts develop Neural Networks! :)

