
Eliezer-Carvalho/Neuro-Flap


Neuro Flap

Neuro Flap is a project that applies a Feedforward Neural Network optimized with Genetic Algorithms to the game Flappy Bird!

Feedforward Neural Network in C

Although the contemporary Artificial Intelligence ecosystem is dominated by high-level languages and libraries such as Python, PyTorch and TensorFlow, the Neural Network was implemented in C, with the aim of maximising performance, portability and a comprehensive understanding of the system.
The model in question is currently configured using a classic three-layer architecture, designed to strike a balance between computational complexity and learning capacity:

  • Input Layer: 4 Neurons
  • Hidden Layer: 5 Neurons
  • Output Layer: 1 Neuron
The core of the Neural Network logic is available for reference in the directory /NeuroFlap.

Neuroevolution - Genetic Algorithms in C

The scarcity of documentation and the lack of practical implementations of Genetic Algorithms applied to Neural Networks in the C programming language were among the key motivations behind this project. Here, the GA is used as a weight optimiser, allowing the network to evolve.
The implementation is carried out in accordance with the fundamental pillars of Neuroevolution:

  • Populations and Generations: Simultaneous management of multiple individuals (each individual represents a Neural Network)
  • Fitness Function: An assessment of each individual’s performance to determine their survival
  • Selection: Identifying the best individuals to pass on genetic material to the next generation
  • Crossover: The combination of the genetic material of two parents to produce an individual with superior potential
  • Mutation: Introducing random variations in weights to avoid local minimum values and promote genetic diversity
The core of the Genetic Algorithm logic is also available for reference in the directory /NeuroFlap.
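The crossover and mutation operators above can be sketched in C as follows. Uniform crossover and uniform mutation noise are illustrative choices, not necessarily the exact operators used in /NeuroFlap; an individual's genome is its flat array of 31 network weights.

```c
#include <stdlib.h>

#define N_WEIGHTS 31 /* 4*5 + 5 + 5*1 + 1 parameters per individual */

/* Uniform crossover: each gene (weight) is inherited from either
 * parent with equal probability. */
void crossover(const double a[N_WEIGHTS], const double b[N_WEIGHTS],
               double child[N_WEIGHTS])
{
    for (int i = 0; i < N_WEIGHTS; i++)
        child[i] = (rand() % 2) ? a[i] : b[i];
}

/* Mutation: with probability `rate`, perturb a weight by a small
 * uniform value in [-strength, strength] to escape local optima
 * and preserve genetic diversity. */
void mutate(double w[N_WEIGHTS], double rate, double strength)
{
    for (int i = 0; i < N_WEIGHTS; i++) {
        if ((double)rand() / RAND_MAX < rate) {
            double noise = ((double)rand() / RAND_MAX) * 2.0 - 1.0;
            w[i] += noise * strength;
        }
    }
}
```

Selection simply sorts the population by fitness (how long each bird survived) and breeds the top individuals with these two operators.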

Representation of Inputs

Defining the inputs — the variables that provide information to the model — is one of the most critical aspects of developing a Neural Network. In the case of NeuroFlap, the agent must be able to navigate a stochastic and dynamic environment.
To enable the agent to make decisions in real time, the network processes four key variables:

  • Flappy's Height: The agent's current vertical position
  • Vertical Speed: The agent's current vertical velocity, which is closely linked to decision-making
  • Position of the Gap Center: The vertical ‘target’ between the pipes through which the agent must pass
  • Horizontal Distance: The distance to the next obstacle, to help the agent to time the jump
Max-Scale Normalization is applied in order to standardise the inputs.
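A minimal sketch of how the four inputs might be normalised. The maxima below (window height, speed cap, pipe spawn distance) are illustrative assumptions, not values taken from the actual game.

```c
/* Max-scale normalisation: divide each raw input by its maximum
 * possible value so every feature lies in a comparable [0, 1] range. */
static double max_scale(double value, double max_value)
{
    return value / max_value;
}

void build_inputs(double height, double v_speed,
                  double gap_center_y, double pipe_dist,
                  double out[4])
{
    out[0] = max_scale(height,       600.0); /* Flappy's height     */
    out[1] = max_scale(v_speed,       10.0); /* vertical speed      */
    out[2] = max_scale(gap_center_y, 600.0); /* gap centre position */
    out[3] = max_scale(pipe_dist,    800.0); /* horizontal distance */
}
```

Without normalisation, pixel-scale inputs (hundreds) would dominate the speed input (single digits) when multiplied by weights of similar magnitude.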

Activation Functions

Activation Functions introduce the non-linearity required for the model to learn complex patterns and make rapid decisions.
As part of the NeuroFlap project, the four activation functions most frequently cited in the literature were implemented, enabling tests with different convergence behaviours:

  • Sigmoid Function
  • ReLU
  • Leaky ReLU
  • Tanh

Performance Analysis of Activation Functions and Hidden Layer Topology in Neuroevolutionary Networks

The project in question is not limited to mere implementation, but rather represents an in-depth study of the efficiency of hyperparameters in Neuro-Evolutionary environments.
Two detailed experimental studies were conducted with a view to optimising the NeuroFlap architecture.

Activation Function Synergy

The first objective was to identify which combinations of activation functions (Hidden Layer → Output Layer) produce the best results in a model with 31 parameters.
The combinations tested were:

  • ReLU → Sigmoid
  • ReLU → Tanh
  • Leaky ReLU → Sigmoid
  • Leaky ReLU → Tanh
  • Sigmoid → Tanh
  • Tanh → Sigmoid
The top three combinations progressed to the next test.

Hidden Layer Scaling

In order to answer the question "Are Networks with more Neurons always better?", the hidden layer of the winning combinations was doubled, increasing the number of parameters from 31 to 61.
The finalist architectural designs were:

  • ReLU → Sigmoid
  • Leaky ReLU → Tanh
  • Tanh → Sigmoid
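The two parameter counts follow directly from the fully connected 4 → H → 1 topology; a tiny helper makes the arithmetic explicit (the function name is hypothetical):

```c
/* Parameter count for a fully connected 4 -> H -> 1 network:
 * (4*H input->hidden weights + H hidden biases)
 * + (H hidden->output weights + 1 output bias) = 6*H + 1. */
int param_count(int hidden)
{
    return 4 * hidden + hidden  /* input->hidden weights + biases */
         + hidden * 1 + 1;      /* hidden->output weights + bias  */
}
/* param_count(5) == 31, param_count(10) == 61 */
```

Doubling the hidden layer from 5 to 10 neurons thus takes the genome from 31 to 61 genes, matching the figures above.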


[!NOTE]
A full technical report (PDF) detailing the full analysis and discussion of the results is available in the repository (currently available in Portuguese only).
The metrics assessed were Average Fitness, Execution Time and Number of Winning Individuals.
To ensure the consistency of the results, each combination was run five times independently. In each iteration, the model was trained for up to 50 generations, with a population of 250 individuals per generation.
The diagrams can be found in the /Architectures and /Architectures x10 folders.

WASM and Hugging Face

Following the completion of the experimental tests and a thorough analysis of the data, one model clearly stood out for its consistency, speed of convergence and effectiveness in the Flappy Bird environment:

  • Architecture: Feedforward Neural Network (MultiLayer Perceptron)
  • Weight Optimisation: Neuroevolution – Genetic Algorithm
  • Input Normalisation Method: Max-Scale Normalization
  • Hidden Layer Activation Function: Tanh
  • Output Layer Activation Function: Sigmoid
  • Number of Parameters: 31

The portability of the C code has enabled compilation to WebAssembly, allowing the Neural Network to run directly in the browser via JavaScript with native performance.
The project is also available on the Hugging Face Spaces platform, where the winning model can be tested in an interactive environment accessible to all users.

Neural Network Roadmap

A comprehensive guide to help both beginners and more experienced enthusiasts develop Neural Networks! :)

Languages, Libraries and Environments

C
Raylib
Linux Mint
Web Assembly
Emscripten

About

Neuro Flap is a Feedforward Neural Network applied to the game Flappy Bird, optimised using a Genetic Algorithm. Different activation functions and the impact of the number of neurons in the hidden layer on performance were studied. Developed in C using Raylib in a Linux environment.
