This project focuses on training an autonomous driving agent using Reinforcement Learning in the CarRacing-v3 environment provided by Gymnasium. The agent learns to drive a car using visual inputs and the Proximal Policy Optimization (PPO) algorithm.
- Goal: Train a reinforcement learning agent to drive autonomously on randomly generated tracks.
- Input: Visual observations (96x96 RGB frames).
- Algorithm: PPO (Proximal Policy Optimization) from stable-baselines3.
- Output: Trained model that can navigate tracks efficiently.
| Component | Specification |
|---|---|
| OS | Linux/Windows/Mac |
| Python | 3.10.0 |
| Processor | Intel Core i7 (13th Gen) or equivalent |
| RAM | Minimum 16 GB |
| GPU (Optional) | For faster training |
Install the following libraries before starting:
- gymnasium
- stable-baselines3
- box2d-py
- numpy
- matplotlib
- swig (system package, for Box2D)
Install swig with your system package manager (e.g., on Fedora):

```bash
sudo dnf install swig
```

Set up a virtual environment using either Conda or venv to manage dependencies.
Using Conda:

```bash
conda create -n selfdrivingcar python=3.10.0
conda activate selfdrivingcar
pip install -r requirements.txt
```

Using venv:

```bash
python3 -m venv selfdrivingcar
source selfdrivingcar/bin/activate
pip install -r requirements.txt
```

- Data: Agent uses pixel data from CarRacing-v3.
- Action Space: Continuous or discrete actions (steer, gas, brake).
- Training Script: `train.ipynb`
- Evaluation Script: `evaluate.py`
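In the continuous setting, each action is a 3-vector `[steer, gas, brake]` with steering in [-1, 1] and gas/brake in [0, 1]. The small helper below is hypothetical (not part of the project's code) and just illustrates those ranges by clipping an out-of-range policy output:

```python
import numpy as np

# Valid ranges for CarRacing's continuous action space:
# steering in [-1, 1], gas in [0, 1], brake in [0, 1].
ACTION_LOW = np.array([-1.0, 0.0, 0.0])
ACTION_HIGH = np.array([1.0, 1.0, 1.0])

def clip_action(action):
    """Clip a raw [steer, gas, brake] vector into the valid range."""
    return np.clip(np.asarray(action, dtype=np.float64), ACTION_LOW, ACTION_HIGH)

# Example: a policy output that oversteers and over-throttles.
raw = [-1.5, 1.2, 0.0]
safe = clip_action(raw)
print(safe)  # [-1.  1.  0.]
```

In the discrete variant (`continuous=False`), the same controls are exposed as a small fixed set of actions instead of a real-valued vector.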
To train the model, open and run the notebook (a `.ipynb` file cannot be executed with `python` directly):

```bash
jupyter notebook train.ipynb
```

To evaluate a pre-trained model:

```bash
python evaluate.py
```

✅ Make sure you configure `train.ipynb` and `evaluate.py` according to your model paths and environment settings.