This tutorial demonstrates how to run PyCuVSLAM Stereo-Inertial Visual Odometry on the EuRoC MAV dataset with unrectified images.
PyCuVSLAM supports multiple visual tracking modes. You can specify the desired tracking mode through the cuvslam.Tracker.OdometryConfig object when initializing visual tracking. Tracking modes can be set either by using enumeration values or directly using their respective names:
- **Stereo**: Visual tracking using stereo cameras. This mode can be extended to multiple stereo cameras (PyCuVSLAM default mode, set as `OdometryMode(0)` or `OdometryMode.Multicamera`)
- **Stereo-Inertial**: Visual-inertial tracking using stereo cameras combined with IMU data (set as `OdometryMode(1)` or `OdometryMode.Inertial`)
- **Mono-Depth (RGB-D)**: Visual tracking using a monocular camera and depth images (set as `OdometryMode(2)` or `OdometryMode.RGBD`)
- **Monocular**: Visual tracking using a monocular camera. This mode provides accurate camera rotation estimation but does not estimate scale (set as `OdometryMode(3)` or `OdometryMode.Mono`)
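The two spellings (index vs. name) refer to the same enumeration values. The stand-in below is a plain Python enum reproducing that mapping purely for illustration; the real class is `cuvslam.Tracker.OdometryMode`:

```python
from enum import Enum

# Illustrative stand-in for cuvslam.Tracker.OdometryMode; the real
# enumeration is defined by the cuvslam package itself.
class OdometryMode(Enum):
    Multicamera = 0  # stereo tracking, extendable to multiple stereo rigs (default)
    Inertial = 1     # stereo + IMU (Stereo-Inertial)
    RGBD = 2         # monocular camera + depth images (Mono-Depth)
    Mono = 3         # monocular only: accurate rotation, unobserved scale

# Constructing by index and referencing by name select the same mode:
assert OdometryMode(1) is OdometryMode.Inertial
assert OdometryMode.Mono.value == 3
```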
PyCuVSLAM supports all tracking modes on the EuRoC MAV dataset except Mono-Depth. To experiment with Mono-Depth tracking, please refer to the TUM-RGBD dataset example. You can try different tracking modes by modifying the following line in `track_euroc.py`:

```python
euroc_tracking_mode = cuvslam.Tracker.OdometryMode(1)
```

PyCuVSLAM supports several distortion models. Each model is specified by name along with a corresponding list of coefficients during camera initialization (`cuvslam.Camera(cuvslam.Distortion(...))`). Supported models include:
- **Pinhole**: Assumes no distortion and requires 0 coefficients. This is the default model (`Distortion.Model.Pinhole` or `Distortion.Model(0)`)
- **Fisheye (Equidistant)**: Uses 4 distortion coefficients (`Distortion.Model.Fisheye` or `Distortion.Model(1)`)
- **Brown**: Distortion model consisting of 3 radial and 2 tangential coefficients: $k_1, k_2, k_3, p_1, p_2$ (`Distortion.Model.Brown` or `Distortion.Model(2)`)
- **Polynomial**: Distortion model with 8 coefficients: $k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6$ (`Distortion.Model.Polynomial` or `Distortion.Model(3)`)
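Coefficient ordering matters: note that the Brown list above places all three radial terms before the tangential ones. As a hedged illustration of what those coefficients do (the standard Brown-Conrady formulation, written in pure Python and independent of cuvslam):

```python
def apply_brown(x, y, coeffs):
    """Apply Brown distortion to a normalized image point (x, y).

    coeffs follows the ordering listed above for the Brown model:
    [k1, k2, k3, p1, p2] -- three radial terms, then two tangential.
    Standard Brown-Conrady formulation, shown here for illustration only.
    """
    k1, k2, k3, p1, p2 = coeffs
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# With all coefficients zero the model degenerates to Pinhole:
assert apply_brown(0.1, -0.2, [0, 0, 0, 0, 0]) == (0.1, -0.2)
```

Passing the same values in a different order (e.g. the OpenCV-style `[k1, k2, p1, p2, k3]`) would silently mix radial and tangential terms, which is exactly the failure mode the note below warns about.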
The example provided in this repository uses the Brown model for the original dataset calibration and the Fisheye model for the updated calibrations shipped alongside it. To achieve results similar to those shown in the cuVSLAM technical report, use the recalibrated camera and IMU parameters.

Note: Ensure the correct number and order of distortion coefficients when initializing your cameras. If you experience poor tracking performance with unrectified cameras, consider testing with `OdometryMode.Mono`. This mode typically yields smoother trajectories and accurate rotational poses when the camera parameters are correct.
Refer to the Installation Guide for detailed environment setup instructions.
- Download the EuRoC MH_01_easy dataset:
  - Go to https://doi.org/10.3929/ethz-b-000690084
  - Download "Machine Hall Datasets (ZIP, 12096.15 MB)"
  - Extract `/machine_hall/MH_01_easy/MH_01_easy.zip` from the downloaded archive (`machine_hall.zip`)
  - Extract `mav0` to `dataset/` from `MH_01_easy.zip`
- Copy the calibration files:

  ```bash
  cd examples/euroc
  cp sensor_cam0.yaml dataset/mav0/cam0/sensor_recalibrated.yaml
  cp sensor_cam1.yaml dataset/mav0/cam1/sensor_recalibrated.yaml
  cp sensor_imu0.yaml dataset/mav0/imu0/sensor_recalibrated.yaml
  ```

- Ensure the dataset path is correctly set at the beginning of the visual tracking script.
In the `examples/euroc` folder, run:

```bash
python3 track_euroc.py
```

You should see the following visualization in Rerun. In Visual-Inertial mode, the red arrow pointing downward represents the gravity vector estimated by cuVSLAM during inertial tracking:
Note:
- If you experience poor Stereo-Inertial tracking, first validate that Mono tracking and Stereo Visual tracking perform correctly with the same intrinsic and extrinsic camera parameters
- If the gravity vector is consistently misaligned (not pointing downward), please double-check and update your IMU extrinsics matrix
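One quick way to catch a bad extrinsics matrix before re-running tracking is to verify that it is a valid rigid transform at all. A minimal sketch (pure NumPy, assuming a 4x4 row-major matrix like the `T_BS` field in the EuRoC `sensor.yaml` files):

```python
import numpy as np

def check_extrinsics(T_BS, tol=1e-6):
    """Sanity-check a 4x4 extrinsics matrix.

    Verifies that the upper-left 3x3 block is a proper rotation
    (orthonormal, determinant +1) and that the bottom row is
    [0, 0, 0, 1]. A matrix failing these checks -- e.g. one with a
    transposed or rescaled rotation block -- is a common cause of a
    misaligned gravity vector.
    """
    T = np.asarray(T_BS, dtype=float)
    R = T[:3, :3]
    ok_rotation = np.allclose(R @ R.T, np.eye(3), atol=tol)
    ok_determinant = abs(np.linalg.det(R) - 1.0) < tol
    ok_bottom_row = np.allclose(T[3], [0, 0, 0, 1], atol=tol)
    return ok_rotation and ok_determinant and ok_bottom_row

# The identity transform trivially passes:
assert check_extrinsics(np.eye(4))

# A rescaled rotation block (not orthonormal) fails:
bad = np.eye(4)
bad[0, 0] = 2.0
assert not check_extrinsics(bad)
```

This does not catch a rotation that is valid but simply wrong (e.g. axes swapped consistently); for that, compare the transformed gravity direction against the downward arrow in the Rerun visualization as described above.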
