
improve build support for native arm targets and multi-arch build #38

Open
raghavduddala wants to merge 1 commit into nvidia-isaac:main from raghavduddala:feature/arm-build-optimization

Conversation

@raghavduddala raghavduddala commented Apr 2, 2026

Motivation

  • Building cuVSLAM natively on Jetson required several undocumented steps and workarounds.
  • The hardcoded -arch=all bypassed CMake's architecture management, and there was no way to target a specific GPU from the build script.
  • A case-sensitivity bug in the speedup test filter caused false test failures.

Summary of Changes

  • Replace the hardcoded -arch=all in libs/cuda_modules/cuda_kernels/CMakeLists.txt with CMake's native
    CMAKE_CUDA_ARCHITECTURES, applied through the CUDA_ARCHITECTURES target property
  • Add a --cuda_arch flag to build_release.sh for targeting a specific GPU architecture (e.g. --cuda_arch=87 for Jetson Orin Nano)
  • Add a default CMAKE_CUDA_ARCHITECTURES=all fallback in the top-level CMakeLists.txt
  • Fix GTEST_FILTER in build_release.sh to exclude both the SpeedUp and Speedup test naming conventions; the case-sensitive filter was missing the image_pyramid_test speedup tests
  • Add a "Build natively on Jetson" section to the README documenting the required dev packages (libcublas-dev, libcusolver-dev), Git LFS setup, and --cuda_arch usage
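The CMake side of the change could look roughly like the sketch below; the cuda_kernels target name and exact placement are assumptions for illustration, not the repository's verbatim code:

```cmake
# Top-level CMakeLists.txt: fall back to building for all supported
# architectures when the caller does not pass -DCMAKE_CUDA_ARCHITECTURES=...
if(NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
  set(CMAKE_CUDA_ARCHITECTURES all)
endif()

# libs/cuda_modules/cuda_kernels/CMakeLists.txt: instead of appending a
# hardcoded -arch=all to the nvcc flags, let CMake drive code generation
# through the target property (inherits CMAKE_CUDA_ARCHITECTURES).
set_target_properties(cuda_kernels PROPERTIES
  CUDA_ARCHITECTURES "${CMAKE_CUDA_ARCHITECTURES}")
```

Routing the choice through CMAKE_CUDA_ARCHITECTURES lets a single cache variable control every CUDA target, rather than a flag buried in one subdirectory.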
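A minimal sketch of how build_release.sh might parse the new flag and build the case-insensitive exclusion filter; the function and variable names here are illustrative, not the script's actual internals:

```shell
#!/usr/bin/env bash
# Parse an optional --cuda_arch=<sm> flag, defaulting to "all".
parse_cuda_arch() {
  local arch="all"
  for arg in "$@"; do
    case "$arg" in
      --cuda_arch=*) arch="${arg#--cuda_arch=}" ;;
    esac
  done
  printf '%s\n' "$arch"
}

# Exclude both naming conventions used by the speedup tests
# ("SpeedUp" and "Speedup"), since gtest filters are case-sensitive.
GTEST_FILTER='-*SpeedUp*:*Speedup*'

CUDA_ARCH="$(parse_cuda_arch "$@")"
echo "cmake -DCMAKE_CUDA_ARCHITECTURES=${CUDA_ARCH} .."
```

Invoked as `./build_release.sh --cuda_arch=87`, this would configure CMake for SM 87 only; with no flag, the "all" fallback is preserved.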

Tested the following:

  • Built with --cuda_arch=87 on Jetson Orin Nano (CUDA 12.6) with all module tests passing
    ./build_releases.sh --cuda_arch=87 --modules_test --build_lib
  • Built with --cuda_arch=86 on x86_64 (Ubuntu 22.04, nvidia-driver: 580, CUDA 13, RTX 3070)
    ./build_releases.sh --cuda_arch=86 --modules_test --build_lib
  • Built without --cuda_arch on x86_64 (Ubuntu 22.04, nvidia-driver: 580, CUDA 13, RTX 3070), which builds for all architectures (SM_75 through SM_121); all module tests passed
    ./build_releases.sh --modules_test --build_lib

- replace hardcoded -arch=all with CMAKE_CUDA_ARCHITECTURES cmake variable
- add --cuda_arch flag to build_release.sh
- fix GTEST_FILTER to catch lowercase test name
- add native orin nano build instructions to README

Signed-off-by: raghavduddala <raghavduddala@gmail.com>

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Building cuVSLAM for multiple archs and Module Tests

1 participant