Simple CMake + C++/CUDA scaffold to iterate on a convex hull implementation with basic unit tests.
```
.
├── CMakeLists.txt
├── Makefile
├── cmake/
│   └── ProjectOptions.cmake
├── include/
│   └── convex_hull/
│       ├── convex_hull.h
│       └── point.h
├── src/
│   ├── CMakeLists.txt
│   ├── convex_hull_cpu.cpp
│   ├── convex_hull_gpu.cu
│   └── convex_hull_gpu_stub.cpp
├── tests/
│   ├── CMakeLists.txt
│   ├── cuda_tests.cu
│   └── unit_tests.cpp
└── apps/
    ├── CMakeLists.txt
    ├── hull_bench.cpp
    └── hull_demo.cpp
```
- Public API (`include/convex_hull/convex_hull.h`):
  - `convex_hull::hull_cpu(points)` is the CPU reference implementation (currently Andrew's monotone chain).
  - `convex_hull::hull_gpu(points)` is the GPU entry point (replace with your CUDA implementation).
  - `convex_hull::is_built_with_cuda()` tells you whether CUDA sources were compiled.
  - `convex_hull::cuda_device_count()` returns the visible device count (0 if unavailable).
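The CPU reference is named above as Andrew's monotone chain. As a rough standalone sketch of that algorithm (the `Pt` type and `monotone_chain` name are illustrative, not the library's actual API):

```cpp
#include <algorithm>
#include <vector>

struct Pt { double x, y; };

// Cross product of (b - a) x (c - a); > 0 means a left turn at b.
static double cross(const Pt& a, const Pt& b, const Pt& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Andrew's monotone chain: sorts by x (then y), builds the lower and
// upper hulls, and returns the hull counter-clockwise with collinear
// points dropped (the `<= 0` pop condition).
std::vector<Pt> monotone_chain(std::vector<Pt> pts) {
    std::sort(pts.begin(), pts.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    if (pts.size() < 3) return pts;
    std::vector<Pt> hull(2 * pts.size());
    size_t k = 0;
    for (size_t i = 0; i < pts.size(); ++i) {               // lower hull
        while (k >= 2 && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    for (size_t i = pts.size() - 1, t = k + 1; i-- > 0;) {  // upper hull
        while (k >= t && cross(hull[k - 2], hull[k - 1], pts[i]) <= 0) --k;
        hull[k++] = pts[i];
    }
    hull.resize(k - 1);  // last point repeats the first; drop it
    return hull;
}
```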
- GPU behavior:
  - When CUDA is unavailable, `src/convex_hull_gpu_stub.cpp` makes `hull_gpu()` throw.
  - When CUDA is available, `src/convex_hull_gpu.cu` currently contains a placeholder kernel and then falls back to the CPU hull; swap this out for your real algorithm and the tests will exercise it.
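The CPU-only stub path can be sketched like this (the names and error message are modeled on the description above, not copied from the library's exact code):

```cpp
#include <stdexcept>
#include <vector>

struct Pt { double x, y; };

// Sketch of the stub that replaces the CUDA translation unit when CUDA
// sources are not compiled: hull_gpu() exists but always throws.
std::vector<Pt> hull_gpu(const std::vector<Pt>&) {
    throw std::runtime_error(
        "convex_hull was built without CUDA; use hull_cpu() instead");
}
```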
- Build:

```sh
cmake -S . -B build
cmake --build build -j
```

Or via make:

```sh
make build
make test
make demo
make bench
```

- Tests:

```sh
ctest --test-dir build --output-on-failure
```

- Demo:

```sh
./build/apps/hull_demo --n 100000 --gpu --check
```

Or via make:

```sh
make demo GPU=1 N=200000 CHECK=1 SEED=42
```

- Benchmark: runs B sequential hull computations of N points (useful until the GPU path is batched).

```sh
make bench N=100000 B=50 MODE=both
make bench N=200000 B=20 MODE=both HYPERFINE=1
```

- Compile-time:
  - If `nvcc` is found, CMake enables CUDA and compiles `src/convex_hull_gpu.cu` into the `convex_hull` library (`CONVEX_HULL_HAS_CUDA` is defined).
  - If CUDA is not available, `src/convex_hull_gpu_stub.cpp` is used instead; calling `convex_hull::hull_gpu()` throws at runtime.
- Runtime:
  - `convex_hull::hull_gpu(points)` is only called when you explicitly request it (the demo uses `--gpu`; tests call it when built with CUDA).
  - If no CUDA device is visible at runtime, the demo falls back to CPU; `hull_gpu()` also returns the CPU hull in that case.
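A minimal sketch of that runtime fallback, with stand-in versions of the library functions (in the real project `cuda_device_count()` and `hull_cpu()` come from the `convex_hull` library, and the stand-ins below are placeholders for illustration only):

```cpp
#include <vector>

struct Pt { double x, y; };

// Stand-ins so the sketch is self-contained; not the library's code.
int cuda_device_count() { return 0; }                             // pretend no GPU is visible
std::vector<Pt> hull_cpu(const std::vector<Pt>& p) { return p; }  // placeholder, not a real hull

// hull_gpu() returns the CPU hull when no CUDA device is visible.
std::vector<Pt> hull_gpu(const std::vector<Pt>& pts) {
    if (cuda_device_count() == 0) {
        return hull_cpu(pts);  // no device at runtime: CPU fallback
    }
    // ... launch the GPU path here ...
    return hull_cpu(pts);
}
```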
The GPU path does run on the device, but only as a placeholder: `src/convex_hull_gpu.cu` allocates device buffers, copies the input points to the GPU, launches a simple copy kernel, and then returns the CPU hull result. The kernel does not affect the output yet; it exists to prove that the CUDA toolchain and launch path work before you swap in your real GPU convex hull implementation.
- CPU tests live in `tests/unit_tests.cpp` and validate basic shapes and degeneracies plus a deterministic random fixture.
- If CUDA is available, `tests/unit_tests.cpp` also calls `hull_gpu()` and checks that it matches `hull_cpu()` on a small fixture.
- `tests/cuda_tests.cu` is a lightweight "does CUDA run" smoke test: it checks that a device is visible and that CUDA synchronization succeeds after calling into the library.
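The deterministic random fixture idea can be sketched as follows (`make_fixture` is a hypothetical helper, not the tests' actual code): seeding the generator with a fixed value makes the point set reproducible across runs, so test failures are repeatable.

```cpp
#include <random>
#include <utility>
#include <vector>

// Hypothetical deterministic fixture: the same seed always yields the
// same n points, uniformly distributed in [-1, 1] x [-1, 1].
std::vector<std::pair<double, double>> make_fixture(unsigned seed, int n) {
    std::mt19937 gen(seed);
    std::uniform_real_distribution<double> dist(-1.0, 1.0);
    std::vector<std::pair<double, double>> pts;
    pts.reserve(n);
    for (int i = 0; i < n; ++i) {
        double x = dist(gen);
        double y = dist(gen);
        pts.emplace_back(x, y);
    }
    return pts;
}
```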
- CUDA is optional: if no `nvcc` is found, the project builds CPU-only.
- Set `-DCMAKE_CUDA_ARCHITECTURES=...` as needed (for example, `75`).