Data-driven NVIDIA GPU / CUDA / library compatibility checker built from repository-local data files derived from official vendor and project documentation.
This project answers three common questions:
- Which CUDA Toolkit families can this NVIDIA GPU use?
- Is this GPU compatible with a requested CUDA version?
- Is this GPU compatible with a requested library build such as `libtorch 2.8.0+cu128`?
The design intentionally separates:
- CUDA Toolkit support
- Framework or prebuilt binary support
That distinction matters because a GPU can remain compatible with a CUDA Toolkit family while a framework's prebuilt binaries stop shipping kernels for that architecture.
GPU compatibility advice is often scattered across release notes, forum threads, package tags, and architecture tables. This repository turns those official references into versioned data files that can be reviewed, extended, and used offline by CLIs, services, agents, and MCP servers.
- Official-document driven: the runtime does not scrape the web.
- Data-first: compatibility rules live in JSON files under `data/`.
- Explainable: every result includes `reason`, `recommendation`, and `references`.
- Extensible: library-specific logic lives in adapter modules.
- MCP-friendly: the service layer is independent from CLI and transport.
The core GPU decision path is:
```text
GPU name -> normalized GPU -> compute capability -> architecture -> CUDA support range
```
Library checks then add a second layer:
```text
toolkit support -> library adapter -> prebuilt binary policy -> final status
```
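As a rough illustration, the first layer of this decision path could be sketched as below. The lookup tables and function names are hypothetical stand-ins, not the actual `gpu_compat` API; the real data lives in `data/gpus.json` and `data/cuda_matrix.json`.

```python
# Hypothetical lookup tables; actual field names and coverage may differ.
GPU_TABLE = {"quadro p5000": ("6.1", "Pascal")}    # normalized name -> (cc, arch)
TOOLKIT_RANGES = {"Pascal": ((8, 0), (12, 9))}     # arch -> (min, max) toolkit version

def parse_version(v: str) -> tuple:
    """Parse 'major.minor' into a tuple so comparisons are numeric, not lexical."""
    return tuple(int(part) for part in v.split("."))

def check_cuda(gpu_name: str, cuda_version: str) -> str:
    # GPU name -> normalized GPU -> compute capability -> architecture
    cc, arch = GPU_TABLE[gpu_name.strip().lower()]
    # architecture -> CUDA support range
    lo, hi = TOOLKIT_RANGES[arch]
    return "compatible" if lo <= parse_version(cuda_version) <= hi else "incompatible"
```

Comparing parsed version tuples avoids the classic string-comparison trap where `"9.0" > "12.8"`.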
Supported statuses:
- `compatible`
- `incompatible`
- `compatible_with_caveats`
Toolkit support and library binary support are not the same thing.
- Toolkit support means the underlying CUDA Toolkit family still supports building or targeting the GPU architecture.
- Library binary support means an upstream project such as PyTorch or LibTorch still publishes prebuilt wheels or archives containing kernels for that architecture.
Example:
- `Quadro P5000` is a Pascal GPU (compute capability 6.1).
- The repository models Pascal as supported by CUDA Toolkit up to CUDA 12.x.
- But `libtorch 2.8.0+cu128` prebuilt binaries are modeled as excluding Pascal, so the final result is `incompatible` from the prebuilt-binary perspective even though toolkit support is still present.
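The Pascal example above can be encoded as a tiny second-layer check. The policy dict and names below are illustrative only, not the repository's actual schema:

```python
# Illustrative prebuilt-binary policy keyed by (library, version line).
BINARY_POLICY = {
    ("libtorch", "2.8.0+cu128"): {"excluded_architectures": ["Pascal"]},
}

def library_status(architecture: str, toolkit_supported: bool,
                   library: str, version: str) -> str:
    if not toolkit_supported:
        return "incompatible"  # the first (toolkit) layer already failed
    policy = BINARY_POLICY.get((library, version), {})
    if architecture in policy.get("excluded_architectures", []):
        # Toolkit support alone is not enough: the wheels ship no Pascal kernels.
        return "incompatible"
    return "compatible"
```

The key point the sketch captures: a passing toolkit layer can still be overridden by the binary policy layer.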
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e ".[dev,mcp]"
```

Or for core CLI usage only:

```bash
pip install -e .
```

This repository now includes an installable Codex skill at `skills/gpu-compat-checker/`.
If an agent supports Codex-style GitHub skill installation, point it at:
- Repository: `ce-dric/gpu-capability-finder`
- Skill path: `skills/gpu-compat-checker`
Example install command for Codex skill installation workflows:
```bash
python3 ~/.codex/skills/.system/skill-installer/scripts/install-skill-from-github.py \
  --repo ce-dric/gpu-capability-finder \
  --path skills/gpu-compat-checker
```

After installation, restart Codex so the new skill is discovered.
The installed skill includes a bundled helper script that:
- uses the local checkout if the current workspace already contains this repository
- otherwise installs the package from GitHub into a local cache automatically
- runs the same compatibility logic exposed by the CLI
Example skill-side invocation:
```bash
python3 ~/.codex/skills/gpu-compat-checker/scripts/run_gpu_compat.py gpu "Quadro P5000"
python3 ~/.codex/skills/gpu-compat-checker/scripts/run_gpu_compat.py cuda-check --gpu "RTX 4060 Ti" --cuda 11.7
python3 ~/.codex/skills/gpu-compat-checker/scripts/run_gpu_compat.py lib-check --gpu "Quadro P5000" --library libtorch --version "2.8.0+cu128" --json
```

Query supported CUDA range:

```bash
python -m gpu_compat.cli gpu "Quadro P5000"
```

Check GPU + CUDA:

```bash
python -m gpu_compat.cli cuda-check --gpu "Quadro P5000" --cuda 12.8
```

Check GPU + library:

```bash
python -m gpu_compat.cli lib-check --gpu "Quadro P5000" --library libtorch --version "2.8.0+cu128"
```

JSON output:

```bash
python -m gpu_compat.cli lib-check --gpu "Quadro P5000" --library libtorch --version "2.8.0+cu128" --json
```

Text output:

```text
Status: incompatible
Query Type: gpu_library_compatibility
GPU: NVIDIA Quadro P5000
Compute Capability: 6.1
Architecture: Pascal
Reason: CUDA Toolkit support is available for Pascal through CUDA 12.x, but the requested libtorch prebuilt binary line excludes Pascal from its packaged architecture policy.
Recommendation: Use an older prebuilt binary line, or build LibTorch/PyTorch from source for Pascal if you must stay on this GPU.
References:
- NVIDIA CUDA GPU Compute Capability
- NVIDIA CUDA Toolkit, Driver, and Architecture Matrix
- PyTorch previous versions
- PyTorch CUDA 12.8 binary support announcement
```
JSON output:
```json
{
  "normalized_gpu_name": "NVIDIA Quadro P5000",
  "compute_capability": "6.1",
  "architecture": "Pascal",
  "query_type": "gpu_library_compatibility",
  "input": {
    "gpu_name": "Quadro P5000",
    "library_name": "libtorch",
    "library_version": "2.8.0+cu128"
  },
  "status": "incompatible",
  "reason": "CUDA Toolkit support is available for Pascal through CUDA 12.x, but the requested libtorch prebuilt binary line excludes Pascal from its packaged architecture policy.",
  "recommendation": "Use an older prebuilt binary line, or build LibTorch/PyTorch from source for Pascal if you must stay on this GPU.",
  "references": [
    {
      "title": "NVIDIA CUDA GPU Compute Capability",
      "url": "https://developer.nvidia.com/cuda-gpus"
    }
  ]
}
```

The repository uses editable JSON files:

- `data/gpus.json`: GPU aliases, compute capability, architecture
- `data/cuda_matrix.json`: architecture-level CUDA Toolkit support ranges
- `data/libraries/pytorch.json`: PyTorch prebuilt binary policy
- `data/libraries/libtorch.json`: LibTorch prebuilt binary policy
Each entry includes source metadata so maintainers can update values without searching through code.
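As a hedged illustration of what such an entry might look like (the field names here are an assumption, not the repository's actual schema):

```python
import json

# Hypothetical data/gpus.json entry: one GPU record with its source metadata.
entry = {
    "name": "NVIDIA Quadro P5000",
    "aliases": ["Quadro P5000", "P5000"],
    "compute_capability": "6.1",
    "architecture": "Pascal",
    "references": [
        {
            "title": "NVIDIA CUDA GPU Compute Capability",
            "url": "https://developer.nvidia.com/cuda-gpus",
        }
    ],
}

# Round-trip through JSON, as a data-file loader would.
loaded = json.loads(json.dumps(entry))
```

Keeping the `references` block next to each value is what lets a maintainer verify or update a row without reading any code.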
- Edit the relevant JSON file in `data/`.
- Keep the `references` metadata aligned with the official source used.
- Add or update tests in `tests/`.
- Run `make test`.
- Add a new data file under `data/libraries/`.
- Create an adapter in `src/gpu_compat/adapters/`.
- Register it in `src/gpu_compat/service.py`.
- Add adapter-specific tests in `tests/`.
The adapter contract is intentionally small so future support for cuDNN, TensorRT, and ONNX Runtime can be added cleanly.
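A minimal sketch of what such a contract could look like, assuming a protocol with one `check` method; the real interface in `src/gpu_compat/adapters/` may differ in names and signatures:

```python
from typing import Protocol

class LibraryAdapter(Protocol):
    """Hypothetical adapter contract for library-specific compatibility logic."""
    name: str

    def check(self, architecture: str, version: str) -> str:
        """Return 'compatible', 'incompatible', or 'compatible_with_caveats'."""
        ...

class ExampleLibtorchAdapter:
    """Toy adapter driven by a policy dict loaded from a data file."""
    name = "libtorch"

    def __init__(self, policy: dict):
        self._policy = policy  # e.g. parsed from data/libraries/libtorch.json

    def check(self, architecture: str, version: str) -> str:
        excluded = self._policy.get(version, {}).get("excluded_architectures", [])
        return "incompatible" if architecture in excluded else "compatible"
```

Because the contract is a single method over plain data, a cuDNN or TensorRT adapter would only need its own data file and `check` implementation.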
`src/gpu_compat/mcp_server.py` provides a minimal MCP server implementation when the optional `mcp` dependency is installed. The service layer can also be wrapped by other transports without changing the rule engine.
Provided tools:
- `get_supported_cuda_versions(gpu_name)`
- `check_gpu_cuda_compatibility(gpu_name, cuda_version)`
- `check_gpu_library_compatibility(gpu_name, library_name, library_version)`
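One way to picture how such tools delegate to a transport-independent service layer is a plain name-to-function registry; the function bodies and return shapes below are hypothetical placeholders, not the real service code:

```python
# Toy stand-in for the service layer; the real one lives in
# src/gpu_compat/service.py and returns richer results.
def get_supported_cuda_versions(gpu_name: str) -> dict:
    return {"gpu": gpu_name, "cuda_range": "see data/cuda_matrix.json"}

# Tool registry: an MCP transport would expose these names to clients.
TOOLS = {"get_supported_cuda_versions": get_supported_cuda_versions}

def handle_tool_call(tool_name: str, **arguments) -> dict:
    # A real MCP server also validates arguments against each tool's schema.
    if tool_name not in TOOLS:
        raise ValueError(f"unknown tool: {tool_name}")
    return TOOLS[tool_name](**arguments)
```

Since the registry only holds plain callables, the same functions can back a CLI, an HTTP API, or an MCP transport without duplication.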
- Initial scope only covers NVIDIA GPUs.
- Initial library coverage is limited to PyTorch and LibTorch.
- The data set is intentionally conservative and starts with representative cases rather than exhaustive coverage.
- Some framework-side architecture exclusions are documented through official project discussions or release notes rather than a single canonical matrix page.
- Expand GPU coverage and aliases.
- Add cuDNN, TensorRT, and ONNX Runtime adapters.
- Add richer recommendation logic for source builds and wheel/channel selection.
- Add machine-readable provenance generation for each data row.
- Add a small HTTP API layer.
```bash
make setup
make test
make lint
```

GitHub Actions is included at `.github/workflows/ci.yml` and runs pytest on pushes and pull requests.