From 7c520029c35d509f74c5c312f56910057f6df578 Mon Sep 17 00:00:00 2001
From: Jiacheng Huang
Date: Tue, 7 Apr 2026 09:28:11 +0000
Subject: [PATCH] chore: update `README.md` to reflect current status

---
 README.md | 79 +++++++++++++++++++------------------------------------
 1 file changed, 27 insertions(+), 52 deletions(-)

diff --git a/README.md b/README.md
index 875a936..6b9fc6f 100644
--- a/README.md
+++ b/README.md
@@ -1,71 +1,46 @@
 # InfiniOps
 
-InfiniOps is a high-performance, hardware-agnostic operator library.
+InfiniOps is a high-performance, cross-platform operator library supporting multiple backends: CPU, Nvidia, MetaX, Iluvatar, Moore, Cambricon, and more.
 
-## 🛠️ Prerequisites
+## Prerequisites
 
-Ensure your environment meets the following requirements based on your target backend:
+- C++17 compatible compiler
+- CMake 3.18+
+- Python 3.10+
+- Hardware-specific SDKs (e.g. CUDA Toolkit, MUSA Toolkit)
 
- - C++17 compatible compiler
- - CMake 3.18+
- - Hardware-specific SDKs (e.g., CUDA Toolkit)
+## Installation
 
----
-
-## ⚙️ Installation & Building
-
-InfiniOps uses CMake to manage backends.
-
-### 1. Setup Environment
-
-Ensure you have the corresponding SDK installed and environment variables set up for the platform/accelerator you are working on.
-
-### 2. Configure and Build
-
-Using these commands at the root directory of this project:
+Install with pip (recommended):
 
 ```bash
-mkdir build && cd build
-
-cmake ..
-
-make -j$(nproc)
+pip install .
 ```
 
-For the ``:
-
-| Option                                 | Functionality                      | Default
-|----------------------------------------|------------------------------------|:-:
-| `-DWITH_CPU=[ON\|OFF]`                 | Compile the CPU implementation     | n
-| `-DWITH_NVIDIA=[ON\|OFF]`              | Compile the NVIDIA implementation  | n
-| `-DWITH_METAX=[ON\|OFF]`               | Compile the MetaX implementation   | n
-| `-DGENERATE_PYTHON_BINDINGS=[ON\|OFF]` | Generate Python bindings           | n
-
-*Note: If no accelerator options are provided, `WITH_CPU` is enabled by default.*
-
-## 🚀 Running Examples
-After a successful build, the executables are located in the `build/examples` directory.
-
-Run the GEMM example:
+This auto-detects available platforms on supported backends. To specify platforms explicitly:
 
 ```bash
-./examples/gemm
+pip install . -C cmake.define.WITH_CPU=ON -C cmake.define.WITH_NVIDIA=ON
 ```
 
-Run the data_type example:
+### CMake Options
 
-```bash
-./examples/data_type
-```
+| Option | Description | Default |
+|---|---|:-:|
+| `-DWITH_CPU=[ON\|OFF]` | Compile the CPU implementation | OFF |
+| `-DWITH_NVIDIA=[ON\|OFF]` | Compile the Nvidia implementation | OFF |
+| `-DWITH_METAX=[ON\|OFF]` | Compile the MetaX implementation | OFF |
+| `-DWITH_ILUVATAR=[ON\|OFF]` | Compile the Iluvatar implementation | OFF |
+| `-DWITH_MOORE=[ON\|OFF]` | Compile the Moore implementation | OFF |
+| `-DWITH_CAMBRICON=[ON\|OFF]` | Compile the Cambricon implementation | OFF |
+| `-DAUTO_DETECT_DEVICES=[ON\|OFF]` | Auto-detect available platforms | ON |
 
-Run the tensor example:
+If no accelerator options are provided and auto-detection finds nothing, `WITH_CPU` is enabled by default.
 
-```bash
-./examples/tensor
-```
+## Contributing
 
-Run the pybind11 example:
+See [CONTRIBUTING.md](CONTRIBUTING.md) for code style, commit conventions, PR workflow, development guide, and troubleshooting.
 
-```bash
-PYTHONPATH=src python ../examples/gemm.py
-```
+## License
+
+See [LICENSE](LICENSE).
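
The `pip install . -C cmake.define.<VAR>=<value>` flags added by this patch follow the config-settings convention used by scikit-build-core-style build backends: each `cmake.define.*` key becomes a CMake cache definition. As an illustrative sketch (this helper is hypothetical, not part of InfiniOps), the mapping works like this:

```python
def parse_cmake_defines(config_settings):
    """Collect CMake cache definitions from pip config-settings keys.

    Keys of the form "cmake.define.<VAR>" become a {VAR: value}
    mapping, i.e. what a build backend would forward to CMake
    as -D<VAR>=<value>.
    """
    prefix = "cmake.define."
    return {
        key[len(prefix):]: value
        for key, value in config_settings.items()
        if key.startswith(prefix)
    }

# Settings as pip would collect them from:
#   pip install . -C cmake.define.WITH_CPU=ON -C cmake.define.WITH_NVIDIA=ON
settings = {"cmake.define.WITH_CPU": "ON", "cmake.define.WITH_NVIDIA": "ON"}
print(parse_cmake_defines(settings))  # {'WITH_CPU': 'ON', 'WITH_NVIDIA': 'ON'}
```

Unprefixed or unrelated keys are ignored, so CMake options and other backend settings can coexist on the same command line.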