# InfiniOps

InfiniOps is a high-performance, cross-platform operator library supporting multiple backends: CPU, Nvidia, MetaX, Iluvatar, Moore, Cambricon, and more.

## Prerequisites

- C++17 compatible compiler
- CMake 3.18+
- Python 3.10+
- Hardware-specific SDKs (e.g., CUDA Toolkit, MUSA Toolkit)

## Installation

Install with pip (recommended):

```bash
pip install .
```

By default, this auto-detects the available hardware and enables the corresponding backends. To enable backends explicitly:

```bash
pip install . -C cmake.define.WITH_CPU=ON -C cmake.define.WITH_NVIDIA=ON
```

### CMake Options

| Option | Description | Default |
|---|---|:-:|
| `-DWITH_CPU=[ON\|OFF]` | Compile the CPU implementation | OFF |
| `-DWITH_NVIDIA=[ON\|OFF]` | Compile the Nvidia implementation | OFF |
| `-DWITH_METAX=[ON\|OFF]` | Compile the MetaX implementation | OFF |
| `-DWITH_ILUVATAR=[ON\|OFF]` | Compile the Iluvatar implementation | OFF |
| `-DWITH_MOORE=[ON\|OFF]` | Compile the Moore implementation | OFF |
| `-DWITH_CAMBRICON=[ON\|OFF]` | Compile the Cambricon implementation | OFF |
| `-DAUTO_DETECT_DEVICES=[ON\|OFF]` | Auto-detect available platforms | ON |

If no accelerator options are provided and auto-detection finds nothing, `WITH_CPU` is enabled by default.
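The same options also apply to a manual out-of-tree CMake build, which can be convenient for development. A sketch of that workflow (assuming the standard CMake layout; adjust the `WITH_*` options to your hardware):

```bash
# Configure and build from the repository root.
mkdir build && cd build

# Enable the backends you need; auto-detection can be turned off explicitly.
cmake .. -DWITH_CPU=ON -DAUTO_DETECT_DEVICES=OFF

# Build with all available cores.
make -j$(nproc)
```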

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md) for code style, commit conventions, PR workflow, development guide, and troubleshooting.

## License

See [LICENSE](LICENSE).