Uses the standard ATen/LibTorch API. Use this if you need APIs not yet available in the
stable ABI. Code snippets from this implementation are shown in the
:ref:`reverting-to-non-stable-api` section.

Setting up the Build System
---------------------------

LibTorch Stable ABI (PyTorch Agnosticism)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In addition to CPython agnosticism, there is a second axis of wheel compatibility:
LibTorch agnosticism. While CPython agnosticism allows building a single wheel
that works across multiple Python versions (3.9, 3.10, 3.11, etc.), LibTorch agnosticism
allows building a single wheel that works across multiple PyTorch versions (2.10, 2.11, 2.12, etc.).
These two concepts are orthogonal and can be combined.

To achieve LibTorch agnosticism, you must use the LibTorch Stable ABI, which provides
a stable C API for interacting with PyTorch tensors and operators. For example, instead of
using ``at::Tensor``, you must use ``torch::stable::Tensor``. For comprehensive
documentation on the stable ABI, including migration guides and supported types,
see the LibTorch Stable ABI documentation.

The setup.py above already includes ``TORCH_TARGET_VERSION=0x020a000000000000``, indicating that
the extension targets the LibTorch Stable ABI with a minimum supported PyTorch version of 2.10. The version format is:
``[MAJ 1 byte][MIN 1 byte][PATCH 1 byte][ABI TAG 5 bytes]``, so 2.10.0 = ``0x020a000000000000``.

The sections below contain examples of code using the LibTorch Stable ABI.
If the stable API/ABI does not contain what you need, see the :ref:`reverting-to-non-stable-api` section
or the `extension_cpp/ subdirectory <https://github.com/pytorch/extension-cpp/tree/main/extension_cpp>`_
in the extension-cpp repository for the equivalent examples using the non-stable API.

Defining the custom op and adding backend implementations
---------------------------------------------------------

CUDA implementations are registered in a separate ``STABLE_TORCH_LIBRARY_IMPL`` block:

.. code-block:: cpp

   STABLE_TORCH_LIBRARY_IMPL(extension_cpp, CUDA, m) {
     m.impl("mymuladd", TORCH_BOX(&mymuladd_cuda));
   }

Adding ``torch.compile`` support for an operator
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When defining the operator, we must specify that it mutates the out Tensor in the
schema. Do not return any mutated Tensors as outputs of the operator as this will
cause incompatibility with PyTorch subsystems like ``torch.compile``.
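The schema annotation that declares this mutation can be sketched as follows, assuming the tutorial's ``extension_cpp`` namespace and a ``myadd_out`` operator that takes an out Tensor:

.. code-block:: cpp

   STABLE_TORCH_LIBRARY(extension_cpp, m) {
     // "Tensor(a!)" marks the out argument as mutated in place,
     // and the "-> ()" return type means no mutated Tensor is
     // ever returned as an output of the operator.
     m.def("myadd_out(Tensor a, Tensor b, Tensor(a!) out) -> ()");
   }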

.. _reverting-to-non-stable-api:

Reverting to the Non-Stable LibTorch API
----------------------------------------

The LibTorch Stable ABI/API is still under active development, and certain APIs may not
yet be available in ``torch/csrc/stable``, ``torch/headeronly``, or the C shims
(``torch/csrc/stable/c/shim.h``).

If you need an API that is not yet available in the stable ABI/API, you can revert to
the regular ATen API by:

1. Removing ``-DTORCH_TARGET_VERSION`` from your ``extra_compile_args``
2. Using ``TORCH_LIBRARY`` instead of ``STABLE_TORCH_LIBRARY``
3. Using ``TORCH_LIBRARY_IMPL`` instead of ``STABLE_TORCH_LIBRARY_IMPL``
4. Reverting to ATen APIs (e.g. using ``at::Tensor`` instead of ``torch::stable::Tensor``)

Note that doing so means you will need to build separate wheels for each PyTorch
version you want to support.

We provide code snippets for ``mymuladd`` below to illustrate. The changes for the
CUDA variant, ``mymul``, and ``myadd_out`` are similar in nature and can be found in the
`PyTorch 2.9.1 version of this tutorial <https://github.com/pytorch/tutorials/blob/10eefc3b761a5b5407862b2336493b7ab859640f/advanced_source/cpp_custom_ops.rst>`_,
which uses the non-stable API, as well as in
`this commit of the extension-cpp repository <https://github.com/pytorch/extension-cpp/tree/0ec4969c7bc8e15a8456e5eb9d9ca0a7ec15bc95>`_.
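As a simplified sketch of such a non-stable registration (not the tutorial's exact snippet, which loops over data pointers; here the kernel body is condensed to ATen arithmetic for brevity):

.. code-block:: cpp

   #include <ATen/ATen.h>
   #include <torch/library.h>

   // Non-stable variant: at::Tensor replaces torch::stable::Tensor.
   at::Tensor mymuladd_cpu(const at::Tensor& a, const at::Tensor& b, double c) {
     TORCH_CHECK(a.sizes() == b.sizes());
     return a * b + c;
   }

   // TORCH_LIBRARY / TORCH_LIBRARY_IMPL replace the STABLE_ variants,
   // and a plain function pointer replaces TORCH_BOX.
   TORCH_LIBRARY(extension_cpp, m) {
     m.def("mymuladd(Tensor a, Tensor b, float c) -> Tensor");
   }

   TORCH_LIBRARY_IMPL(extension_cpp, CPU, m) {
     m.impl("mymuladd", &mymuladd_cpu);
   }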