Commit c473f6a

Merge branch 'main' into test_mosaic_tutorial_2.11
2 parents 6e486af + 084b358 commit c473f6a

3 files changed: 4 additions & 4 deletions

intermediate_source/compiled_autograd_tutorial.rst (1 addition & 1 deletion)

@@ -16,7 +16,7 @@ Compiled Autograd: Capturing a larger backward graph for ``torch.compile``
 
 * PyTorch 2.4
 * Complete the `Introduction to torch.compile <https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html>`_
-* Read through the TorchDynamo and AOTAutograd sections of `Get Started with PyTorch 2.x <https://pytorch.org/get-started/pytorch-2.0/>`_
+* Read through the TorchDynamo and AOTAutograd sections of `Get Started with PyTorch 2.x <https://pytorch.org/get-started/pytorch-2-x/>`_
 
 Overview
 --------

intermediate_source/pinmem_nonblock.py (1 addition & 1 deletion)

@@ -127,7 +127,7 @@
 # 1. The device must have at least one free DMA (Direct Memory Access) engine. Modern GPU architectures such as Volterra,
 # Tesla, or H100 devices have more than one DMA engine.
 #
-# 2. The transfer must be done on a separate, non-default cuda stream. In PyTorch, cuda streams can be handles using
+# 2. The transfer must be done on a separate, non-default cuda stream. In PyTorch, cuda streams can be handled using
 # :class:`~torch.cuda.Stream`.
 #
 # 3. The source data must be in pinned memory.
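The three conditions in the hunk above can be sketched in a short example. This is an illustrative sketch, not code from the tutorial: it assumes a CUDA-capable build of PyTorch (and degrades to a plain copy otherwise), and the function name and tensor shape are invented for illustration.

```python
import torch

# Hypothetical helper sketching the three conditions for an overlapping
# host-to-device copy. Falls back to returning the input on CPU-only machines.
def overlapped_copy(t: torch.Tensor) -> torch.Tensor:
    if not torch.cuda.is_available():
        return t  # nothing to overlap without a GPU
    pinned = t.pin_memory()            # condition 3: source in pinned memory
    side_stream = torch.cuda.Stream()  # condition 2: non-default CUDA stream
    with torch.cuda.stream(side_stream):
        # condition 1 (a free DMA engine) is provided by the hardware itself;
        # non_blocking=True lets the copy proceed asynchronously on this stream
        on_gpu = pinned.to("cuda", non_blocking=True)
    side_stream.synchronize()          # wait for the async copy to complete
    return on_gpu

cpu_t = torch.randn(256, 256)
result = overlapped_copy(cpu_t)
```

Synchronizing the side stream before using `result` matters: with `non_blocking=True`, the copy may still be in flight when the `with` block exits.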

recipes_source/intel_neural_compressor_for_pytorch.rst (2 additions & 2 deletions)

@@ -17,7 +17,7 @@ Features
 
 - **Accuracy-driven Tuning:** Intel® Neural Compressor supports accuracy-driven automatic tuning process, provides ``autotune`` API for user usage.
 
-- **Kinds of Quantization:** Intel® Neural Compressor supports a variety of quantization methods, including classic INT8 quantization, weight-only quantization and the popular FP8 quantization. Neural compressor also provides the latest research in simulation work, such as MX data type emulation quantization. For more details, please refer to `Supported Matrix <https://github.com/intel/neural-compressor/blob/master/docs/source/3x/PyTorch.md#supported-matrix>`_.
+- **Kinds of Quantization:** Intel® Neural Compressor supports a variety of quantization methods, including classic INT8 quantization, weight-only quantization and the popular FP8 quantization. Neural compressor also provides the latest research in simulation work, such as MX data type emulation quantization. For more details, please refer to `Supported Matrix <https://github.com/intel/neural-compressor/blob/master/docs/source/PyTorch.md#supported-matrix>`_.
 
 Getting Started
 ---------------
@@ -158,4 +158,4 @@ To leverage accuracy-driven automatic tuning, a specified tuning space is necessary
 Tutorials
 ---------
 
-More detailed tutorials are available in the official Intel® Neural Compressor `doc <https://intel.github.io/neural-compressor/latest/docs/source/Welcome.html>`_.
+More detailed tutorials are available in the official Intel® Neural Compressor `doc <https://www.intel.com/content/www/us/en/developer/tools/oneapi/neural-compressor.html>`_.
