Commit 26cff23

Author: danko (committed)

Fix of small typo in pinmem_nonblock.py

1 parent 612d2a0 · commit 26cff23

1 file changed: 1 addition & 1 deletion

intermediate_source/pinmem_nonblock.py
@@ -127,7 +127,7 @@
 # 1. The device must have at least one free DMA (Direct Memory Access) engine. Modern GPU architectures such as Volterra,
 # Tesla, or H100 devices have more than one DMA engine.
 #
-# 2. The transfer must be done on a separate, non-default cuda stream. In PyTorch, cuda streams can be handles using
+# 2. The transfer must be done on a separate, non-default cuda stream. In PyTorch, cuda streams can be handled using
 # :class:`~torch.cuda.Stream`.
 #
 # 3. The source data must be in pinned memory.
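The three conditions listed in the patched comment (a free DMA engine, a non-default CUDA stream, and pinned source memory) can be sketched in code. The snippet below is a minimal illustration, not part of the commit itself; tensor shapes and variable names are arbitrary, and it only exercises the GPU path when CUDA is actually available.

```python
import torch

if torch.cuda.is_available():
    # Condition 3: allocate the source tensor in pinned (page-locked)
    # host memory so the copy engine can access it via DMA.
    src = torch.randn(1024, 1024, pin_memory=True)

    # Condition 2: create a separate, non-default CUDA stream
    # (see :class:`~torch.cuda.Stream`) to host the transfer.
    copy_stream = torch.cuda.Stream()

    with torch.cuda.stream(copy_stream):
        # non_blocking=True lets the host-to-device copy run
        # asynchronously on `copy_stream`, overlapping with any
        # work queued on the default stream.
        dst = src.to("cuda", non_blocking=True)

    # Synchronize before consuming `dst` on the default stream.
    torch.cuda.current_stream().wait_stream(copy_stream)
```

Condition 1 (a free DMA engine) is a hardware property and is not controllable from Python; the code simply benefits from it when the device provides more than one copy engine.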
