inference/README.md: 5 additions & 3 deletions
@@ -40,7 +40,7 @@ How it works: The model fills up the max available vRAM on the first device pass
**Decrease MAX_vRAM if you run into CUDA OOM. This happens because each input takes up additional space on the device.**
-**NOTE: Total MAX_vRAM across all devices must be > size of the model in GB. If not, you'll need to offload parts of the model to CPU: [refer to this section on running on consumer hardware](#running-on-consumer-hardware).**
+**NOTE: Total MAX_vRAM across all devices must be > size of the model in GB. If not, `bot.py` automatically offloads the rest of the model to RAM and disk. It will use up all available RAM. To allocate a specified amount of RAM: [refer to this section on running on consumer hardware](#running-on-consumer-hardware).**
## Running on specific GPUs
If you have multiple GPUs but would like to use only a specific device (or devices), [use the same steps as in this section on running on multiple devices](#running-on-multiple-gpus) and only specify the devices you'd like to use.
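For instance, assuming the same `-g DEVICE:MAX_vRAM` syntax shown in the consumer-hardware example further down, a minimal sketch of restricting inference to a single GPU might look like this (the device index and vRAM cap are placeholder values):

```bash
# Hypothetical sketch: run only on CUDA device 1, capping it at 40 GiB of vRAM.
# Adjust the device index and the MAX_vRAM value to match your hardware.
python bot.py -g 1:40
```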
@@ -58,12 +58,14 @@ If you have multiple GPUs, each <48 GB vRAM, [the steps mentioned in this sectio
- <48 GB vRAM combined across multiple GPUs
- Running into Out-Of-Memory (OOM) issues
-In which case, add the flag `-r CPU_RAM` where CPU_RAM is the maximum amount of RAM you'd like to allocate to loading the model. Note: This significantly reduces inference speeds.
+In which case, add the flag `-r CPU_RAM` where CPU_RAM is the maximum amount of RAM you'd like to allocate to loading the model. Note: This significantly reduces inference speeds.
+
+The model will load without specifying `-r`; however, this is not recommended because it will allocate all available RAM to the model. To limit how much RAM the model can use, add `-r`.
If the total vRAM + CPU_RAM < the size of the model in GiB, the rest of the model will be offloaded to a folder "offload" at the root of the directory. Note: This significantly reduces inference speeds.
- Example: `-g 0:12 -r 20` will first load up to 12 GiB of the model onto CUDA device 0, then load up to 20 GiB into RAM, and load the rest into the "offload" directory.
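Putting the flags together, a minimal sketch of a memory-constrained invocation might look like the following (the `python bot.py` entry point is assumed from the file name mentioned above; the values are simply the example figures quoted in the line above):

```bash
# Sketch of a memory-constrained run (entry point bot.py assumed).
# -g 0:12  -> load up to 12 GiB of the model onto CUDA device 0
# -r 20    -> load up to 20 GiB of the model into CPU RAM
# Whatever still doesn't fit is offloaded to the "offload" folder at the
# root of the directory, which further reduces inference speed.
python bot.py -g 0:12 -r 20
```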