<a href="/examples/single-node-training/trl"
class="feature-cell">
<h3>
TRL
</h3>
<p>
Fine-tune Llama 3.1 8B on a custom dataset using TRL.
</p>
</a>
<a href="/examples/single-node-training/axolotl"
class="feature-cell">
<h3>
Axolotl
</h3>
<p>
Fine-tune Llama 4 on a custom dataset using Axolotl.
</p>
</a>
<a href="/examples/distributed-training/trl"
class="feature-cell sky">
<h3>
TRL
</h3>
<p>
Fine-tune an LLM on multiple nodes
with TRL, Accelerate, and DeepSpeed.
</p>
</a>
<a href="/examples/distributed-training/axolotl"
class="feature-cell sky">
<h3>
Axolotl
</h3>
<p>
Fine-tune an LLM on multiple nodes
with Axolotl.
</p>
</a>
<a href="/examples/distributed-training/ray-ragen"
class="feature-cell sky">
<h3>
Ray+RAGEN
</h3>
<p>
Fine-tune an agent on multiple nodes
with RAGEN, verl, and Ray.
</p>
</a>
<a href="/examples/distributed-training/open-r1"
class="feature-cell sky">
<h3>
Open-R1
</h3>
<p>
Fine-tune a model to reproduce DeepSeek-R1 with separate inference and training nodes.
</p>
</a>
<a href="/examples/clusters/nccl-tests"
class="feature-cell sky">
<h3>
NCCL tests
</h3>
<p>
Run multi-node NCCL tests with MPI.
</p>
</a>
<a href="/examples/clusters/rccl-tests"
class="feature-cell sky">
<h3>
RCCL tests
</h3>
<p>
Run multi-node RCCL tests with MPI.
</p>
</a>
<a href="/examples/clusters/a3mega"
class="feature-cell sky">
<h3>
A3 Mega
</h3>
<p>
Set up GCP A3 Mega clusters with optimized networking.
</p>
</a>
<a href="/examples/clusters/a3high"
class="feature-cell sky">
<h3>
A3 High
</h3>
<p>
Set up GCP A3 High clusters with optimized networking.
</p>
</a>
<a href="/examples/deployment/sglang"
class="feature-cell sky">
<h3>
SGLang
</h3>
<p>
Deploy DeepSeek distilled models with SGLang.
</p>
</a>
<a href="/examples/deployment/vllm"
class="feature-cell sky">
<h3>
vLLM
</h3>
<p>
Deploy Llama 3.1 with vLLM.
</p>
</a>
<a href="/examples/deployment/tgi"
class="feature-cell sky">
<h3>
TGI
</h3>
<p>
Deploy Llama 4 with TGI.
</p>
</a>
<a href="/examples/deployment/nim"
class="feature-cell sky">
<h3>
NIM
</h3>
<p>
Deploy a DeepSeek distilled model with NIM.
</p>
</a>
<a href="/examples/deployment/trtllm"
class="feature-cell sky">
<h3>
TensorRT-LLM
</h3>
<p>
Deploy DeepSeek models with TensorRT-LLM.
</p>
</a>
<a href="/examples/accelerators/amd"
class="feature-cell sky">
<h3>
AMD
</h3>
<p>
Deploy and fine-tune LLMs on AMD.
</p>
</a>
<a href="/examples/accelerators/tpu"
class="feature-cell sky">
<h3>
TPU
</h3>
<p>
Deploy and fine-tune LLMs on TPU.
</p>
</a>
<a href="/examples/accelerators/intel"
class="feature-cell sky">
<h3>
Intel Gaudi
</h3>
<p>
Deploy and fine-tune LLMs on Intel Gaudi.
</p>
</a>
<a href="/examples/accelerators/tenstorrent"
class="feature-cell sky">
<h3>
Tenstorrent
</h3>
<p>
Deploy and fine-tune LLMs on Tenstorrent.
</p>
</a>