This repository was archived by the owner on Sep 18, 2025. It is now read-only.

Commit d70e142

Remove CPREFIX (#135)
* Remove cleanup.sh
* Fix Dockerfile-ub22 for logs folder
* Remove CPREFIX
* Removed CPREFIX
1 parent db5e739 commit d70e142

3 files changed: 4 additions & 4 deletions

PyTorch/vLLM_Tutorials/Deploying_vLLM/Dockerfile-1.21.0-ub22-vllm-v0.7.2+Gaudi

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@ FROM vault.habana.ai/gaudi-docker/1.21.0/ubuntu24.04/habanalabs/pytorch-installe
 # BUILD_ARGS="--build-arg http_proxy --build-arg https_proxy --build-arg no_proxy"
 # CNAME=vllm-v0.7.2-gaudi-ub22:1.21.0-555
 # docker pull vault.habana.ai/gaudi-docker/1.21.0/ubuntu24.04/habanalabs/pytorch-installer-2.6.0
-# docker build -f Dockerfile-1.21.0-ub22-vllm-v0.7.2+Gaudi -t ${CPREFIX}${CNAME} $BUILD_ARGS .
+# docker build -f Dockerfile-1.21.0-ub22-vllm-v0.7.2+Gaudi -t ${CNAME} $BUILD_ARGS .
 
 ENV OMPI_MCA_btl_vader_single_copy_mechanism=none
 
```

PyTorch/vLLM_Tutorials/Deploying_vLLM/Dockerfile-1.21.0-ub24-vllm-v0.7.2+Gaudi

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@ FROM vault.habana.ai/gaudi-docker/1.21.0/ubuntu24.04/habanalabs/pytorch-installe
 # BUILD_ARGS="--build-arg http_proxy --build-arg https_proxy --build-arg no_proxy"
 # CNAME=vllm-v0.7.2-gaudi-ub24:1.21.0-555
 # docker pull vault.habana.ai/gaudi-docker/1.21.0/ubuntu24.04/habanalabs/pytorch-installer-2.6.0
-# docker build -f Dockerfile-1.21.0-ub24-vllm-v0.7.2+Gaudi -t ${CPREFIX}${CNAME} $BUILD_ARGS .
+# docker build -f Dockerfile-1.21.0-ub24-vllm-v0.7.2+Gaudi -t ${CNAME} $BUILD_ARGS .
 
 ENV OMPI_MCA_btl_vader_single_copy_mechanism=none
 
```

PyTorch/vLLM_Tutorials/Deploying_vLLM/README.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -188,7 +188,7 @@ docker run -it --rm \
 -e tensor_parallel_size=4 \
 -e max_model_len=8192 \
 --name vllm-server1 \
-${CPREFIX}${CNAME}
+${CNAME}
 ```
 
 ```
@@ -209,7 +209,7 @@ docker run -it --rm \
 -e tensor_parallel_size=4 \
 -e max_model_len=8192 \
 --name vllm-server2 \
-${CPREFIX}${CNAME}
+${CNAME}
 ```
 4) To view vllm-server logs, run this in a separate terminal:
 ```bash
````
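The edits across all three files are mechanical: every `${CPREFIX}${CNAME}` image reference becomes plain `${CNAME}`. One plausible reading of why this is a safe cleanup is that when `CPREFIX` (an optional registry prefix) is unset or empty, both spellings already expand to the same tag. A minimal shell sketch of that equivalence, using the `CNAME` value from the Dockerfile comments (the `unset` and the `tags match` check are illustrative, not part of the commit):

```shell
#!/bin/sh
# CNAME is copied from the Dockerfile comments in this commit.
CNAME=vllm-v0.7.2-gaudi-ub22:1.21.0-555

# Simulate the common case where no registry prefix was ever set.
unset CPREFIX

old_tag="${CPREFIX}${CNAME}"   # tag expression before this commit
new_tag="${CNAME}"             # tag expression after this commit

echo "old: $old_tag"
echo "new: $new_tag"
[ "$old_tag" = "$new_tag" ] && echo "tags match"
```

With `CPREFIX` empty, POSIX parameter expansion makes the two tags byte-identical, so only users who relied on a non-empty `CPREFIX` would need to fold the prefix into `CNAME` themselves.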
