Commit 56b670b

SecAI-Hub and claude committed

Fix build: add libcurl-devel dep + LLAMA_CURL=OFF fallback for llama.cpp

llama.cpp b5200 requires libcurl at configure time (common/CMakeLists.txt:90).

Add libcurl-devel to the build dependencies so the CUDA and Vulkan builds get CURL support. Add -DLLAMA_CURL=OFF to the CPU-only fallback as a safety net, in case libcurl-devel is unavailable on a minimal build host.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
1 parent f7f263d commit 56b670b

1 file changed: files/scripts/build-services.sh (2 additions, 2 deletions)
@@ -11,7 +11,7 @@ SRC_DIR="/tmp/secure-ai-build"
 echo "=== Building Secure AI services ==="
 
 # Install build dependencies
-dnf install -y golang python3 python3-pip cmake gcc gcc-c++ 2>/dev/null || true
+dnf install -y golang python3 python3-pip cmake gcc gcc-c++ libcurl-devel 2>/dev/null || true
 
 mkdir -p "$INSTALL_DIR" "$SRC_DIR"
 
@@ -188,7 +188,7 @@ cmake -B build -DGGML_CUDA=ON -DGGML_VULKAN=ON -DBUILD_SHARED_LIBS=OFF \
 { rm -rf build && cmake -B build -DGGML_VULKAN=ON -DBUILD_SHARED_LIBS=OFF \
   -DCMAKE_BUILD_TYPE=Release 2>/dev/null; } || \
 { rm -rf build && cmake -B build -DBUILD_SHARED_LIBS=OFF \
-  -DCMAKE_BUILD_TYPE=Release; }
+  -DLLAMA_CURL=OFF -DCMAKE_BUILD_TYPE=Release; }
 cmake --build build --target llama-server -j"$(nproc)"
 install -m 755 build/bin/llama-server /usr/bin/llama-server
 echo " -> /usr/bin/llama-server"
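The second hunk relies on shell `||` chaining: each `{ rm -rf build && cmake ...; }` group runs only if the previous configure attempt failed, and each tier wipes the stale build directory so CMake does not reuse a cache from the failed GPU configure. A minimal sketch of that fallback pattern, with a hypothetical `step` function standing in for the actual `rm -rf build && cmake -B build ...` attempt (the tier names and simulated exit codes are illustrative, not part of the real script):

```shell
#!/bin/sh
# Sketch of the configure-fallback cascade in build-services.sh.
# `step` is a stand-in for "rm -rf build && cmake -B build <flags>";
# its second argument simulates the configure exit status (0 = success).

step() {
    echo "configuring: $1"
    return "$2"
}

# Simulate: CUDA+Vulkan fails, Vulkan fails, CPU-only (LLAMA_CURL=OFF) succeeds.
# Each { ...; } group is attempted only if the previous one returned nonzero.
{ step "cuda+vulkan" 1; } || \
{ step "vulkan" 1; } || \
{ step "cpu, LLAMA_CURL=OFF" 0; }
echo "chain exit status: $?"
```

Because `-DLLAMA_CURL=OFF` appears only in the last tier, a host missing libcurl-devel still reaches a working CPU-only configure instead of failing outright.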
