High-Performance AI-Native Web Server — built in C & Assembly for ultra-fast AI inference and streaming.
Updated Jan 20, 2026 · C
This repository features an application example for Siemens' Industrial AI Vision Blueprint.
g023's TurboXInf 🚀: 2x+ faster inference for Qwen3-1.77B or Qwen3.5-2B on an RTX 3060. Custom Triton INT8 GEMV kernels halve weight-memory traffic by fusing dequantization into the matrix-vector product, paired with torch.compile. Hits 113 tok/s (vs. a 56.4 tok/s baseline) with no quality loss at INT8, and even better results with INT4. MIT License.
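The fusion idea behind kernels like this can be sketched without Triton: store the weights as INT8 plus a per-row scale, and apply the scale inside the matrix-vector product itself, so the only large tensor ever read from memory is the INT8 matrix and no full-precision copy of the weights is materialized. This is a minimal NumPy illustration of the arithmetic, not the repository's actual kernel; all function names here are hypothetical.

```python
import numpy as np

def quantize_rows(w: np.ndarray):
    """Symmetric per-row INT8 quantization: w ≈ w_q * scale[:, None].

    Returns the int8 weight matrix and one fp32 scale per row.
    """
    scale = np.abs(w).max(axis=1) / 127.0
    w_q = np.clip(np.round(w / scale[:, None]), -127, 127).astype(np.int8)
    return w_q, scale.astype(np.float32)

def gemv_int8(w_q: np.ndarray, scale: np.ndarray, x: np.ndarray) -> np.ndarray:
    """GEMV with dequantization 'fused' into the product.

    The int8 weights are the only matrix-sized read; the cast to fp32
    happens element-by-element during the multiply (in a real Triton
    kernel this cast lives in registers), and the per-row scale is
    applied once to the accumulated result instead of to every weight.
    """
    return (w_q.astype(np.float32) @ x) * scale

# Small usage sketch with synthetic data.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512)).astype(np.float32)
x = rng.standard_normal(512).astype(np.float32)
W_q, s = quantize_rows(W)
y = gemv_int8(W_q, s, x)
```

Because the scale is constant along each row, factoring it out of the dot product is exact: `(W_q * s[:, None]) @ x == (W_q @ x) * s`, which is why fusing dequantization costs no accuracy relative to dequantizing the whole matrix first.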
A comprehensive framework for multi-node, multi-GPU scalable LLM inference on HPC systems using vLLM and Ollama. Includes distributed deployment templates, benchmarking workflows, and chatbot/RAG pipelines for high-throughput, production-grade AI services.
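One piece a multi-node serving setup like this needs is a way to spread requests across the per-node inference endpoints. Below is a minimal round-robin dispatcher sketch; the node URLs are placeholders, and a production router would add health checks, retries, and load-aware scheduling rather than a bare cycle.

```python
from itertools import cycle

class RoundRobinRouter:
    """Cycle requests across a fixed pool of inference-server endpoints.

    Endpoints are hypothetical OpenAI-compatible base URLs such as the
    ones a per-node vLLM server would expose.
    """

    def __init__(self, endpoints: list[str]):
        if not endpoints:
            raise ValueError("need at least one endpoint")
        self.endpoints = list(endpoints)
        self._cycle = cycle(self.endpoints)

    def next_endpoint(self) -> str:
        """Return the endpoint the next request should be sent to."""
        return next(self._cycle)

# Placeholder hostnames for a 4-node deployment.
nodes = [f"http://node{i}:8000/v1" for i in range(4)]
router = RoundRobinRouter(nodes)
```

A client would call `router.next_endpoint()` before each request and POST to that base URL; round-robin keeps per-node queue depth roughly balanced when requests are similar in cost.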