# multi-turboquant

Here are 2 public repositories matching this topic...

Native Windows build of vLLM 0.19.1 — no WSL, no Docker. Pre-built wheels + 34-file Windows patch + Multi-TurboQuant KV cache compression (6 methods, 2x cache capacity). PyTorch 2.10 + CUDA 12.6 + Triton + Flash-Attention 2.

  • Updated Apr 26, 2026
  • Python
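
The listing above does not explain how Multi-TurboQuant itself works. As a rough, hypothetical illustration of the general idea behind KV cache quantization (not this repository's actual methods or API), the sketch below quantizes a key/value tensor to int8 with one per-token scale; halving the bytes per element is what allows roughly 2x cache capacity in the same memory budget.

```python
# Hypothetical sketch of symmetric int8 KV cache quantization.
# This is NOT the Multi-TurboQuant implementation; it only illustrates
# why 8-bit storage roughly doubles cache capacity versus fp16.
import torch


def quantize_kv(x: torch.Tensor):
    """Quantize a [tokens, heads, head_dim] fp16 tensor to int8, one scale per token."""
    # Per-token absolute maximum -> one fp16 scale per token.
    scale = x.abs().amax(dim=(-2, -1), keepdim=True).clamp(min=1e-6) / 127.0
    q = torch.round(x / scale).clamp(-127, 127).to(torch.int8)
    return q, scale.to(torch.float16)


def dequantize_kv(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate fp16 tensor for use in attention."""
    return q.to(torch.float16) * scale


if __name__ == "__main__":
    keys = torch.randn(4, 8, 64).to(torch.float16)  # [tokens, heads, head_dim]
    q, scale = quantize_kv(keys)
    approx = dequantize_kv(q, scale)
    # int8 storage is half the size of fp16, so the same memory budget
    # holds roughly twice as many cached tokens.
    print("max abs error:", (keys - approx).abs().max().item())
    print("bytes fp16:", keys.numel() * 2, "bytes int8:", q.numel())
```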
