Rapid-MLX: Fast local OpenAI-compatible LLM provider for LlamaIndex on Mac #21123
raullenchai started this conversation in Show and tell
Sharing Rapid-MLX as a fast local LLM provider for LlamaIndex on Apple Silicon.
Setup with LlamaIndex:

Start the server:

```
rapid-mlx serve qwen3.5-9b
```
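Once the server is up, point LlamaIndex at it. Here is a minimal sketch using LlamaIndex's generic OpenAI-compatible adapter (the `llama-index-llms-openai-like` package); the base URL and port are assumptions, so use whatever address `rapid-mlx serve` prints:

```python
from llama_index.llms.openai_like import OpenAILike

# Rapid-MLX speaks the OpenAI API, so the generic OpenAI-like adapter works.
# The base URL and port below are assumptions; use the address the server prints.
llm = OpenAILike(
    model="qwen3.5-9b",
    api_base="http://localhost:8080/v1",
    api_key="not-needed",  # local server; the key is typically ignored
    is_chat_model=True,
)

print(llm.complete("Say hello from Apple Silicon.").text)
```

To use it across LlamaIndex query engines, you can also set it as the default via `Settings.llm = llm` (from `llama_index.core`).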
Why Rapid-MLX?

It runs models locally on Apple Silicon via MLX and exposes an OpenAI-compatible API, so existing LlamaIndex code can point at it without changes.

Benchmarks:
Install:

```
brew install raullenchai/rapid-mlx/rapid-mlx
```

or

```
pip install rapid-mlx
```
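Because the server is OpenAI-compatible, a quick sanity check with the standard `openai` Python client should also work; the URL and port here are the same assumption as above:

```python
from openai import OpenAI

# Any OpenAI-style client can talk to the local server; the key is a placeholder.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="qwen3.5-9b",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```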