docs: update CHANGELOG and README with LM Studio embedding improvements
CHANGELOG updates:
- Added new section documenting LM Studio provider fixes
- Documented with_auto_from_env() support for LM Studio
- Documented embeddings-lmstudio feature flag addition
- Documented architectural consolidation to single config path
- Explained impact: LM Studio now works in all code paths
README updates:
- Added LM Studio as explicit embedding provider option
- Added side-by-side comparison of Ollama vs LM Studio providers
- Updated LM Studio setup with new build commands (Makefile + feature flags)
- Added environment variable configuration option
- Fixed LM Studio URL to include /v1 endpoint
- Improved clarity on supported embedding models for LM Studio
Both files now accurately reflect the current state of LM Studio support.
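As a sketch of what the build and environment changes above amount to: the feature flag name `embeddings-lmstudio` comes from the changelog entry, but the exact Makefile target and the provider value `lmstudio` are assumptions here, not confirmed by this commit — check the README diff below and the Makefile for the authoritative commands.

```shell
# Illustrative only: enable LM Studio embedding support via the new feature flag.
cargo build --release --features embeddings-lmstudio

# Illustrative only: configure the provider through the environment,
# which with_auto_from_env() is documented to pick up.
export CODEGRAPH_EMBEDDING_PROVIDER=lmstudio   # assumed provider value
export CODEGRAPH_LMSTUDIO_URL=http://localhost:1234/v1
```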
README.md (+31 −6)

````diff
@@ -23,18 +23,30 @@ CodeGraph indexes your source code to a graph database, creates semantic embeddi
 ### Local Embeddings & Reranking (SurrealDB)
 
-CodeGraph now writes Ollama/LM Studio embeddings directly into SurrealDB's dedicated HNSW columns. Pick the model you want and set the matching env vars before running `codegraph index`:
+CodeGraph supports multiple local embedding providers (Ollama, LM Studio, ONNX) and writes embeddings directly into SurrealDB's dedicated HNSW columns. Pick the provider you want and set the matching env vars before running `codegraph index`:
 
+**Option 1: Ollama**
 ```bash
 export CODEGRAPH_EMBEDDING_PROVIDER=ollama
 export CODEGRAPH_EMBEDDING_MODEL=qwen3-embedding:0.6b # or all-mini-llm, qwen3-embedding:4b, embeddinggemma etc.
…
+export CODEGRAPH_LMSTUDIO_MODEL=jina-embeddings-v3 # or jina-embeddings-v4, qwen3-embedding-0.6b, nomic-embed-text-v1.5, etc.
+export CODEGRAPH_LMSTUDIO_URL=http://localhost:1234/v1 # Default LM Studio endpoint
+export CODEGRAPH_EMBEDDING_DIMENSION=1024 # Auto-detected for 20+ models, or set manually
+```
 
-# Optional local reranking (LM Studio exposes an OpenAI-compatible reranker endpoint)
+**Optional local reranking:**
+```bash
+# LM Studio exposes an OpenAI-compatible reranker endpoint
 export CODEGRAPH_RERANKING_PROVIDER=lmstudio
 ```
 
-We automatically route embeddings to `embedding_384`, `embedding_768`, `embedding_1024`, `embedding_2048`, `embedding_2056`, or `embedding_4096` and keep reranking disabled unless a provider is configured.
+We automatically route embeddings to `embedding_384`, `embedding_768`, `embedding_1024`, `embedding_2048`, `embedding_2560`, or `embedding_4096` columns and keep reranking disabled unless a provider is configured.
````
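The dimension-to-column routing described in that last diff line can be sketched in shell. This is purely illustrative — CodeGraph performs this routing internally — and the set of supported dimensions is taken from the column names listed above:

```shell
# Sketch of the routing rule: the embedding dimension selects one of
# SurrealDB's fixed-size HNSW columns (illustrative, not CodeGraph code).
dim="${CODEGRAPH_EMBEDDING_DIMENSION:-1024}"
case "$dim" in
  384|768|1024|2048|2560|4096)
    column="embedding_${dim}"
    ;;
  *)
    echo "no HNSW column for dimension $dim" >&2
    exit 1
    ;;
esac
echo "routing embeddings to column: $column"
```

For example, with `CODEGRAPH_EMBEDDING_DIMENSION` unset, the default of 1024 routes to `embedding_1024`.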