# Quick Guide: Changing Embedding/LLM Settings in the PullData Web UI or API
## Quick Start: LM Studio

```bash
# 1. Start LM Studio and load your models
# 2. Edit config before starting server
cp configs/default.yaml configs/lm_studio.yaml
notepad configs/lm_studio.yaml  # Edit as shown below
# 3. Start server
python run_server.py
# 4. In Web UI: select "lm_studio" from the config dropdown
```

Example `configs/lm_studio.yaml`:

```yaml
models:
  embedder:
    provider: api
    api:
      base_url: http://localhost:1234/v1
      model: nomic-embed-text-v1.5   # Must be loaded in LM Studio
      api_key: sk-dummy
  llm:
    provider: api
    api:
      base_url: http://localhost:1234/v1
      model: qwen2.5-3b-instruct     # Must be loaded in LM Studio
      api_key: sk-dummy
storage:
  backend: local
```

How the Web UI uses configs:

- Create/edit config files in the `configs/` directory
- Start the server: `python run_server.py`
- The Web UI auto-discovers all `.yaml` files in `configs/`
- During ingest: select a config from the dropdown (or use the default)
- During query: optionally override with a different config
- The server automatically loads the selected config
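The auto-discovery step above amounts to a directory scan. The sketch below is an illustration of that idea, not PullData's actual implementation; `discover_configs` is a hypothetical helper name:

```python
from pathlib import Path

def discover_configs(config_dir: str = "configs") -> list[str]:
    """Return the .yaml filenames a dropdown like the Web UI's could list."""
    return sorted(p.name for p in Path(config_dir).glob("*.yaml"))
```

Because discovery is just a scan of `configs/`, dropping in a new `.yaml` file and refreshing the page is enough to make it selectable.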
To compare models, keep one config per LLM:

```bash
# Create configs for each LLM
cp configs/default.yaml configs/gpt4.yaml    # Edit for GPT-4
cp configs/default.yaml configs/claude.yaml  # Edit for Claude
cp configs/default.yaml configs/local.yaml   # Edit for local models
# In Web UI: select from the dropdown to test each
```

Development (use an API, no GPU needed):

- Select `lm_studio.yaml` or `openai.yaml`

Production (use local models):

- Select `local_gpu.yaml`
| Project | Config | Use Case |
|---|---|---|
| finance_reports | `openai.yaml` | High quality, paid API |
| experiments | `lm_studio.yaml` | Free local testing |
| production | `local_gpu.yaml` | Self-hosted, private |
```
pulldata/
└── configs/
    ├── default.yaml    # Used if no config selected
    ├── lm_studio.yaml  # Your LM Studio API config
    ├── openai.yaml     # Your OpenAI config
    ├── local_gpu.yaml  # Your local model config
    └── ...             # Add more as needed
```
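Since every API-style config in this directory shares the same required fields, a quick sanity check can catch typos before the server loads a file. A minimal sketch over an already-parsed config dict (the `validate_config` helper is hypothetical, but the key names match the example config shown earlier):

```python
def validate_config(cfg: dict) -> list[str]:
    """Return dotted paths of required keys missing from a parsed config."""
    missing = []
    for section in ("embedder", "llm"):
        api = cfg.get("models", {}).get(section, {}).get("api", {})
        for key in ("base_url", "model", "api_key"):
            if key not in api:
                missing.append(f"models.{section}.api.{key}")
    if "backend" not in cfg.get("storage", {}):
        missing.append("storage.backend")
    return missing
```

Note this checks only the API-provider layout; local-model configs may have a different shape.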
For API keys, use a `.env` file:

```bash
# .env (don't commit this!)
OPENAI_API_KEY=sk-proj-...
GROQ_API_KEY=gsk_...
LM_STUDIO_BASE_URL=http://localhost:1234/v1
```

Then reference them in your configs:

```yaml
api:
  api_key: ${OPENAI_API_KEY}  # Loaded from .env
```

Ingesting from the Web UI:

- Select a project
- Select a config from the "Configuration" dropdown
- Upload files
- Click "Ingest"

Querying from the Web UI:

- Select a project (uses its original config)
- Override the config (optional) to test a different LLM
- Enter a query
- Click "Query"
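The `${OPENAI_API_KEY}` reference shown earlier relies on two steps: parsing the `.env` file and substituting `${VAR}` placeholders in config text. A stdlib sketch of the general idea (assumed behavior, not PullData's exact loader; the helper names are hypothetical):

```python
import re
from pathlib import Path

def load_env(path: str = ".env") -> dict:
    """Parse KEY=value lines; values are kept verbatim, so don't quote them."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

def expand_vars(text: str, env: dict) -> str:
    """Replace ${VAR} placeholders; unknown variables are left as-is."""
    return re.sub(r"\$\{(\w+)\}", lambda m: env.get(m.group(1), m.group(0)), text)
```

This also shows why `KEY="value"` breaks things: a naive parser keeps the quotes as part of the value.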
Using the API directly, with cURL:

```bash
# Ingest with config
curl -X POST "http://localhost:8000/ingest/upload?project=my_project&config_path=configs/lm_studio.yaml" \
  -F "files=@document.pdf"

# Query with config
curl -X POST http://localhost:8000/query \
  -H "Content-Type: application/json" \
  -d '{
    "project": "my_project",
    "query": "What are the key points?",
    "config_path": "configs/lm_studio.yaml"
  }'
```

Or with Python:

```python
import requests

# Ingest with config
files = [('files', open('doc.pdf', 'rb'))]
response = requests.post(
    "http://localhost:8000/ingest/upload",
    params={
        "project": "my_project",
        "config_path": "configs/lm_studio.yaml"
    },
    files=files
)

# Query with config
response = requests.post(
    "http://localhost:8000/query",
    json={
        "project": "my_project",
        "query": "What are the key points?",
        "config_path": "configs/lm_studio.yaml"
    }
)
```

## Troubleshooting

Config not appearing in the dropdown:

- Check the file is in the `configs/` directory
- Check the file has a `.yaml` extension
- Refresh the browser page
- Check the browser console for errors

Connection errors:

- Verify the model server is running (LM Studio, Ollama, etc.)
- Check that `base_url` is correct
- Test it: `curl http://localhost:1234/v1/models`
- Check firewall settings

API key issues:

- Ensure the `.env` file exists in the project root
- Restart the server after changing `.env`
- Don't use quotes in `.env`: `KEY=value`, not `KEY="value"`
See the `configs/` directory for examples:

- `default.yaml` - local models (requires GPU)
- `lm_studio_api_embeddings.yaml` - LM Studio API example

Or see the full documentation: `docs/CONFIG_GUIDE.md`

## Need More Help?