This section provides comprehensive documentation for configuring QDrant Loader. Learn how to set up data sources, optimize performance, configure security, and customize behavior for your specific needs.
- Environment variables: Environment Variables Reference
- LLM providers and model mapping: LLM Provider Guide
- Full YAML schema: Configuration File Reference
- Security practices: Security Considerations
- Runtime flags and setup modes: CLI Commands
- Workspace-vs-traditional config loading: Workspace Mode
- New workspace in minutes: use Quick Start
- Minimal first config: use Basic Configuration
- Team or production rollout: use Configuration File Reference and Security Considerations
- Troubleshoot config or env issues: use Troubleshooting
```text
your-workspace/
├── config.yaml              # Main configuration file
├── .env                     # Environment variables
├── data/qdrant-loader.db    # Processing state (auto-generated)
└── logs/                    # Log files (optional)
```
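The layout above can be scaffolded with a few lines of code. A minimal sketch (the `scaffold_workspace` helper is hypothetical, not part of QDrant Loader — the CLI's own init command may create this structure for you):

```python
from pathlib import Path

def scaffold_workspace(root: str) -> None:
    """Create the minimal workspace layout shown above."""
    base = Path(root)
    (base / "data").mkdir(parents=True, exist_ok=True)  # holds qdrant-loader.db
    (base / "logs").mkdir(exist_ok=True)                # optional log files
    (base / "config.yaml").touch(exist_ok=True)         # main configuration
    (base / ".env").touch(exist_ok=True)                # environment variables

scaffold_workspace("your-workspace")
```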
Use this minimal pair (`.env` plus `config.yaml`) as a baseline, then extend it using the references above.
`.env`:

```bash
QDRANT_URL=http://localhost:6333
QDRANT_COLLECTION_NAME=documents
LLM_PROVIDER=openai
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=your-openai-key
LLM_EMBEDDING_MODEL=text-embedding-3-small
LLM_CHAT_MODEL=gpt-4o-mini
```

`config.yaml`:

```yaml
global:
  qdrant:
    url: "${QDRANT_URL}"
    collection_name: "${QDRANT_COLLECTION_NAME}"
  llm:
    provider: "${LLM_PROVIDER}"
    base_url: "${LLM_BASE_URL}"
    api_key: "${LLM_API_KEY}"
    models:
      embeddings: "${LLM_EMBEDDING_MODEL}"
      chat: "${LLM_CHAT_MODEL}"
    embeddings:
      vector_size: 1536

projects:
  default:
    project_id: "default"
    display_name: "Default"
    sources:
      localfile:
        docs:
          base_url: "file://./docs"
          include_paths:
            - "**/*.md"
```
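The `${VAR}` placeholders in `config.yaml` are resolved from the environment (the loader reads `.env` alongside the config). A minimal sketch of that substitution mechanism, for illustration only — QDrant Loader performs this expansion itself, and its handling of unset variables may differ:

```python
import os
import re

def expand_env(text: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables are left untouched in this sketch.
    """
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        text,
    )

os.environ["QDRANT_URL"] = "http://localhost:6333"
print(expand_env('url: "${QDRANT_URL}"'))  # → url: "http://localhost:6333"
```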
- `qdrant-loader config --workspace .` loads without errors
- Required env vars are set for your chosen provider
- At least one project and one source are configured
- QDrant URL and collection name are valid
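The env-var check in that list is easy to automate. A minimal preflight sketch — the `REQUIRED` list below is taken from the example `.env` above; the variables actually required depend on your chosen provider:

```python
import os

# Variables used by the example .env above (adjust for your provider).
REQUIRED = [
    "QDRANT_URL",
    "QDRANT_COLLECTION_NAME",
    "LLM_PROVIDER",
    "LLM_API_KEY",
    "LLM_EMBEDDING_MODEL",
    "LLM_CHAT_MODEL",
]

def missing_vars(required=REQUIRED) -> list[str]:
    """Return the names of required variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

gaps = missing_vars()
print("OK" if not gaps else f"Missing: {', '.join(gaps)}")
```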