This tutorial walks you through installing the library, running your first query, and understanding the core concepts you will use in every interaction.
```shell
pip install horde-model-reference
```

Or with uv:

```shell
uv add horde-model-reference
```

```python
from horde_model_reference import ModelReferenceManager, MODEL_REFERENCE_CATEGORY

manager = ModelReferenceManager()
image_models = manager.get_model_reference(MODEL_REFERENCE_CATEGORY.image_generation)
print(f"Found {len(image_models)} image generation models")

for name, model in list(image_models.items())[:5]:
    print(f"  {name}: {model.baseline}")
```

On first run the manager fetches model reference data from the PRIMARY server (aihorde.net) with a GitHub fallback. Results are cached in memory with a configurable TTL (default 60 seconds).
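The primary-then-fallback fetch described above can be pictured with a small generic helper. This is a sketch of the pattern only, not the library's internal code; every name below is illustrative:

```python
from typing import Any, Callable, Sequence


def fetch_with_fallback(
    sources: Sequence[tuple[str, Callable[[], Any]]],
) -> tuple[str, Any]:
    """Try each (name, fetch) pair in order; return the first success."""
    errors: list[str] = []
    for name, fetch in sources:
        try:
            return name, fetch()
        except Exception as exc:  # any failure triggers the next source
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all sources failed: " + "; ".join(errors))


# Simulated sources: the primary fails, the fallback succeeds.
def primary() -> dict:
    raise ConnectionError("primary unreachable")


def github_fallback() -> dict:
    return {"some_model": {"baseline": "stable_diffusion_1"}}


source_name, data = fetch_with_fallback(
    [("primary", primary), ("github", github_fallback)]
)
print(source_name)  # github
```

The same shape applies whether the sources are HTTP endpoints or local files: the caller only sees the first result that succeeds.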
ModelReferenceManager is a singleton. The first instantiation locks in its configuration (backend, base path, prefetch strategy). Any subsequent call to ModelReferenceManager() returns the same instance. Attempting to create a second instance with different parameters raises RuntimeError.
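The config-locking behavior can be modeled with a tiny stand-in class. This is an illustrative sketch of the singleton pattern described above, not the library's actual implementation:

```python
class SingletonManager:
    """Sketch of a config-locking singleton (illustrative, not the real class)."""

    _instance = None
    _config = None

    def __new__(cls, **config):
        if cls._instance is None:
            # First instantiation: create the instance and lock in its config.
            cls._instance = super().__new__(cls)
            cls._config = config
        elif config and config != cls._config:
            # Conflicting re-init attempt: refuse rather than silently ignore.
            raise RuntimeError(f"already initialized with {cls._config!r}")
        return cls._instance


a = SingletonManager(prefetch="lazy")
b = SingletonManager()  # no arguments: returns the same instance, no conflict
print(a is b)  # True

try:
    SingletonManager(prefetch="sync")
except RuntimeError as exc:
    print("conflict:", exc)
```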
Correct pattern -- initialize once, reuse everywhere:
```python
# At startup
manager = ModelReferenceManager(prefetch_strategy=PrefetchStrategy.LAZY)

# Elsewhere in your code
manager = ModelReferenceManager()  # returns the same instance
```

What to avoid:

```python
# This will raise RuntimeError because prefetch_strategy differs
manager_a = ModelReferenceManager(prefetch_strategy=PrefetchStrategy.LAZY)
manager_b = ModelReferenceManager(prefetch_strategy=PrefetchStrategy.SYNC)  # RuntimeError!
```

If you need the singleton instance without risking a conflicting re-init, use:
```python
manager = ModelReferenceManager.get_instance()  # raises RuntimeError if not yet created
```

The `prefetch_strategy` parameter controls when model data is fetched from the backend:
| Strategy | Behavior | Best For |
|---|---|---|
| `LAZY` (default) | Defers fetching until you first access data | Scripts, CLIs, most consumers |
| `SYNC` | Fetches all categories immediately during init | Latency-sensitive services that are OK with blocking on startup |
| `ASYNC` | Schedules a background async fetch if an event loop is running | FastAPI / async services |
| `DEFERRED` | Creates a handle you trigger later via `handle.run_sync()` or `await handle.run_async()` | Fine-grained startup control |
| `NONE` | No automatic fetching at all; you call cache helpers manually | Testing, custom orchestration |
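The deferred-handle idea behind `DEFERRED` can be sketched generically: work is described at construction time but only performed when the caller explicitly triggers it, synchronously or from an event loop. The class below is illustrative only, not the library's actual handle type:

```python
import asyncio
from typing import Any, Callable


class DeferredHandle:
    """Sketch of a deferred-fetch handle (illustrative, not the library's type)."""

    def __init__(self, fetch: Callable[[], Any]):
        self._fetch = fetch
        self.result: Any = None

    def run_sync(self) -> Any:
        """Perform the fetch on the calling thread, blocking until done."""
        self.result = self._fetch()
        return self.result

    async def run_async(self) -> Any:
        """Run the blocking fetch in a worker thread so the event loop stays free."""
        self.result = await asyncio.to_thread(self._fetch)
        return self.result


handle = DeferredHandle(lambda: {"models": 3})
print(handle.run_sync())  # {'models': 3}
```

This gives the caller exact control over when the startup cost is paid, which is the "fine-grained startup control" the table refers to.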
```python
from horde_model_reference import ModelReferenceManager, PrefetchStrategy

# For a FastAPI app
manager = ModelReferenceManager(prefetch_strategy=PrefetchStrategy.ASYNC)

# For a CLI script
manager = ModelReferenceManager(prefetch_strategy=PrefetchStrategy.LAZY)
```

Every model in the reference belongs to a category. List them all with:
```python
from horde_model_reference import MODEL_REFERENCE_CATEGORY

for cat in MODEL_REFERENCE_CATEGORY:
    print(cat.value)
```

Categories include: `image_generation`, `text_generation`, `video_generation`, `audio_generation`, `clip`, `controlnet`, `blip`, `esrgan`, `gfpgan`, `codeformer`, `safety_checker`, `miscellaneous`.
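Accepting either an enum member or its plain-string value, as the next snippet does, is typically achieved with a string-valued enum: the member compares equal to its value. A minimal sketch (illustrative names, not the library's actual enum definition):

```python
from enum import Enum


class Category(str, Enum):
    """Sketch of a string-valued category enum (illustrative members only)."""

    image_generation = "image_generation"
    text_generation = "text_generation"


# A str-mixin enum member compares equal to its plain-string value,
# so an API can accept either form interchangeably.
print(Category.image_generation == "image_generation")  # True
print(Category("text_generation") is Category.text_generation)  # True
```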
You can use either the enum member or a plain string:

```python
# These are equivalent
models = manager.get_model_reference(MODEL_REFERENCE_CATEGORY.image_generation)
models = manager.get_model_reference("image_generation")
```

When you call `manager.get_model_reference(category)`:
- The manager checks its in-memory cache
- If stale or missing, it delegates to the backend
- In REPLICA mode (the default), the backend fetches JSON from the PRIMARY API (aihorde.net), falling back to GitHub if the PRIMARY is unavailable
- The raw JSON is validated and converted into typed Pydantic model records
- Results are cached with a TTL (default 60 seconds)
- Querying Models -- Learn the fluent query API for filtering, sorting, and aggregating models
- Working with Records -- Understand the model record types and their fields
- Configuration & Troubleshooting -- Environment variables, debugging, and common issues