NC Pollbook is an exploratory Django web app for importing and analyzing North Carolina State Board of Elections (NCSBE) voter registration and history data with LLMs.
It combines a Django/PostgreSQL ETL pipeline and materialized views with a Pydantic AI SQL agent that answers analytical questions over the voter dataset in CLI and web chat interfaces.
Built with Django 6.x, PostgreSQL 18, and django-pgviews-redux for materialized views.
- Setup
- Loading Data
- Model Providers
- SQL Agent (OpenAI-Compatible API)
- SQL Agent (CLI)
- Testing and Linting
- Releasing
- Deployment
## Setup

Prerequisites: Python 3.14+, PostgreSQL 18, uv

```shell
# Install dependencies
uv sync

# Configure database (defaults to postgresql://postgres@localhost:5432/ncpollbook)
export DATABASE_URL=postgresql://user:password@localhost:5432/yourdb

# Apply migrations
uv run manage.py migrate
uv run manage.py sync_pgviews

# Create superuser (optional)
uv run manage.py createsuperuser
```

## Loading Data

```shell
# Download NCSBE files and load into PostgreSQL, then refresh materialized views
uv run manage.py ncsbe etl

# Only refresh materialized views (skip download/load)
uv run manage.py ncsbe etl --refresh-only

# Inspect the first 100 rows of each source file
uv run manage.py ncsbe peek
```

Data is cached in `scratch/data/` after the first download.
## Model Providers

Models are configured via the Django admin under **Agent > Tool Models**. Load the default fixture to get started:

```shell
uv run manage.py loaddata agent_models
```

This configures `lmstudio:mistralai/ministral-3-3b` for the `voter_agent` tool and `bedrock:us.anthropic.claude-haiku-4-5-20251001-v1:0` for `sql_gen`.
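The identifiers above follow a `provider:model` convention: everything before the first colon selects the provider, and the remainder names the model, which may itself contain colons (as in the Bedrock ID's trailing `:0`). A minimal sketch of that split, for illustration only:

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider:model' identifier at the FIRST colon only,
    since Bedrock model IDs contain additional colons (e.g. ':0')."""
    provider, _, name = model_id.partition(":")
    return provider, name

print(parse_model_id("lmstudio:mistralai/ministral-3-3b"))
print(parse_model_id("bedrock:us.anthropic.claude-haiku-4-5-20251001-v1:0"))
```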
### LM Studio

- Download and install LM Studio.
- Search for and download `mistralai/ministral-3-3b` (the default model).
- Start the local inference server (listens on `http://localhost:1234/v1`):

  ```shell
  lms server start
  ```

No API key is required — LM Studio is accessed with the placeholder key `lm-studio`.

### Amazon Bedrock

Set the bearer token before starting the server:

```shell
export AWS_BEARER_TOKEN_BEDROCK=<your-token>
```

Model names use the `bedrock:` prefix (e.g. `bedrock:us.anthropic.claude-sonnet-4-6`).

### Anthropic

Set the API key before starting the server:

```shell
export ANTHROPIC_API_KEY=<your-key>
```

Model names use the `anthropic:` prefix (e.g. `anthropic:claude-sonnet-4-6`).
## SQL Agent (OpenAI-Compatible API)

An OpenAI-compatible API (`/v1/chat/completions`, `/v1/models`) served by django-ninja and Daphne (ASGI). Point LibreChat or any OpenAI-compatible client at `http://<host>:8000/v1` with model `voter-agent`.

Optionally protect the API with an API key by setting `AGENT_API_KEY` in the environment. Clients send it as `Authorization: Bearer <key>`.
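A hedged sketch of such a request from Python, using only the standard library. The base URL and key value are illustrative, and the network call itself is left commented out since it requires the server to be running:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"  # adjust host/port for your deployment
API_KEY = "change-me"                  # illustrative; must match AGENT_API_KEY

payload = {
    "model": "voter-agent",
    "messages": [{"role": "user", "content": "How many counties have voter records?"}],
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # omit when AGENT_API_KEY is unset
    },
)
# with urllib.request.urlopen(req) as resp:  # requires the server to be running
#     print(json.load(resp)["choices"][0]["message"]["content"])
```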
```shell
# Start the async API server (serves API, Django admin, and OpenAPI docs on port 8000)
uv run manage.py runserver

# Health endpoints (no auth required)
# GET /health — liveness probe
# GET /ready — readiness probe (checks database connectivity)

# Django admin available at http://127.0.0.1:8000/admin/
# OpenAPI docs available at http://127.0.0.1:8000/api/docs
```

### LibreChat

To run LibreChat locally, follow the Local Installation - Docker guide to clone the repo. Then configure LibreChat to use the local API by creating a custom config YAML file:
```yaml
# file: librechat.yaml
version: 1.3.7
cache: true
endpoints:
  custom:
    - name: "Django-Backend"
      # The URL where your Django app serves the OpenAI-compatible API
      baseURL: "http://host.docker.internal:8000/v1"
      # Reference a variable in your .env file
      apiKey: "apikey"
      models:
        # List the models your Django app supports
        default: ["voter-agent"]
        # Set to true if your Django app has a /models endpoint
        fetch: true
      titleConvo: true
      modelDisplayLabel: "NC Voter Data Agent"
```

Next, create a compose override file to mount the config into the container:
```yaml
# file: docker-compose.override.yml
services:
  api:
    volumes:
      - type: bind
        source: ./librechat.yaml
        target: /app/librechat.yaml
```

Start the server with the override:

```shell
docker compose up -d
```

Now open the LibreChat UI at http://localhost:3080/login. As noted in the docs, the first account you register becomes the admin account; there are no default credentials.
## SQL Agent (CLI)

A terminal alternative to the web UI with step-by-step output and a per-run summary of model response time, tool execution time, and tokens/second.

```shell
# Interactive mode — type questions at the prompt, `quit` to exit
uv run manage.py agent cli

# Single-question mode
uv run manage.py agent cli -q "how many active voters are in Durham County?"

# Inspect agent system prompts
uv run manage.py agent prompts
uv run manage.py agent prompts --name sql_gen
uv run manage.py agent prompts --name voter
```

Each run prints:
- Thinking panels — the model's internal reasoning (when supported by the model)
- → tool_name(args) — each tool call as it is issued
- ↩ tool result — a truncated preview of each tool response
- Answer — the final markdown answer
- Run summary table — step name, elapsed time, input/output tokens, and tokens/second
Sample questions:
- How many people are registered to vote in Durham County?
- What is the breakdown of party affiliation among voters aged 18–25 vs 65+?
- What percentage of voters who voted in the 2020 General also voted in the 2022 Primary?
The agent has two tools:

- `run_sql_query` — generates and executes a SQL query and returns a markdown table
- `run_python_code` — executes LLM-written Python in a secure Monty sandbox, with `run_sql_query` available for chaining multiple queries
## Testing and Linting

```shell
# Run tests (LLM evals skipped by default)
uv run pytest

# Run only LLM-judge evals (requires ollama or another local model running)
uv run pytest -m llm

# Run all tests including LLM evals
uv run pytest -m ''

# Run linters
uv run pre-commit run --all-files
```
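The `-m` flags above use standard pytest marker selection. A hedged sketch of how such gating is typically wired (the test name here is illustrative, not copied from the repo): evals carry a marker, `-m llm` selects only marked tests, and `-m ''` clears any default filter so everything runs.

```python
import pytest

# An eval that calls a live model would carry the `llm` marker so it is
# easy to select (`-m llm`) or deselect by default.
@pytest.mark.llm
def test_llm_judge_scores_answer():
    ...

def has_llm_marker(fn) -> bool:
    # Inspect the marks pytest attached to the function object.
    return any(m.name == "llm" for m in getattr(fn, "pytestmark", []))
```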
## Releasing

1. Bump the version in `pyproject.toml`:

   ```shell
   uv version --bump patch  # or --bump minor/major
   ```

2. Create a new release on GitHub:
   - Tag the version (e.g., `v0.2.1`)
   - Add the release notes
   - Publish the release

When the release is published, CI automatically builds and publishes a new Docker image.
## Deployment

See DEPLOYMENT.md for Kubernetes/Ansible deployment instructions.
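The `/health` and `/ready` endpoints described earlier map naturally onto Kubernetes probes. An illustrative container-spec fragment; the port and timings here are assumptions, not taken from the actual manifests in DEPLOYMENT.md:

```yaml
# Illustrative probe wiring for the app container
livenessProbe:
  httpGet:
    path: /health      # liveness: process is up
    port: 8000
readinessProbe:
  httpGet:
    path: /ready       # readiness: database connectivity verified
    port: 8000
  periodSeconds: 10
```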