Custom n8n community nodes providing OpenAI-compatible Chat Model and Embeddings interfaces with optional mTLS (mutual TLS) authentication using X.509 certificates.
TOADSTAR provides two custom n8n nodes:
- TOADSTAR Chat Model — A LangChain-compatible chat model node for use in AI Agent blocks. Wraps any OpenAI-compatible API endpoint (Ollama, vLLM, LiteLLM, etc.).
- TOADSTAR Embeddings — A LangChain-compatible embeddings node for use in RAG pipelines (insert and retrieve data blocks). Works with any OpenAI-compatible embeddings endpoint.
Both nodes support:
- ✅ Standard OpenAI API interface
- ✅ Optional mTLS client certificate authentication (via n8n's built-in SSL credentials)
- ✅ Custom HTTP headers
- ✅ Configurable model parameters
```
┌─────────────────────┐      ┌──────────────────┐      ┌─────────────┐
│  n8n (custom img)   │────▶│  nginx (mTLS)    │────▶│   Ollama    │
│  with TOADSTAR      │      │  reverse proxy   │      │ qwen2.5:3b  │
│  nodes installed    │      │  port 8443       │      │ nomic-embed │
│  port 5678          │      │                  │      │ port 11434  │
└─────────────────────┘      └──────────────────┘      └─────────────┘
```
- n8n runs with TOADSTAR nodes pre-installed
- nginx performs mTLS termination — requires valid client certificates
- Ollama serves AI models (not exposed externally)
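The mTLS termination on the proxy amounts to an nginx server block along these lines. This is an illustrative sketch only — the certificate paths inside the container are assumptions; the actual config lives in `docker/nginx/`:

```nginx
server {
    listen 8443 ssl;

    # Server identity presented to n8n (CN=ollama-proxy)
    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # Require a client certificate signed by the TOADSTAR CA —
    # connections without a valid client cert are rejected at handshake
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://ollama:11434;
    }
}
```

`ssl_verify_client on` is what makes the proxy a hard mTLS boundary: Ollama itself stays unauthenticated but unreachable except through nginx.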
- Docker & Docker Compose v2+
- OpenSSL (for certificate generation)
- ~5GB disk space (for AI models)
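A quick way to sanity-check these prerequisites from a shell (command names only; versions are not pinned):

```shell
# Fail-soft prerequisite check: report anything missing rather than aborting
for cmd in docker openssl; do
  command -v "$cmd" >/dev/null 2>&1 || echo "missing: $cmd"
done

# Compose v2 ships as a docker subcommand ("docker compose", not docker-compose)
docker compose version >/dev/null 2>&1 || echo "missing: docker compose v2"

# Eyeball free space for the ~5GB of models
df -h . | tail -n 1
```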
```bash
./scripts/setup.sh
```

This will:
- Generate mTLS certificates (CA, server, client)
- Build the custom n8n Docker image with TOADSTAR nodes
- Start all services (n8n, Ollama, nginx proxy)
- Pull AI models (`qwen2.5:3b` and `nomic-embed-text`)
- Verify mTLS connectivity
Open http://localhost:5678 in your browser.
In n8n, go to Credentials → Add Credential → search for TOADSTAR API:
| Setting | Value |
|---|---|
| Base URL | https://ollama-proxy:8443/v1 |
| API Key | ollama (any non-empty string works) |
| Custom Headers | (optional) |
Go to Credentials → Add Credential → search for SSL Certificates:
Paste the contents of the generated certificate files:
```bash
# CA Certificate
cat certs/output/ca.crt

# Client Certificate
cat certs/output/client.crt

# Client Key
cat certs/output/client.key
```

- Add an AI Agent node to your workflow
- Connect a TOADSTAR Chat Model sub-node to it
- Select your TOADSTAR API credential
- Enable Provide SSL Certificates and select your SSL credential
- Set model to `qwen2.5:3b` (or any Ollama model)
- Add a Vector Store node (insert or retrieve)
- Connect a TOADSTAR Embeddings sub-node to it
- Select your TOADSTAR API credential
- Enable Provide SSL Certificates and select your SSL credential
- Set model to `nomic-embed-text`
```
vibe-n8n/
├── n8n-nodes-toadstar/        # Custom n8n node package
│   ├── credentials/           # TOADSTAR API credential type
│   ├── nodes/
│   │   ├── ToadstarChatModel/ # Chat model node
│   │   ├── ToadstarEmbeddings/ # Embeddings node
│   │   └── utils/             # Shared mTLS helper
│   ├── package.json
│   └── tsconfig.json
├── docker/
│   ├── docker-compose.yml     # Full stack orchestration
│   ├── Dockerfile.n8n         # Custom n8n image
│   ├── nginx/                 # mTLS reverse proxy config
│   └── ollama/                # Model pull entrypoint
├── certs/
│   ├── generate-certs.sh      # Certificate generation
│   └── output/                # Generated certificates
├── scripts/
│   └── setup.sh               # One-command setup
├── plans/
│   └── architecture.md        # Detailed architecture docs
└── README.md
```
To regenerate certificates:

```bash
./certs/generate-certs.sh --force
```

| Certificate | CN | Purpose | Location |
|---|---|---|---|
| CA | TOADSTAR CA | Root trust anchor | certs/output/ca.crt |
| Server | ollama-proxy | nginx TLS server identity | certs/output/server.crt |
| Client | n8n-client | n8n mTLS client auth | certs/output/client.crt |
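The trust relationships in the table can be reproduced in miniature with plain OpenSSL. This is an illustrative sketch in a throwaway directory — not the actual contents of `certs/generate-certs.sh`, whose flags and key sizes may differ:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# 1. Self-signed CA: the root trust anchor both sides rely on
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=TOADSTAR CA" -days 365

# 2. Client key + CSR carrying the CN nginx will see during mTLS
openssl req -newkey rsa:2048 -nodes -keyout client.key -out client.csr \
  -subj "/CN=n8n-client"

# 3. Sign the CSR with the CA to produce the client certificate
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 365

# 4. Confirm the chain validates — the same check nginx performs
#    on every handshake; prints "client.crt: OK"
openssl verify -CAfile ca.crt client.crt
```

The server certificate follows the same CSR-and-sign pattern, just with `CN=ollama-proxy`.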
```bash
curl --cacert certs/output/ca.crt \
     --cert certs/output/client.crt \
     --key certs/output/client.key \
     https://localhost:8443/v1/models
```

```bash
cd docker

# Start all services
docker compose up -d

# View logs
docker compose logs -f

# View specific service logs
docker compose logs -f n8n
docker compose logs -f ollama
docker compose logs -f ollama-proxy

# Stop all services
docker compose down

# Rebuild n8n (after node changes)
docker compose build n8n && docker compose up -d n8n

# Pull additional Ollama models
docker exec toadstar-ollama ollama pull llama3

# Check service health
docker compose ps
```

Edit `docker/ollama/entrypoint.sh` or set the `OLLAMA_MODELS` environment variable:
```yaml
# In docker-compose.yml
ollama:
  environment:
    - OLLAMA_MODELS=llama3,mistral,nomic-embed-text
```

In the TOADSTAR API credential, add headers:
- `X-Custom-Auth: your-token`
- `X-Request-Source: n8n-toadstar`
To use without mTLS, configure the TOADSTAR API credential with Ollama directly:
- Base URL: `http://ollama:11434/v1`
- Leave Provide SSL Certificates disabled
```bash
cd n8n-nodes-toadstar
npm install
npm run build
```

For development:

```bash
cd n8n-nodes-toadstar
npm run dev
```

- Ensure certificates are generated: `ls certs/output/`
- Regenerate if expired: `./certs/generate-certs.sh --force`
- Restart the proxy: `cd docker && docker compose restart ollama-proxy`
- Check Ollama logs: `docker compose logs ollama`
- Manually pull: `docker exec toadstar-ollama ollama pull qwen2.5:3b`
- Rebuild the n8n image: `cd docker && docker compose build n8n`
- Check the build log for errors
- Verify the custom extension path is set
MIT