TOADSTAR - n8n Custom Nodes with mTLS Support

Custom n8n community nodes providing OpenAI-compatible Chat Model and Embeddings interfaces with optional mTLS (mutual TLS) authentication using x509 certificates.

Overview

TOADSTAR provides two custom n8n nodes:

  • TOADSTAR Chat Model — A LangChain-compatible chat model node for use in AI Agent blocks. Wraps any OpenAI-compatible API endpoint (Ollama, vLLM, LiteLLM, etc.).
  • TOADSTAR Embeddings — A LangChain-compatible embeddings node for use in RAG pipelines (insert and retrieve data blocks). Works with any OpenAI-compatible embeddings endpoint.

Both nodes support:

  • ✅ Standard OpenAI API interface
  • ✅ Optional mTLS client certificate authentication (via n8n's built-in SSL credentials)
  • ✅ Custom HTTP headers
  • ✅ Configurable model parameters

Architecture

┌─────────────────────┐     ┌──────────────────┐     ┌─────────────┐
│   n8n (custom img)  │────▶│  nginx (mTLS)    │────▶│   Ollama    │
│   with TOADSTAR     │     │  reverse proxy   │     │  qwen2.5:3b │
│   nodes installed   │     │  port 8443       │     │  nomic-embed│
│   port 5678         │     │                  │     │  port 11434 │
└─────────────────────┘     └──────────────────┘     └─────────────┘
  • n8n runs with TOADSTAR nodes pre-installed
  • nginx performs mTLS termination — requires valid client certificates
  • Ollama serves AI models (not exposed externally)

Quick Start

Prerequisites

  • Docker & Docker Compose v2+
  • OpenSSL (for certificate generation)
  • ~5GB disk space (for AI models)

One-Command Setup

./scripts/setup.sh

This will:

  1. Generate mTLS certificates (CA, server, client)
  2. Build the custom n8n Docker image with TOADSTAR nodes
  3. Start all services (n8n, Ollama, nginx proxy)
  4. Pull AI models (qwen2.5:3b and nomic-embed-text)
  5. Verify mTLS connectivity

Access n8n

Open http://localhost:5678 in your browser.

Configuration

Step 1: Create TOADSTAR API Credential

In n8n, go to Credentials → Add Credential → search for TOADSTAR API:

Setting          Value
Base URL         https://ollama-proxy:8443/v1
API Key          ollama (any non-empty string works)
Custom Headers   (optional)

Step 2: Create SSL Certificate Credential

Go to Credentials → Add Credential → search for SSL Certificates:

Paste the contents of the generated certificate files:

# CA Certificate
cat certs/output/ca.crt

# Client Certificate
cat certs/output/client.crt

# Client Key
cat certs/output/client.key

Step 3: Use TOADSTAR Nodes in Workflows

Chat Model (AI Agent)

  1. Add an AI Agent node to your workflow
  2. Connect a TOADSTAR Chat Model sub-node to it
  3. Select your TOADSTAR API credential
  4. Enable Provide SSL Certificates and select your SSL credential
  5. Set model to qwen2.5:3b (or any Ollama model)

Embeddings (RAG Pipeline)

  1. Add a Vector Store node (insert or retrieve)
  2. Connect a TOADSTAR Embeddings sub-node to it
  3. Select your TOADSTAR API credential
  4. Enable Provide SSL Certificates and select your SSL credential
  5. Set model to nomic-embed-text
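Under the hood, both nodes speak the standard OpenAI wire format to the configured base URL. The request bodies below are illustrative sketches (model names come from the steps above; the field names are the standard OpenAI API ones, not copied from this repo's code):

```python
# Illustrative OpenAI-compatible request bodies the two nodes send.
# Endpoint paths are the standard OpenAI ones:
#   /v1/chat/completions and /v1/embeddings.

chat_request = {
    "model": "qwen2.5:3b",  # any model Ollama has pulled
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize mTLS in one sentence."},
    ],
}

embeddings_request = {
    "model": "nomic-embed-text",
    "input": ["first text chunk", "second text chunk"],  # batched inputs
}
```

Any backend that accepts these shapes (Ollama, vLLM, LiteLLM, etc.) should work behind the proxy.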

Project Structure

vibe-n8n/
├── n8n-nodes-toadstar/          # Custom n8n node package
│   ├── credentials/             # TOADSTAR API credential type
│   ├── nodes/
│   │   ├── ToadstarChatModel/   # Chat model node
│   │   ├── ToadstarEmbeddings/  # Embeddings node
│   │   └── utils/               # Shared mTLS helper
│   ├── package.json
│   └── tsconfig.json
├── docker/
│   ├── docker-compose.yml       # Full stack orchestration
│   ├── Dockerfile.n8n           # Custom n8n image
│   ├── nginx/                   # mTLS reverse proxy config
│   └── ollama/                  # Model pull entrypoint
├── certs/
│   ├── generate-certs.sh        # Certificate generation
│   └── output/                  # Generated certificates
├── scripts/
│   └── setup.sh                 # One-command setup
├── plans/
│   └── architecture.md          # Detailed architecture docs
└── README.md

Certificate Management

Regenerate Certificates

./certs/generate-certs.sh --force

Certificate Details

Certificate   CN             Purpose                      Location
CA            TOADSTAR CA    Root trust anchor            certs/output/ca.crt
Server        ollama-proxy   nginx TLS server identity    certs/output/server.crt
Client        n8n-client     n8n mTLS client auth         certs/output/client.crt

Manual mTLS Test

curl --cacert certs/output/ca.crt \
     --cert certs/output/client.crt \
     --key certs/output/client.key \
     https://localhost:8443/v1/models
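The same check can be scripted from Python with the `requests` library — a sketch, assuming the default certs/output/ layout and that the stack is running:

```python
import requests

# TLS material from the generated certs/output/ directory.
MTLS_KWARGS = {
    "verify": "certs/output/ca.crt",  # trust the TOADSTAR CA for the server cert
    "cert": ("certs/output/client.crt", "certs/output/client.key"),  # client auth
}

def list_models(base_url="https://localhost:8443"):
    """GET /v1/models through the nginx mTLS proxy."""
    resp = requests.get(f"{base_url}/v1/models", timeout=10, **MTLS_KWARGS)
    resp.raise_for_status()
    return resp.json()
```

Without the `cert` pair, nginx should reject the handshake; without `verify`, the self-signed server certificate will fail validation.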

Docker Commands

cd docker

# Start all services
docker compose up -d

# View logs
docker compose logs -f

# View specific service logs
docker compose logs -f n8n
docker compose logs -f ollama
docker compose logs -f ollama-proxy

# Stop all services
docker compose down

# Rebuild n8n (after node changes)
docker compose build n8n && docker compose up -d n8n

# Pull additional Ollama models
docker exec toadstar-ollama ollama pull llama3

# Check service health
docker compose ps

Customization

Using Different Models

Edit docker/ollama/entrypoint.sh or set the OLLAMA_MODELS environment variable:

# In docker-compose.yml
ollama:
  environment:
    - OLLAMA_MODELS=llama3,mistral,nomic-embed-text
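An entrypoint can turn that comma-separated value into individual `ollama pull` targets. A minimal sketch (the variable name comes from the compose snippet above; the exact splitting logic in this repo's script is an assumption):

```python
def models_to_pull(env_value: str) -> list[str]:
    """Split a comma-separated model list, dropping blanks and whitespace."""
    return [m.strip() for m in env_value.split(",") if m.strip()]

# e.g. with the value from the compose snippet above:
models_to_pull("llama3,mistral,nomic-embed-text")
```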

Adding Custom Headers

In the TOADSTAR API credential, add headers:

  • X-Custom-Auth: your-token
  • X-Request-Source: n8n-toadstar

Disabling mTLS (Plain HTTP)

To use without mTLS, configure the TOADSTAR API credential with Ollama directly:

  • Base URL: http://ollama:11434/v1
  • Leave Provide SSL Certificates disabled

Development

Building the Node Package Locally

cd n8n-nodes-toadstar
npm install
npm run build

Watching for Changes

cd n8n-nodes-toadstar
npm run dev

Troubleshooting

"SSL certificate problem" errors

  • Ensure certificates are generated: ls certs/output/
  • Regenerate if expired: ./certs/generate-certs.sh --force
  • Restart the proxy: cd docker && docker compose restart ollama-proxy

Models not loading

  • Check Ollama logs: docker compose logs ollama
  • Manually pull: docker exec toadstar-ollama ollama pull qwen2.5:3b

TOADSTAR nodes not visible in n8n

  • Rebuild the n8n image: cd docker && docker compose build n8n
  • Check the build log for errors
  • Verify the custom extension path is set

License

MIT
