```
┌─────────────────┐
│    Tauri App    │
│  (Desktop/Web)  │
└────────┬────────┘
         │
         ├──────────────────────┐
         │                      │
         ▼                      ▼
┌─────────────────┐    ┌─────────────────┐
│  Local Ollama   │    │   AI Backend    │
│   (localhost)   │    │     Service     │
└─────────────────┘    └────────┬────────┘
                                │
                     ┌──────────┴──────────┐
                     │                     │
               ┌─────▼─────┐         ┌─────▼─────┐
               │  OpenAI   │         │  Claude   │
               │    API    │         │    API    │
               └───────────┘         └───────────┘
```
- Node.js 20+ and npm
- Docker and Docker Compose (for Redis and containerized deployment)
- Rust and Cargo (for Tauri app)
- Clerk Account with application created
- API Keys for OpenAI and/or Anthropic (Claude)
- Create an account at clerk.com
- Create a new application
- Configure the user metadata schema:

  ```
  { "tier": "free" | "pro" | "enterprise" }
  ```

- Get your API keys from the Clerk dashboard
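The tier values in the metadata schema above can drive per-tier quotas in the backend. A minimal sketch — the quota numbers and the `dailyQuota` helper are illustrative, not the project's actual limits:

```typescript
// Sketch: map Clerk "tier" metadata to a daily request quota.
// Tier names come from the metadata schema above; quota numbers are
// illustrative, not the project's actual values.
type Tier = "free" | "pro" | "enterprise";

const REQUESTS_PER_DAY: Record<Tier, number> = {
  free: 50,
  pro: 1000,
  enterprise: 10000,
};

function dailyQuota(metadata: { tier?: string }): number {
  // Missing or unrecognized tiers fall back to the free quota.
  const tier = (metadata.tier ?? "free") as Tier;
  return REQUESTS_PER_DAY[tier] ?? REQUESTS_PER_DAY.free;
}
```

Defaulting unknown tiers to "free" keeps a malformed metadata record from granting unlimited access.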
Frontend (Tauri app):

```bash
cd ai-notes-app
cp .env.example .env

# Edit .env with your values:
# VITE_CLERK_PUBLISHABLE_KEY=pk_test_...
# AI_BACKEND_URL=http://localhost:3001  # or your production URL
```

Backend service:

```bash
cd ai-backend-service
cp .env.example .env

# Edit .env with your values:
# CLERK_SECRET_KEY=sk_test_...
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...
```

Start the backend:

```bash
cd ai-backend-service

# Using Docker Compose (recommended)
docker-compose up -d

# Or manually with Redis
redis-server &
npm install
npm run dev
```

Start the Tauri app:

```bash
cd ai-notes-app
npm install
npm run tauri:dev
```

- Railway.app Example:
```bash
# Install Railway CLI
npm install -g @railway/cli

# Login and initialize
railway login
railway init

# Add Redis service
railway add redis

# Deploy
railway up
```

- Environment Variables:
  - Set all required env vars in the platform's dashboard
  - Update `AI_BACKEND_URL` in your Tauri app to the production URL
- Using Docker:

```bash
# Build and push to registry
docker build -t noteraity-ai-backend .
docker tag noteraity-ai-backend:latest your-registry/noteraity-ai-backend:latest
docker push your-registry/noteraity-ai-backend:latest
```

- Using Kubernetes:
```yaml
# k8s-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ai-backend
  template:
    metadata:
      labels:
        app: ai-backend
    spec:
      containers:
        - name: ai-backend
          image: your-registry/noteraity-ai-backend:latest
          ports:
            - containerPort: 3001
          env:
            - name: NODE_ENV
              value: "production"
            # Add other env vars from ConfigMap/Secrets
```

- Server Requirements:
  - Ubuntu 20.04+ or similar
  - 2+ CPU cores, 4GB+ RAM
  - Docker and Docker Compose installed
- Deployment Steps:

```bash
# Clone the repository
git clone https://github.com/your-repo/noteraity.git
cd noteraity/ai-backend-service

# Create .env file with production values
cp .env.example .env
vim .env  # Edit with production values

# Start services
docker-compose up -d

# Set up reverse proxy (nginx example)
sudo apt install nginx
sudo vim /etc/nginx/sites-available/ai-backend
```

- Nginx Configuration:
```nginx
server {
    listen 80;
    server_name api.yourdomain.com;

    location / {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

- SSL with Let's Encrypt:
```bash
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d api.yourdomain.com
```

- Backend health: `GET http://your-backend-url/health`
- Redis status: `docker exec noteraity-redis redis-cli ping`
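A health endpoint typically reports overall status plus the state of its dependencies. A sketch of one possible payload shape — the field names here are illustrative, so check the actual handler in ai-backend-service:

```typescript
// Sketch: the JSON shape a /health endpoint might return.
// Field names are illustrative; verify against the real handler.
function healthPayload(redisOk: boolean, uptimeSec: number) {
  return {
    status: redisOk ? "ok" : "degraded",   // overall service status
    redis: redisOk ? "connected" : "unreachable",
    uptime: Math.floor(uptimeSec),         // whole seconds since start
  };
}
```

Reporting "degraded" rather than failing outright lets a load balancer keep routing read-only traffic while Redis recovers.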
```bash
# View backend logs
docker logs -f noteraity-ai-backend

# View Redis logs
docker logs -f noteraity-redis

# Scale backend service
docker-compose up -d --scale ai-backend=3

# Create backup
docker exec noteraity-redis redis-cli BGSAVE

# Copy backup file
docker cp noteraity-redis:/data/dump.rdb ./backups/dump-$(date +%Y%m%d).rdb
```
- API Keys:
  - Never commit API keys to version control
  - Use environment variables or secret management systems
  - Rotate keys regularly
- Network Security:
  - Use HTTPS in production
  - Configure CORS properly
  - Implement IP whitelisting if needed
- Rate Limiting:
  - Adjust rate limits based on your needs
  - Monitor for abuse patterns
  - Implement user-specific limits
- Authentication:
  - Keep the Clerk SDK updated
  - Implement proper session management
  - Use secure token storage
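The per-user rate-limiting guidance above can be sketched as a fixed-window counter. In production the counters would live in Redis (one key per user per window, with an expiry); an in-memory Map stands in here so the logic is easy to follow, and the limit/window values are illustrative:

```typescript
// Sketch: per-user fixed-window rate limiter. An in-memory Map stands in
// for Redis so the logic is visible; limit and window are illustrative.
class FixedWindowLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed; "now" is injectable for testing.
  allow(userId: string, now: number = Date.now()): boolean {
    const entry = this.counts.get(userId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // First request in a fresh window: reset the counter.
      this.counts.set(userId, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count < this.limit) {
      entry.count++;
      return true;
    }
    return false; // over the limit for this window
  }
}
```

Swapping the Map for Redis `INCR` plus `EXPIRE` on a per-user, per-window key gives the same semantics across multiple backend instances.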
- Connection refused to backend:
  - Check if the backend is running: `curl http://localhost:3001/health`
  - Verify CORS settings include your Tauri app origin
  - Check firewall rules
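When CORS is the culprit, the backend check usually amounts to an origin allowlist. A sketch — the exact Origin values a Tauri app sends depend on platform and configuration (e.g. `tauri://localhost` on macOS/Linux, `https://tauri.localhost` on Windows, or a localhost dev-server URL during development), so treat these entries as assumptions to verify:

```typescript
// Sketch: CORS origin allowlist check. The entries below are assumptions
// about what a Tauri app sends; verify against your Tauri configuration.
const ALLOWED_ORIGINS = new Set([
  "tauri://localhost",        // Tauri webview on macOS/Linux
  "https://tauri.localhost",  // Tauri webview on Windows
  "http://localhost:1420",    // common Vite dev-server default for Tauri
]);

function isAllowedOrigin(origin: string | undefined): boolean {
  // Requests without an Origin header (e.g. curl) are rejected here;
  // relax this if you need to allow non-browser clients.
  return origin !== undefined && ALLOWED_ORIGINS.has(origin);
}
```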
- Authentication errors:
  - Verify Clerk keys are correct
  - Check token expiration
  - Ensure the user has proper tier metadata
- Rate limiting issues:
  - Check the Redis connection
  - Verify the rate limit configuration
  - Monitor Redis memory usage
- AI provider errors:
  - Verify API keys are valid
  - Check provider service status
  - Monitor API quotas and limits
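Transient provider errors (429s and 5xx responses) are usually handled with retry plus exponential backoff. A sketch of the two pure pieces of that logic — the status classification and delay defaults are illustrative, not the project's actual policy:

```typescript
// Sketch: classify transient AI-provider errors and compute a backoff
// schedule. Status codes and delay defaults are illustrative.

// 429 (rate limited) and 5xx responses are usually worth retrying;
// 4xx client errors are not.
function isRetryable(status: number): boolean {
  return status === 429 || status >= 500;
}

// Delays double per attempt, capped so a long outage doesn't stall forever.
function backoffDelays(attempts: number, baseMs = 500, capMs = 8000): number[] {
  const delays: number[] = [];
  for (let i = 0; i < attempts; i++) {
    delays.push(Math.min(capMs, baseMs * 2 ** i));
  }
  return delays;
}
```

A caller would loop over `backoffDelays(n)`, sleeping between attempts and re-throwing once the schedule is exhausted; random jitter is usually added so concurrent clients don't retry in lockstep.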
- Caching:
  - Implement response caching for common queries
  - Use Redis for session storage
  - Cache model listings
- Connection Pooling:
  - Configure database connection pools
  - Optimize Redis connections
  - Use HTTP keep-alive
- Load Balancing:
  - Use multiple backend instances
  - Implement sticky sessions if needed
  - Configure health-check based routing
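The response-caching bullet above can be sketched as a TTL cache keyed by request (e.g. a hash of model + prompt). In production the entries would live in Redis (`SET key value EX ttl`); an in-memory Map is shown here so the expiry logic is visible, and the TTL value is illustrative:

```typescript
// Sketch: TTL response cache. An in-memory Map stands in for Redis so the
// expiry logic is visible; the TTL is illustrative.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private ttlMs: number) {}

  // "now" is injectable so expiry is testable without real clocks.
  get(key: string, now: number = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (now >= entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, now: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```

With Redis the expiry is handled server-side by `EX`, so the lazy-eviction branch disappears and the cache is shared across backend instances.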
For issues and questions:
- GitHub Issues: [your-repo/issues]
- Documentation: [your-docs-site]
- Discord/Slack: [your-community]