Deployment Guide for Noteraity with AI Backend Service

Architecture Overview

┌─────────────────┐
│   Tauri App     │
│  (Desktop/Web)  │
└────────┬────────┘
         │
         ├──────────────────────┐
         │                      │
         ▼                      ▼
┌─────────────────┐    ┌─────────────────┐
│  Local Ollama   │    │  AI Backend     │
│   (localhost)   │    │    Service      │
└─────────────────┘    └────────┬────────┘
                               │
                    ┌──────────┴──────────┐
                    │                     │
              ┌─────▼─────┐        ┌─────▼─────┐
              │  OpenAI   │        │  Claude   │
              │    API    │        │    API    │
              └───────────┘        └───────────┘

Prerequisites

  1. Node.js 20+ and npm
  2. Docker and Docker Compose (for Redis and containerized deployment)
  3. Rust and Cargo (for Tauri app)
  4. Clerk account with an application created
  5. API Keys for OpenAI and/or Anthropic (Claude)

Configuration Steps

1. Set up Clerk Authentication

  1. Create an account at clerk.com
  2. Create a new application
  3. Configure user metadata schema:
    {
      "tier": "free" | "pro" | "enterprise"
    }
  4. Get your API keys from the Clerk dashboard
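Because the tier lives in user metadata, it can also be set from a script via Clerk's Backend API rather than the dashboard. A sketch (the user ID below is a placeholder; the PATCH merges the given keys into the user's existing metadata):

```shell
# Hypothetical user ID -- replace with a real one from the Clerk dashboard.
USER_ID="user_xxxxxxxxxxxx"
PAYLOAD='{"public_metadata": {"tier": "pro"}}'

# Only call the API when a secret key is actually present in the environment.
if [ -n "${CLERK_SECRET_KEY:-}" ]; then
  curl -s -X PATCH "https://api.clerk.com/v1/users/${USER_ID}/metadata" \
    -H "Authorization: Bearer ${CLERK_SECRET_KEY}" \
    -H "Content-Type: application/json" \
    -d "${PAYLOAD}"
fi
```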

2. Configure Environment Variables

Frontend (.env)

cd ai-notes-app
cp .env.example .env
# Edit .env with your values:
# VITE_CLERK_PUBLISHABLE_KEY=pk_test_...
# AI_BACKEND_URL=http://localhost:3001  # or your production URL

AI Backend Service (.env)

cd ai-backend-service
cp .env.example .env
# Edit .env with your values:
# CLERK_SECRET_KEY=sk_test_...
# OPENAI_API_KEY=sk-...
# ANTHROPIC_API_KEY=sk-ant-...

3. Local Development Setup

Start the AI Backend Service

cd ai-backend-service

# Using Docker Compose (recommended)
docker-compose up -d

# Or manually with Redis
redis-server &
npm install
npm run dev

Start the Tauri Application

cd ai-notes-app
npm install
npm run tauri:dev
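With both processes up, a quick smoke test confirms the backend and Redis are reachable. A sketch assuming the default port and the container name from docker-compose.yml:

```shell
# Print OK/FAIL per check without aborting on the first failure.
check() { "$@" >/dev/null 2>&1 && echo "OK: $*" || echo "FAIL: $*"; }

check curl -sf http://localhost:3001/health
check docker exec noteraity-redis redis-cli ping
```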

Production Deployment

Option 1: Deploy Backend to Cloud (Recommended)

Deploy to Railway/Render/Fly.io

  1. Railway.app Example:
# Install Railway CLI
npm install -g @railway/cli

# Login and initialize
railway login
railway init

# Add Redis service
railway add redis

# Deploy
railway up
  2. Environment Variables:
    • Set all required env vars in the platform's dashboard
    • Update AI_BACKEND_URL in your Tauri app to production URL

Deploy to AWS/GCP/Azure

  1. Using Docker:
# Build and push to registry
docker build -t noteraity-ai-backend .
docker tag noteraity-ai-backend:latest your-registry/noteraity-ai-backend:latest
docker push your-registry/noteraity-ai-backend:latest
  2. Using Kubernetes:
# k8s-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ai-backend
  template:
    metadata:
      labels:
        app: ai-backend
    spec:
      containers:
      - name: ai-backend
        image: your-registry/noteraity-ai-backend:latest
        ports:
        - containerPort: 3001
        env:
        - name: NODE_ENV
          value: "production"
        # Add other env vars from ConfigMap/Secrets
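The remaining env vars can come from a Secret, as the comment above suggests. A minimal sketch (the Secret name and key values are placeholders; `stringData` accepts plain text and Kubernetes base64-encodes it on write):

```yaml
# k8s-secrets.yaml -- hypothetical Secret holding the backend's keys
apiVersion: v1
kind: Secret
metadata:
  name: ai-backend-secrets
type: Opaque
stringData:
  CLERK_SECRET_KEY: sk_test_replace-me
  OPENAI_API_KEY: sk-replace-me
  ANTHROPIC_API_KEY: sk-ant-replace-me
```

Reference it from the container spec with `envFrom: [{secretRef: {name: ai-backend-secrets}}]` so every key becomes an environment variable.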

Option 2: Self-Hosted Deployment

  1. Server Requirements:

    • Ubuntu 20.04+ or similar
    • 2+ CPU cores, 4GB+ RAM
    • Docker and Docker Compose installed
  2. Deployment Steps:

# Clone the repository
git clone https://github.com/your-repo/noteraity.git
cd noteraity/ai-backend-service

# Create .env file with production values
cp .env.example .env
vim .env  # Edit with production values

# Start services
docker-compose up -d

# Set up reverse proxy (nginx example)
sudo apt install nginx
sudo vim /etc/nginx/sites-available/ai-backend
  3. Nginx Configuration:
server {
    listen 80;
    server_name api.yourdomain.com;

    location / {
        proxy_pass http://localhost:3001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
  4. SSL with Let's Encrypt:
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d api.yourdomain.com

Monitoring and Maintenance

Health Checks

  • Backend health: GET http://your-backend-url/health
  • Redis status: docker exec noteraity-redis redis-cli ping
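For scripts and CI, a one-shot check is flaky right after a deploy; a small retry wrapper helps. A sketch (the attempt count and 2-second delay are arbitrary):

```shell
# Retry a command up to N times with a 2s pause; return 0 on first success.
wait_for() {
  local attempts=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep 2
  done
  return 1
}

# Example: wait_for 10 curl -sf http://your-backend-url/health
```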

Logging

# View backend logs
docker logs -f noteraity-ai-backend

# View Redis logs
docker logs -f noteraity-redis

Scaling

# Scale backend service
docker-compose up -d --scale ai-backend=3
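Note that multiple replicas cannot all publish the same host port, so scaled instances normally sit behind a reverse proxy that spreads requests across them. A minimal nginx upstream sketch (the extra ports are hypothetical and depend on how your compose file maps them):

```nginx
upstream ai_backend {
    least_conn;                 # route new requests to the least-busy replica
    server localhost:3001;
    server localhost:3002;
    server localhost:3003;
}

server {
    listen 80;
    location / {
        proxy_pass http://ai_backend;
    }
}
```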

Backup Redis Data

# Create backup
docker exec noteraity-redis redis-cli BGSAVE

# Copy backup file
docker cp noteraity-redis:/data/dump.rdb ./backups/dump-$(date +%Y%m%d).rdb
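To keep the backup directory from growing without bound, rotate old dumps after each copy. A sketch (the 7-file retention and the ./backups layout are arbitrary choices):

```shell
# Delete all but the 7 newest dump files in the given directory.
# xargs -r skips the rm entirely when there is nothing to delete.
rotate_backups() {
  ls -1t "$1"/dump-*.rdb 2>/dev/null | tail -n +8 | xargs -r rm --
}

rotate_backups ./backups
```

Pair this with a scheduled job, e.g. a crontab line such as `0 3 * * * /path/to/backup.sh` (path hypothetical), so backup and rotation run nightly.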

Security Considerations

  1. API Keys:

    • Never commit API keys to version control
    • Use environment variables or secret management systems
    • Rotate keys regularly
  2. Network Security:

    • Use HTTPS in production
    • Configure CORS properly
    • Implement IP whitelisting if needed
  3. Rate Limiting:

    • Adjust rate limits based on your needs
    • Monitor for abuse patterns
    • Implement user-specific limits
  4. Authentication:

    • Keep Clerk SDK updated
    • Implement proper session management
    • Use secure token storage

Troubleshooting

Common Issues

  1. Connection refused to backend:

    • Check if backend is running: curl http://localhost:3001/health
    • Verify CORS settings include your Tauri app origin
    • Check firewall rules
  2. Authentication errors:

    • Verify Clerk keys are correct
    • Check token expiration
    • Ensure user has proper tier metadata
  3. Rate limiting issues:

    • Check Redis connection
    • Verify rate limit configuration
    • Monitor Redis memory usage
  4. AI provider errors:

    • Verify API keys are valid
    • Check provider service status
    • Monitor API quotas and limits

Performance Optimization

  1. Caching:

    • Implement response caching for common queries
    • Use Redis for session storage
    • Cache model listings
  2. Connection Pooling:

    • Configure database connection pools
    • Optimize Redis connections
    • Use HTTP keep-alive
  3. Load Balancing:

    • Use multiple backend instances
    • Implement sticky sessions if needed
    • Configure health-check based routing

Support

For issues and questions:

  • GitHub Issues: [your-repo/issues]
  • Documentation: [your-docs-site]
  • Discord/Slack: [your-community]