
ByeBrief Installation Guide

System Requirements

  • Node.js: Version 18 or higher
  • Operating System: Windows 10+, macOS 10.15+, Linux (Ubuntu 20.04+)
  • RAM: 4GB minimum (8GB recommended for AI features)
  • Storage: 200MB for application, plus local model storage

Installation Steps

1. Clone and Install Dependencies

git clone <repository-url>
cd ByeBrief
npm install

2. Run Development Server

npm run dev

The application will start on http://localhost:5173 (or next available port).

3. Build for Production

npm run build

Output will be in the dist/ directory.

4. Preview Production Build

npm run preview

Ollama Setup (Optional - AI Features)

ByeBrief uses Ollama for local AI capabilities. Ollama is optional: the core application runs without it, but AI features such as legal-grade reports and analysis require it.

Installation

macOS/Linux:

curl -fsSL https://ollama.com/install.sh | sh

Windows: Download from https://ollama.com/download/windows

Start Ollama Service

ollama serve

Pull a Model

# Minimum model for basic features
ollama pull llama3.2

# Recommended for legal analysis
ollama pull gemma3

Verify Ollama is Running

curl http://localhost:11434/api/version

Expected response (the version number will vary with your install):

{
  "version": "0.5.4"
}
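The same check can be scripted from the Node side. A minimal sketch, assuming only the `/api/version` response shape shown above; the `parseOllamaVersion` helper name is illustrative:

```javascript
// Parse and validate an /api/version response body.
// Returns the version string, or null if the shape is unexpected.
function parseOllamaVersion(body) {
  try {
    const data = JSON.parse(body);
    return typeof data.version === "string" ? data.version : null;
  } catch {
    return null;
  }
}

// Usage (requires Ollama running locally):
// fetch("http://localhost:11434/api/version")
//   .then((res) => res.text())
//   .then((body) => console.log(parseOllamaVersion(body)));
```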

Health Check

curl -X POST http://localhost:11434/api/generate \
  -d '{
    "model": "llama3.2",
    "prompt": "Hello",
    "stream": false
  }'

The JSON response includes a "response" field containing the model's generated text.
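The same health check can be issued from application code with `fetch`. A minimal sketch assuming only the `/api/generate` request and response shapes shown above; the helper names are illustrative:

```javascript
// Build the JSON body for a non-streaming /api/generate request.
function buildGenerateRequest(model, prompt) {
  return JSON.stringify({ model, prompt, stream: false });
}

// Extract the generated text from a parsed /api/generate response.
// Returns null if the "response" field is absent or not a string.
function extractResponseText(data) {
  return typeof data.response === "string" ? data.response : null;
}

// Usage (requires Ollama running locally):
// fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: buildGenerateRequest("llama3.2", "Hello"),
// })
//   .then((res) => res.json())
//   .then((data) => console.log(extractResponseText(data)));
```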

Configure ByeBrief

  1. Click the gear icon (Settings) in the bottom toolbar
  2. Go to the AI Model tab
  3. Set Base URL: http://localhost:11434
  4. Select your model (e.g., llama3.2 or gemma3)
  5. Adjust settings (temperature, max tokens) as desired
  6. Click Test Connection to verify
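A "Test Connection" check typically amounts to asking the server which models it has pulled. A minimal sketch, assuming Ollama's standard `/api/tags` endpoint; the `listModelNames` helper is illustrative and not necessarily how ByeBrief implements the button:

```javascript
// Pull model names out of a parsed /api/tags response,
// e.g. { models: [{ name: "llama3.2:latest" }, ...] }.
// Returns an empty array if the shape is unexpected.
function listModelNames(data) {
  if (!data || !Array.isArray(data.models)) return [];
  return data.models.map((m) => m.name);
}

// Usage (requires Ollama running locally):
// fetch("http://localhost:11434/api/tags")
//   .then((res) => res.json())
//   .then((data) => console.log(listModelNames(data)));
```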

Troubleshooting

Port Already in Use

If port 5173 is in use, Vite will automatically try ports 5174, 5175, etc. Check the terminal output for the actual URL.

Ollama Connection Failed

  1. Ensure Ollama is running: ollama serve
  2. Check the base URL in Settings (default: http://localhost:11434)
  3. Verify no firewall is blocking localhost
  4. Try pulling a smaller model: ollama pull llama3.2

Build Errors

Ensure Node.js 18+ is installed:

node --version
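If the command works but you are unsure whether the version qualifies, the major number can be compared in a one-off script. A minimal sketch; the `isNodeVersionOk` helper name is illustrative:

```javascript
// Return true if a version string like "v18.19.0" (or "18.19.0")
// has a major version of at least the given minimum.
function isNodeVersionOk(version, minimum = 18) {
  const major = parseInt(version.replace(/^v/, "").split(".")[0], 10);
  return Number.isInteger(major) && major >= minimum;
}

// Usage:
// console.log(isNodeVersionOk(process.version));
```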