- Node.js: Version 18 or higher
- Operating System: Windows 10+, macOS 10.15+, Linux (Ubuntu 20.04+)
- RAM: 4GB minimum (8GB recommended for AI features)
- Storage: 200MB for application, plus local model storage
```shell
git clone <repository-url>
cd ByeBrief
npm install
```

Start the development server:

```shell
npm run dev
```

The application will start on http://localhost:5173 (or the next available port).

Build for production:

```shell
npm run build
```

Output will be in the dist/ directory.

Preview the production build:

```shell
npm run preview
```

ByeBrief uses Ollama for local AI capabilities. Installing Ollama is optional, but AI features such as legal-grade reports and analysis require it.
macOS/Linux:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

Windows: Download the installer from https://ollama.com/download/windows

Start the Ollama server:

```shell
ollama serve
```

Pull at least one model:

```shell
# Minimum model for basic features
ollama pull llama3.2

# Recommended for legal analysis
ollama pull gemma3
```

Verify the server is running:

```shell
curl http://localhost:11434/api/version
```

Expected response:
```json
{
  "version": "0.5.4"
}
```

Test text generation:

```shell
curl -X POST http://localhost:11434/api/generate \
  -d '{
    "model": "llama3.2",
    "prompt": "Hello",
    "stream": false
  }'
```

The expected response includes a "response" field containing "Hello" or similar.
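The two curl checks above can be combined into a single readiness script. This is a minimal sketch, not part of ByeBrief itself; the `OLLAMA_BASE_URL` and `OLLAMA_MODEL` environment variables and their defaults are assumptions for illustration:

```shell
# Sketch of a combined Ollama readiness check (hypothetical helper script).
# BASE_URL and MODEL defaults are assumptions; adjust to your setup.
BASE_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
MODEL="${OLLAMA_MODEL:-llama3.2}"

if ! curl -fsS --max-time 3 "$BASE_URL/api/version" >/dev/null 2>&1; then
  echo "Ollama is not reachable at $BASE_URL; start it with 'ollama serve'."
elif curl -fsS -X POST "$BASE_URL/api/generate" \
    -d "{\"model\":\"$MODEL\",\"prompt\":\"Hello\",\"stream\":false}" \
    2>/dev/null | grep -q '"response"'; then
  echo "Ollama and $MODEL are ready."
else
  echo "Server is up but $MODEL did not answer; try 'ollama pull $MODEL'."
fi
```

Running a check like this before `npm run dev` catches a stopped server or a missing model early.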
- Click the gear icon (Settings) in the bottom toolbar
- Go to the AI Model tab
- Set Base URL to `http://localhost:11434`
- Select your model (e.g., `llama3.2` or `gemma3`)
- Adjust settings (temperature, max tokens) as desired
- Click Test Connection to verify
If port 5173 is in use, Vite will automatically try ports 5174, 5175, etc. Check the terminal output for the actual URL.
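If you would rather free the default port than let Vite pick another, you can check what is holding it. A small sketch for macOS/Linux (assumes `lsof` is available; the port number is simply Vite's default):

```shell
# Report which process, if any, is bound to Vite's default port.
PORT=5173
PID=$(lsof -ti :"$PORT" 2>/dev/null || true)
if [ -n "$PID" ]; then
  echo "Port $PORT is held by PID $PID"
else
  echo "Port $PORT is free"
fi
```

You can then stop that process, or pass a different port explicitly with `npm run dev -- --port 5174`.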
- Ensure Ollama is running: `ollama serve`
- Check the base URL in Settings (default: `http://localhost:11434`)
- Verify no firewall is blocking localhost
- Try pulling a smaller model: `ollama pull llama3.2`
Ensure Node.js 18 or later is installed:

```shell
node --version
```
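The same check can be scripted by extracting the major version and comparing it against the documented minimum; a sketch (assumes `node` is on your PATH):

```shell
# Extract the major version from `node --version` (e.g. "v18.19.0" -> 18)
# and compare it against the documented minimum of 18.
MAJOR=$(node --version 2>/dev/null | sed 's/^v//' | cut -d. -f1)
if [ "${MAJOR:-0}" -ge 18 ]; then
  echo "Node.js $MAJOR meets the minimum"
else
  echo "Node.js is missing or older than 18"
fi
```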