An intelligent multi-agent medical consultation system powered by OpenAI GPT-4o, featuring specialized agents for triage, diagnosis, lifestyle recommendations, and safety guardrails.
- 🤖 Multi-Agent System - 6 specialized AI agents working together
- 🛡️ Safety Guardrails - Emergency detection and medical safety checks
- 🧠 RAG-Powered Knowledge - Medical knowledge base + conversation memory
- 📱 PWA Support - Installable progressive web app with offline capability
- ⚡ Real-time Streaming - SSE-based agent responses
- 🎨 Modern UI - Beautiful, responsive Next.js frontend
- 🐳 Production-Ready - Docker deployment with health checks
```mermaid
flowchart TD
    User[User] --> Frontend[Next.js PWA]
    Frontend --> API[FastAPI Backend]
    API --> Orchestrator[Agent Orchestrator]
    Orchestrator --> Guardrails[Safety Guardrails Agent]
    Orchestrator --> Triage[Triage Agent]
    Orchestrator --> Diagnostic[Diagnostic Agent]
    Orchestrator --> Lifestyle[Lifestyle Agent]
    Orchestrator --> Followup[Follow-up Agent]
    Guardrails --> RAG[RAG Agent]
    Triage --> RAG
    Diagnostic --> RAG
    Lifestyle --> RAG
    RAG --> MedicalKB[Medical Knowledge Base]
    RAG --> UserMemory[Conversation Memory]
    Orchestrator --> OpenAI[OpenAI GPT-4o]
    RAG --> FAISS[FAISS Vector Store]
    UserMemory --> PostgreSQL[(PostgreSQL)]
```
- Guardrails Agent - Detects emergencies, validates safety
- RAG Agent - Retrieves relevant medical knowledge
- Triage Agent - Assesses urgency and routes care
- Diagnostic Agent - Provides differential diagnosis
- Lifestyle Agent - Suggests evidence-based recommendations
- Follow-up Agent - Generates clarifying questions
- Python 3.10+
- Node.js 18+
- Docker & Docker Compose
- OpenAI API Key (required)
- 8GB+ RAM (16GB recommended)
- 10GB+ Disk Space
```bash
git clone <your-repo>
cd doctorg

# Copy the environment file
cp .env.example .env

# Edit .env with your API keys
nano .env
```

Edit the `.env` file:

```bash
# Required - Add your OpenAI API key
OPENAI_API_KEY=sk-proj-your_key_here

# Database (auto-configured in Docker)
POSTGRES_PASSWORD=your_secure_password_here
JWT_SECRET=your_jwt_secret_min_32_chars

# Optional - for dataset augmentation
GOOGLE_API_KEY=your_google_key_here
PUBMED_EMAIL=your_email@example.com
```
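The required settings above can be sanity-checked at startup. A minimal sketch, assuming only the variable names from the `.env` example — the `missing_settings` helper is illustrative, not part of the codebase:

```python
import os

# Settings the backend cannot run without (names match the .env above)
REQUIRED = ["OPENAI_API_KEY", "POSTGRES_PASSWORD", "JWT_SECRET"]

def missing_settings(env=None):
    """Return the required variables that are absent or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]

# At startup: if missing_settings(): raise SystemExit("Missing settings: ...")
```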
```bash
# Build and start all services
docker-compose up --build

# Access the application
# Frontend:    http://localhost:3000
# Backend API: http://localhost:8000
# API Docs:    http://localhost:8000/docs
```

Backend:

```bash
cd backend

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run database migrations
python -c "from app.db.database import init_db; init_db()"

# Start the backend server
uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
```

Frontend:

```bash
cd frontend

# Install dependencies
npm install

# Start the development server
npm run dev

# Access at http://localhost:3000
```

The system uses a FAISS vector database for fast medical knowledge retrieval:

```bash
cd backend

# Activate the virtual environment
source venv/bin/activate

# Ingest the DoctorG medical dataset
python scripts/ingest_doctorg_data.py
```

This creates:
- `backend/data/faiss_indices/medical_knowledge.index` - FAISS vector index
- `backend/data/faiss_indices/medical_knowledge.metadata` - Condition metadata
What it does:
- Loads medical conditions from `backend/data/doctorg_data.csv`
- Generates embeddings using sentence-transformers
- Builds FAISS index for fast similarity search
- Stores condition metadata (symptoms, descriptions, weights)
Expected Output:

```
Loading data from backend/data/doctorg_data.csv
Loaded 5000 records
After cleaning: 4850 records
Loading embedding model: sentence-transformers/all-MiniLM-L6-v2
Generating embeddings...
100%|██████████| 152/152 [00:45<00:00]
Building FAISS index...
FAISS index built with 4850 vectors
✅ DoctorG data ingestion completed successfully!
```
- Indexed: 4850 medical conditions
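Conceptually, the index built by the ingestion script supports nearest-neighbor search over normalized embedding vectors. Here is that core idea sketched with NumPy — FAISS does the same at scale, and the toy vectors in the comments are made up for illustration:

```python
import numpy as np

def build_index(embeddings):
    """L2-normalize rows so that inner product equals cosine similarity
    (the convention an inner-product FAISS index relies on)."""
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.clip(norms, 1e-12, None)

def search(index, query, k=3):
    """Return (indices, scores) of the k conditions most similar to query."""
    q = query / max(np.linalg.norm(query), 1e-12)
    scores = index @ q
    top = np.argsort(-scores)[:k]
    return top.tolist(), scores[top].tolist()
```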
The RAG system retrieves from three sources:
- Medical Knowledge Base - DoctorG disease dataset (indexed)
- User Conversation History - Previous consultations (FAISS + PostgreSQL)
- PubMed Literature - Medical research (placeholder for future enhancement)
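Results from these sources have to be combined before they reach the agents. A sketch of one plausible merge strategy — the `(doc_id, score)` tuple format is an assumption for illustration, not the project's actual schema:

```python
def merge_hits(kb_hits, memory_hits, k=5):
    """Dedupe hits by document id, keep the best score per id,
    and return the overall top-k by descending score."""
    best = {}
    for doc_id, score in list(kb_hits) + list(memory_hits):
        if doc_id not in best or score > best[doc_id]:
            best[doc_id] = score
    return sorted(best.items(), key=lambda item: -item[1])[:k]
```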
Every consultation goes through these agents:
1. Safety Check (Guardrails Agent)
   - Detects emergency symptoms
   - Flags out-of-scope queries
   - Prevents medication prescriptions
2. Knowledge Retrieval (RAG Agent)
   - Searches medical knowledge base
   - Retrieves user conversation history
   - Provides context to other agents
3. Initial Assessment (Triage Agent)
   - Evaluates symptom urgency
   - Routes to appropriate care level
   - Determines if follow-up is needed
4. Analysis (Diagnostic Agent)
   - Provides differential diagnosis
   - Lists 3-5 possible conditions
   - Suggests medical tests
5. Recommendations (Lifestyle Agent)
   - Evidence-based lifestyle changes
   - Dietary modifications
   - Exercise and wellness practices
6. Clarification (Follow-up Agent)
   - Asks targeted questions
   - Gathers missing information
   - Improves diagnostic accuracy
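The six-step flow above can be sketched as a sequential pipeline in which a guardrails emergency short-circuits the remaining agents. The agent callables here are stand-ins, not the project's real interfaces:

```python
def run_consultation(message, agents):
    """Run agents in order; stop early if guardrails flags an emergency.

    `agents` is an ordered list of (name, callable) pairs; each callable
    receives the user message plus all earlier agents' results.
    """
    results = {}
    for name, agent in agents:
        results[name] = agent(message, results)
        if name == "guardrails" and results[name].get("emergency"):
            break  # skip diagnosis etc.; surface the emergency alert instead
    return results
```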
Emergency Detection:
- Chest pain, stroke symptoms, severe bleeding
- Automatic emergency alert
- Directs to call 911
Medical Safety:
- Cannot prescribe medications
- Always includes disclaimer
- Recommends professional consultation
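A minimal sketch of the keyword side of emergency detection — the real guardrails agent also reasons with the LLM, and this pattern list is illustrative only:

```python
# Illustrative red-flag phrases; a production list would be far larger
EMERGENCY_PATTERNS = [
    "chest pain",
    "stroke",
    "severe bleeding",
    "difficulty breathing",
]

def detect_emergency(text):
    """Return the red-flag phrases found in the user's message."""
    lowered = text.lower()
    return [p for p in EMERGENCY_PATTERNS if p in lowered]
```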
On Mobile:
- Visit the app in your mobile browser
- Tap "Add to Home Screen" (iOS) or "Install" (Android)
- App installs like a native app
On Desktop:
- Look for install icon in browser address bar
- Click "Install DoctorG"
- App opens in standalone window
- Caches recent conversations
- Queues messages when offline
- Syncs when connection restored
- ✅ Works offline
- ✅ Installable on home screen
- ✅ Fast loading with service worker
- ✅ Push notifications (future feature)
- ✅ Responsive design
- ✅ App-like experience
```bash
# Build for production
docker-compose -f docker-compose.yml up --build -d

# View logs
docker-compose logs -f

# Stop services
docker-compose down

# Stop and remove volumes (clean slate)
docker-compose down -v
```

Edit `docker-compose.yml` to enable GPU:

```yaml
services:
  backend:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

Then run:

```bash
# Requires nvidia-docker2 installed
docker-compose up --build
```

```bash
# Development
docker-compose -f docker-compose.yml up

# Production with GPU
docker-compose -f docker-compose.prod.yml up

# Staging
docker-compose -f docker-compose.staging.yml up
```

Register:

```bash
curl -X POST http://localhost:8000/api/v1/auth/register \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "password": "securepassword123",
    "full_name": "John Doe"
  }'
```

Login:

```bash
curl -X POST http://localhost:8000/api/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "password": "securepassword123"
  }'
```

Response:

```json
{
  "access_token": "eyJhbGciOiJIUzI1NiIs...",
  "token_type": "bearer",
  "expires_in": 3600
}
```

Via Web UI:
- Open http://localhost:3000
- Login with your credentials
- Describe your symptoms
- Get real-time streaming response
Via API:

```bash
curl -X POST http://localhost:8000/api/v1/chat/predict \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "symptoms": ["headache", "fever", "fatigue"]
  }'
```

Submit feedback:

```bash
curl -X POST http://localhost:8000/api/v1/feedback \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -d '{
    "session_id": "session-id-here",
    "rating": 5,
    "helpful": true,
    "comments": "Very helpful diagnosis!"
  }'
```

Free Tier:
- 5 sessions per month
- No memory/history
- Basic medical insights
Premium Tier:
- Unlimited sessions
- Full RAG memory (past consultations)
- Detailed follow-up questions
- Priority support
Edit `backend/app/core/constants.py`:

```python
class SubscriptionLimits:
    FREE_SESSION_LIMIT = 5      # Change to desired limit
    PREMIUM_SESSION_LIMIT = -1  # -1 = unlimited
```

Backend tests:

```bash
cd backend
pytest tests/ -v
```

Frontend tests:

```bash
cd frontend
npm test
```

E2E tests:

```bash
# Start all services
docker-compose up -d

# Run E2E tests
npm run test:e2e
```

Health check:

```bash
curl http://localhost:8000/health
```

Response:
```json
{
  "status": "healthy",
  "version": "1.0.0",
  "timestamp": "2026-02-15T10:30:00",
  "services": {
    "database": "connected",
    "llm": "ready",
    "rag": "ready"
  }
}
```

```bash
# Backend logs
docker-compose logs -f backend

# Frontend logs
docker-compose logs -f frontend

# Database logs
docker-compose logs -f postgres
```

- ✅ No hardcoded secrets (all in .env)
- ✅ Bcrypt password hashing
- ✅ JWT authentication with expiration
- ✅ SQL injection prevention (ORM)
- ✅ XSS protection (React escaping)
- ✅ CORS configured
- ✅ Security headers enabled
- ✅ Rate limiting implemented
- Change default passwords in `.env`
- Use a strong JWT secret (min 32 characters)
- Enable HTTPS in production
- Update dependencies regularly: `pip list --outdated`
- Back up the database regularly
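To see why the JWT secret must be long and random: an HS256 token is just an HMAC over the header and payload, so anyone who can guess the secret can forge tokens. A stdlib-only sketch of signing and verification — the actual backend presumably uses a JWT library rather than this:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(payload: dict, secret: str) -> str:
    """Build an HS256 JWT: header.payload.signature, each base64url-encoded."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64url(json.dumps(payload).encode())
    sig = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256)
    return f"{header}.{body}.{_b64url(sig.digest())}"

def verify_jwt(token: str, secret: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    header, body, sig = token.split(".")
    expected = hmac.new(secret.encode(), f"{header}.{body}".encode(), hashlib.sha256)
    return hmac.compare_digest(_b64url(expected.digest()), sig)
```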
```bash
# Check CUDA installation
nvidia-smi

# Check PyTorch CUDA
python -c "import torch; print(torch.cuda.is_available())"

# Reinstall PyTorch with CUDA
pip install torch --index-url https://download.pytorch.org/whl/cu118
```

```bash
# Clean rebuild
docker-compose down -v
docker-compose build --no-cache
docker-compose up

# Check container logs
docker-compose logs backend
```

```bash
# Reset database
docker-compose down -v
docker-compose up postgres -d
sleep 10
docker-compose up backend
```

```bash
# Find and kill the process on port 8000
# Windows:
netstat -ano | findstr :8000
taskkill /PID <PID> /F

# Linux/Mac:
lsof -ti:8000 | xargs kill -9
```

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
Authentication:

- `POST /api/v1/auth/register` - Register new user
- `POST /api/v1/auth/login` - Login and get JWT token
- `POST /api/v1/auth/logout` - Logout user

Chat/Consultation:

- `POST /api/v1/chat/stream` - Stream agent responses (SSE)
- `POST /api/v1/chat/predict` - Get consultation (non-streaming)

User Management:

- `GET /api/v1/user/profile` - Get user profile
- `GET /api/v1/user/sessions` - Get consultation history

Feedback:

- `POST /api/v1/feedback` - Submit consultation feedback
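For scripted access, an authenticated request to one of these endpoints can be built with the standard library alone. A hedged sketch — the endpoint and payload come from this README, but the helper itself is illustrative:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000/api/v1"

def predict_request(token, symptoms):
    """Build (but don't send) a POST request for /chat/predict."""
    return urllib.request.Request(
        f"{BASE_URL}/chat/predict",
        data=json.dumps({"symptoms": symptoms}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# To send: urllib.request.urlopen(predict_request(token, ["headache"]))
```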
The `/chat/stream` endpoint returns Server-Sent Events with this structure:

```json
{
  "type": "agent_start|content|emergency|complete",
  "agent": "triage|diagnostic|lifestyle|followup",
  "content": "Agent response text...",
  "metadata": { ... }
}
```

Event Types:

- `agent_start` - Agent begins processing
- `content` - Streaming text chunk
- `agent_complete` - Agent finished
- `emergency` - Emergency detected
- `complete` - Consultation done
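Client-side, these events arrive as `data:` lines. A minimal parser sketch — it assumes each `data:` line carries one complete JSON event, which matches the structure above but simplifies a full SSE parser:

```python
import json

def parse_sse(raw: str):
    """Extract JSON payloads from the `data:` lines of an SSE stream."""
    events = []
    for line in raw.splitlines():
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events
```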
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License.
- Developer: Abhishek Gupta
- GitHub: @cosmos-dx
- LinkedIn: abhishek-gupta
- OpenAI for GPT-4o API and embeddings
- Sentence Transformers for medical embeddings
- FAISS for fast similarity search
- FastAPI and Next.js communities
- Medical datasets and research papers
For issues and questions:
- GitHub Issues: Create an issue
- Email: support@doctorg.ai
- Discord: Join our community