An internal security MVP that ingests security-relevant logs, normalizes events into a common schema, computes explainable risk scores using rule-based behavioral anomaly detection, and surfaces alerts in a triage-focused dashboard.
- Event Ingestion API - Receive security logs from multiple sources via HTTP API
- Event Normalization - Transform raw events into a common schema
- Behavioral Baselines - Compute rolling 14-day behavioral profiles per actor
- Rule-Based Scoring - Explainable risk scores with visible rule contributions
- Alert Generation - Automatic alerts when risk thresholds are exceeded
- Triage Dashboard - Overview, alerts list, actor profiles, and admin configuration
- Integrations - Pull connectors (AWS GuardDuty), push connectors, and collector connectors (MikroTik, Generic Syslog)
- Integration Packs - Downloadable setup packages with infrastructure-as-code, verification, and rollback steps
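The rolling 14-day baseline from the feature list above can be sketched as follows. This is an illustrative shape only; the actual field names and aggregation logic in the project may differ.

```typescript
// Sketch of a rolling 14-day behavioral profile per actor.
// All type and field names here are illustrative assumptions.
interface ActorEvent {
  actorId: string;
  timestamp: Date;
  bytes?: number;
  ip?: string;
}

interface ActorBaseline {
  actorId: string;
  avgDailyBytes: number;   // average bytes/day over the window
  knownIps: Set<string>;   // IPs seen inside the window
}

function computeBaseline(
  actorId: string,
  events: ActorEvent[],
  now: Date
): ActorBaseline {
  const windowMs = 14 * 24 * 60 * 60 * 1000; // 14 days
  const recent = events.filter(
    (e) =>
      e.actorId === actorId &&
      now.getTime() - e.timestamp.getTime() <= windowMs
  );
  const totalBytes = recent.reduce((sum, e) => sum + (e.bytes ?? 0), 0);
  return {
    actorId,
    avgDailyBytes: totalBytes / 14,
    knownIps: new Set(
      recent.map((e) => e.ip).filter((ip): ip is string => !!ip)
    ),
  };
}
```

A baseline like this is what rules such as `new_ip` and `volume_spike` compare incoming events against.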
- Node.js 20+
- Docker and Docker Compose
- Neon PostgreSQL account (or any PostgreSQL database)
```
git clone <repository-url>
cd insider-risk-monitor
npm install
cp .env.example .env
```

Edit `.env` with your configuration:
```
# Database (Neon PostgreSQL)
DATABASE_URL="postgresql://user:password@host/database?sslmode=require"

# Auth
AUTH_SECRET="generate-a-secure-random-string"
AUTH_URL="http://localhost:3000"

# Admin credentials
ADMIN_EMAIL="admin@example.com"
ADMIN_PASSWORD="your-secure-password"
ADMIN_NAME="Admin User"
```

```
# Start all services (web, worker, redis)
docker compose up

# Or run in detached mode
docker compose up -d
```

This will:
- Run database migrations automatically
- Seed default scoring rules and admin user
- Start the Next.js web application on port 3000
- Start the background worker for scoring and cleanup
- Start Redis for rate limiting
Open http://localhost:3000 and log in with your admin credentials.
For local development without Docker:
```
# Install dependencies
npm install

# Generate Prisma client
npx prisma generate

# Push schema to database
npx prisma db push

# Seed the database
npm run seed

# Start development server
npm run dev

# In another terminal, start the worker
npm run worker
```

`POST /api/ingest/{sourceKey}`
Include the API key in the x-api-key header:
```
curl -X POST http://localhost:3000/api/ingest/vpn \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "timestamp": "2024-01-15T10:30:00Z",
    "userId": "john.doe@company.com",
    "action": "login",
    "ip": "192.168.1.100",
    "success": true
  }'
```

| Field | Type | Required | Description |
|---|---|---|---|
| timestamp | string (ISO 8601) | Yes | When the event occurred |
| userId / user / actor | string | Yes | Actor identifier |
| action / type | string | Yes | Action type (login, read, download, etc.) |
| resource | string | No | Resource being accessed |
| resourceId | string | No | Resource identifier |
| ip | string | No | Source IP address |
| userAgent | string | No | User agent string |
| bytes | number | No | Bytes transferred |
| success / outcome | boolean/string | No | Whether action succeeded |
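The aliased fields in the table above (for example `userId` / `user` / `actor`) suggest a normalization step like the following sketch. The shapes and coercion rules here are illustrative assumptions, not the project's actual normalizer.

```typescript
// Sketch of collapsing the field aliases from the table above
// into one common shape (illustrative assumptions throughout).
interface RawEvent {
  timestamp: string;
  userId?: string;
  user?: string;
  actor?: string;
  action?: string;
  type?: string;
  success?: boolean;
  outcome?: string;
  [key: string]: unknown; // optional fields pass through untouched
}

interface NormalizedEvent {
  timestamp: string;
  actorId: string;
  action: string;
  success: boolean | null; // null when the source did not report outcome
}

function normalize(raw: RawEvent): NormalizedEvent {
  const actorId = raw.userId ?? raw.user ?? raw.actor;
  const action = raw.action ?? raw.type;
  if (!actorId || !action) {
    throw new Error("missing required field: actor or action");
  }
  let success: boolean | null = null;
  if (typeof raw.success === "boolean") {
    success = raw.success;
  } else if (typeof raw.outcome === "string") {
    success = raw.outcome.toLowerCase() === "success";
  }
  return { timestamp: raw.timestamp, actorId, action, success };
}
```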
VPN Login:

```
curl -X POST http://localhost:3000/api/ingest/vpn \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_VPN_API_KEY" \
  -d '{
    "timestamp": "2024-01-15T10:30:00Z",
    "userId": "john.doe@company.com",
    "action": "login",
    "ip": "203.0.113.50",
    "userAgent": "OpenVPN/2.5.0",
    "success": true
  }'
```

File Download:
```
curl -X POST http://localhost:3000/api/ingest/app \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_APP_API_KEY" \
  -d '{
    "timestamp": "2024-01-15T11:45:00Z",
    "userId": "jane.smith@company.com",
    "action": "download",
    "resource": "document",
    "resourceId": "doc-12345",
    "bytes": 5242880,
    "success": true
  }'
```

Failed IAM Action:
```
curl -X POST http://localhost:3000/api/ingest/iam \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_IAM_API_KEY" \
  -d '{
    "timestamp": "2024-01-15T14:20:00Z",
    "userId": "service-account@company.com",
    "action": "admin_change",
    "resource": "user",
    "resourceId": "user-67890",
    "success": false
  }'
```

After starting the services, generate sample data:
```
# Using npm script
npm run generate-demo

# Or with Docker
docker compose exec web npx tsx scripts/generate-demo-data.ts
```

This creates:
- 3 actors with normal behavior patterns
- 1 anomalous actor triggering multiple rules
- At least 2 alerts
- Overview (`/`) - See alerts today and high-risk actors
- Alerts (`/alerts`) - Browse and filter alerts by severity/status
- Alert Detail (`/alerts/[id]`) - View score breakdown and evidence
- Actors (`/actors`) - See all actors with risk levels
- Actor Detail (`/actors/[id]`) - View timeline and baseline values
- Rules (`/rules`) - Configure scoring rules and thresholds
- Sources (`/sources`) - Manage event sources and API keys
- Audit (`/audit`) - View configuration change history
Send events that trigger scoring rules:
```
# Off-hours activity (if current time is outside 9-17)
curl -X POST http://localhost:3000/api/ingest/vpn \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "timestamp": "'$(date -u +%Y-%m-%dT03:00:00Z)'",
    "userId": "test.user@company.com",
    "action": "login",
    "ip": "10.0.0.1"
  }'

# Volume spike (large download)
curl -X POST http://localhost:3000/api/ingest/app \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "timestamp": "'$(date -u +%Y-%m-%dT%H:%M:%SZ)'",
    "userId": "test.user@company.com",
    "action": "download",
    "bytes": 104857600
  }'
```

| Rule | Description | Default Weight | Default Threshold |
|---|---|---|---|
| off_hours | Activity outside typical hours | 15 | 2+ events |
| new_ip | First-seen IP in last 14 days | 15 | 1+ new IPs |
| volume_spike | Bytes transferred > 3x baseline | 25 | 3x multiplier |
| scope_expansion | Accessing 2x more resources than normal | 20 | 2x multiplier |
| failure_burst | Many failures in short window | 25 | 5+ failures in 10 min |
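The "visible rule contributions" claim can be illustrated with a minimal scoring sketch: each triggered rule contributes its weight, and the per-rule breakdown is retained for display. The shapes and the cap are assumptions; only the weights mirror the defaults in the table above.

```typescript
// Minimal sketch of explainable rule-based scoring
// (hypothetical shapes; weights taken from the defaults above).
interface RuleHit {
  rule: string;
  weight: number;
}

function scoreActor(
  hits: RuleHit[],
  cap = 100 // assumed upper bound on the risk score
): { score: number; contributions: RuleHit[] } {
  const total = hits.reduce((sum, h) => sum + h.weight, 0);
  // Keep the individual hits so the dashboard can show
  // exactly which rules drove the score.
  return { score: Math.min(total, cap), contributions: hits };
}
```

With the default weights, an actor hitting `off_hours` (15), `volume_spike` (25), and `failure_burst` (25) scores 65, which exceeds the default `ALERT_THRESHOLD` of 60 and would raise an alert.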
```
┌─────────────────────────────────────────────────────────────┐
│                        Next.js App                          │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌─────────────┐     │
│  │Dashboard │ │API Routes│ │  Auth    │ │  Admin UI   │     │
│  └──────────┘ └──────────┘ └──────────┘ └─────────────┘     │
├─────────────────────────────────────────────────────────────┤
│                      Business Logic                         │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌─────────────┐     │
│  │Ingestion │ │Normalize │ │ Scoring  │ │  Alerting   │     │
│  └──────────┘ └──────────┘ └──────────┘ └─────────────┘     │
├─────────────────────────────────────────────────────────────┤
│                        Data Layer                           │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              Prisma + Neon PostgreSQL               │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│                     Background Worker                       │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐                     │
│  │ Baseline │ │ Scoring  │ │Retention │                     │
│  │ (5 min)  │ │ (5 min)  │ │ (daily)  │                     │
│  └──────────┘ └──────────┘ └──────────┘                     │
└─────────────────────────────────────────────────────────────┘
```
```
# Development
npm run dev              # Start Next.js dev server
npm run worker           # Start background worker
npm run build            # Build for production
npm run start            # Start production server

# Database
npm run db:generate      # Generate Prisma client
npm run db:push          # Push schema to database
npm run db:migrate       # Run migrations
npm run db:studio        # Open Prisma Studio

# Data
npm run seed             # Seed default data
npm run generate-demo    # Generate demo data

# Testing
npm test                 # Run tests
npm run test:watch       # Run tests in watch mode
npm run test:coverage    # Run tests with coverage

# Docker
docker compose up        # Start all services
docker compose up -d     # Start in detached mode
docker compose down      # Stop all services
docker compose logs -f   # Follow logs
```

| Variable | Description | Default |
|---|---|---|
| DATABASE_URL | PostgreSQL connection string | Required |
| AUTH_SECRET | NextAuth secret key | Required |
| AUTH_URL | Application URL | http://localhost:3000 |
| ADMIN_EMAIL | Default admin email | admin@example.com |
| ADMIN_PASSWORD | Default admin password | Required |
| ADMIN_NAME | Default admin name | Admin User |
| REDIS_URL | Redis connection string | redis://localhost:6379 |
| DATA_RETENTION_DAYS | Event retention period | 90 |
| BASELINE_INTERVAL_MS | Baseline computation interval | 300000 (5 min) |
| SCORING_INTERVAL_MS | Scoring run interval | 300000 (5 min) |
| ALERT_THRESHOLD | Risk score threshold for alerts | 60 |
| INTEGRATION_ENCRYPTION_KEY | AES-256-GCM key for integration secrets (base64, 32 bytes) | Required for integrations |
- No spyware: No keylogging, screenshots, or invasive monitoring
- Existing telemetry only: Uses security data companies already generate
- Explainability: All scores include visible rule contributions
- Data minimization: Configurable retention and optional redaction
- API key security: Keys stored hashed (bcrypt)
- Audit logging: All admin changes are logged
The Integrations feature enables automated security telemetry onboarding through three connector types:
- Pull Connectors - Scheduled API fetch (e.g., AWS GuardDuty)
- Push Connectors - Webhook delivery from external sources
- Collector Connectors - Syslog/flow forwarding (e.g., MikroTik, Generic Syslog)
Add these to your .env file:
```
# Integration secrets encryption key (32 bytes, base64-encoded)
# Generate with: node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"
INTEGRATION_ENCRYPTION_KEY="your-base64-encoded-32-byte-key"
```

| Variable | Description | Required |
|---|---|---|
| INTEGRATION_ENCRYPTION_KEY | AES-256-GCM encryption key for secrets (base64-encoded 32 bytes) | Yes |
- Navigate to Integrations (`/integrations`) in the dashboard
- Click Create Integration
- Follow the wizard:
  - Step 1: Select provider (AWS, MikroTik, Generic Syslog)
  - Step 2: Select mode (pull/push/collector) based on provider
  - Step 3: Enter provider-specific configuration
  - Step 4: Configure schedule (for pull connectors)
  - Step 5: Set retention and redaction options
  - Step 6: Test connection
  - Step 7: Review and create
Each integration can generate a downloadable Integration Pack - a ZIP file containing everything needed to deploy the integration:
- Infrastructure setup files (Terraform/CloudFormation for AWS, docker-compose for collectors)
- Verification commands to confirm successful setup
- Rollback steps for cleanup
- Troubleshooting guide
- Data minimization documentation
To generate a pack:
- Go to the integration detail page (`/integrations/[id]`)
- Click Generate Integration Pack
- Download and extract the ZIP
- Follow the README.md inside the pack
Ingest AWS GuardDuty findings automatically.
Configuration:
- AWS Account ID
- IAM Role ARN (with GuardDuty read permissions)
- External ID (for secure cross-account access)
- Regions to monitor
Demo Flow:
- Create the integration in the dashboard
- Download the Integration Pack
- Deploy the IAM role using Terraform or CloudFormation:

  ```
  cd integration-pack/terraform
  terraform init
  terraform apply
  ```

- Return to the dashboard and click Test Connection
- Set a schedule (e.g., every 5 minutes)
- Enable the integration
The connector will fetch GuardDuty findings and normalize them into events with:
- Actor: AWS principal (user/role ARN)
- Action: GuardDuty finding type
- Severity: Mapped from GuardDuty severity (1-10 → low/medium/high/critical)
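One plausible form of the severity mapping above is a simple threshold function over GuardDuty's numeric scale. The exact cut-offs the connector uses are an assumption here; these follow GuardDuty's documented bands.

```typescript
// Assumed mapping from GuardDuty's 1-10 numeric severity to the
// four levels mentioned above; cut-offs are illustrative.
type Severity = "low" | "medium" | "high" | "critical";

function mapGuardDutySeverity(value: number): Severity {
  if (value >= 9) return "critical"; // 9.0-10.0
  if (value >= 7) return "high";     // 7.0-8.9
  if (value >= 4) return "medium";   // 4.0-6.9
  return "low";                      // below 4.0
}
```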
Ingest logs from MikroTik routers via syslog.
Configuration:
- Syslog port (default: 514)
- Device name identifier
Demo Flow:
- Create the integration in the dashboard
- Download the Integration Pack
- Deploy the syslog collector:

  ```
  cd integration-pack
  docker compose up -d
  ```

- Configure your MikroTik router:

  ```
  /system logging action add name=remote target=remote remote=<collector-ip> remote-port=514
  /system logging add action=remote topics=system,info
  add action=remote topics=firewall
  ```

- Verify events appear in the dashboard
The collector parses and normalizes:
- Login success/failure events
- Configuration changes
- Firewall drops/denies
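The login parsing listed above might look like the following sketch. The message format and regexes are illustrative assumptions; real RouterOS log lines vary by version and configured topics.

```typescript
// Rough sketch of parsing MikroTik login messages into the common
// schema (message format and patterns are assumptions).
interface ParsedLogin {
  action: "login" | "login_failure";
  userId: string;
  ip: string;
}

function parseMikrotikLogin(line: string): ParsedLogin | null {
  const ok = line.match(/user (\S+) logged in from (\S+)/);
  if (ok) return { action: "login", userId: ok[1], ip: ok[2] };
  const fail = line.match(/login failure for user (\S+) from (\S+)/);
  if (fail) return { action: "login_failure", userId: fail[1], ip: fail[2] };
  return null; // not a login-related line
}
```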
Ingest syslog from any device supporting RFC3164 or RFC5424.
Configuration:
- Syslog port
- Protocol (UDP/TCP)
- Format (RFC3164, RFC5424, or auto-detect)
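The auto-detect option can exploit a structural difference between the two formats: RFC 5424 messages carry a version digit immediately after the priority field, while RFC 3164 messages do not. This sketch is a simplification of whatever detection the collector actually does.

```typescript
// Sketch of syslog format auto-detection: RFC 5424 has "<PRI>VERSION ",
// e.g. "<34>1 ...", while RFC 3164 goes straight to the timestamp.
function detectSyslogFormat(msg: string): "rfc5424" | "rfc3164" | "unknown" {
  if (/^<\d{1,3}>\d \S/.test(msg)) return "rfc5424";
  if (/^<\d{1,3}>/.test(msg)) return "rfc3164";
  return "unknown"; // no priority field at all
}
```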
Demo Flow:
- Create the integration in the dashboard
- Download the Integration Pack
- Deploy the collector:

  ```
  cd integration-pack
  docker compose up -d
  ```

- Configure your devices to send syslog to the collector
- Test with the included sample generator:

  ```
  ./scripts/send-sample-syslog.sh
  ```
Each integration displays a health status:
| Status | Description |
|---|---|
| Healthy | Events received within expected threshold |
| Degraded | Events delayed but within error threshold |
| Failed | Consecutive errors exceed threshold |
Monitor integration health from the Integrations list page. Click on any integration to see detailed metrics, error logs, and retry history.
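The three statuses in the table above might be derived roughly as follows. The threshold names and default values here are assumptions, not the project's actual configuration.

```typescript
// Sketch of deriving integration health from event recency and
// consecutive errors (threshold names and values are assumptions).
type Health = "healthy" | "degraded" | "failed";

function evaluateHealth(
  minutesSinceLastEvent: number,
  consecutiveErrors: number,
  opts = { expectedMinutes: 10, errorMinutes: 60, maxErrors: 5 }
): Health {
  if (consecutiveErrors >= opts.maxErrors) return "failed";
  if (minutesSinceLastEvent <= opts.expectedMinutes) return "healthy";
  if (minutesSinceLastEvent <= opts.errorMinutes) return "degraded";
  return "failed"; // silent for too long
}
```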
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/integrations` | List all integrations |
| POST | `/api/integrations` | Create new integration |
| GET | `/api/integrations/:id` | Get integration details |
| PUT | `/api/integrations/:id` | Update integration |
| DELETE | `/api/integrations/:id` | Delete integration |
| POST | `/api/integrations/:id/test` | Test connection |
| POST | `/api/integrations/:id/run` | Trigger immediate fetch |
| POST | `/api/integrations/:id/pause` | Pause integration |
| POST | `/api/integrations/:id/resume` | Resume integration |
| POST | `/api/integrations/:id/rotate` | Rotate credentials |
| GET | `/api/integrations/:id/pack` | Download Integration Pack |
| GET | `/api/integrations/:id/preview` | Get mapping preview |
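For scripting against the endpoints above, a small URL helper keeps paths consistent. The authentication mechanism is an assumption here; in practice the dashboard session or an admin credential is likely required.

```typescript
// Helper sketch for building management API URLs from the table above.
type IntegrationAction = "test" | "run" | "pause" | "resume" | "rotate";

function integrationUrl(
  base: string,
  id: string,
  action?: IntegrationAction
): string {
  const path =
    `/api/integrations/${encodeURIComponent(id)}` +
    (action ? `/${action}` : "");
  return new URL(path, base).toString();
}

// Example: trigger an immediate fetch for one integration
// (auth headers omitted; they are deployment-specific).
async function runNow(base: string, id: string): Promise<number> {
  const res = await fetch(integrationUrl(base, id, "run"), { method: "POST" });
  return res.status;
}
```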
MIT