heyhaiden/v0-ai-claims-agent
ClaimsIQ

AI-powered vehicle damage assessment for insurance claims agents. Replaces the manual review-to-authorization workflow with a system that analyzes damage photos, generates repair estimates, and routes decisions based on confidence thresholds.

Problem: The average auto repair cycle takes 22.3 days. 80% of customers with poor claims experiences leave their carrier. The bottleneck is the manual review step — agents visually inspect photos, estimate costs from memory or databases, and pass estimates up for approval.

Solution: ClaimsIQ automates the middle of the claims pipeline (review, estimate, routing) while keeping agents in control of final decisions.

What It Does

ClaimsIQ handles three steps of the claims workflow:

| Step | Before ClaimsIQ | With ClaimsIQ |
| --- | --- | --- |
| Damage Review | Agent manually inspects photos, identifies damage types | Claude Vision detects damage regions, classifies severity, returns bounding boxes |
| Estimate Generation | Agent looks up repair costs in databases, writes line items | System maps detected damage to cost tables, generates itemized estimate with parts + labor |
| Routing & Approval | Agent decides whether to approve or escalate | Confidence-based routing: auto-approve, agent review, or senior escalation |

The agent remains the decision-maker. The AI handles the analysis; the agent confirms, edits, or overrides.

Core Workflow

  1. Submit a claim — Enter policy details, vehicle info, accident description, and upload damage photos (/claims/new)
  2. AI assessment runs — Photos are sent to Claude Vision. The system validates image quality, detects damage regions with bounding boxes, classifies severity, and calculates cost estimates
  3. Agent reviews — The claim detail view (/claims/[id]) shows detected damages overlaid on photos, an editable cost table, confidence scores, and fraud risk indicators
  4. Agent acts — Based on the AI confidence score and routing logic:
    • High confidence (≥85%), minor damage: One-click confirm
    • Medium confidence (60–84%): Agent reviews annotated images, adjusts estimate, approves or escalates
    • Low confidence (<60%), high value (>$15K), or fraud flags: Routed to senior adjuster
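The routing rules above can be sketched as follows. Thresholds come from this README; the function and field names are hypothetical, and the minor-damage check is omitted for brevity:

```typescript
// Sketch of the confidence-based router. Not the repo's actual code.
type Route = "auto-approve" | "agent-review" | "senior-escalation";

interface AssessmentSummary {
  confidence: number;    // 0-100 score from the AI assessment
  totalEstimate: number; // sum of line items in USD
  fraudFlags: string[];  // non-empty when fraud indicators fired
}

function routeClaim(a: AssessmentSummary): Route {
  // Fraud flags, low confidence, or high value always reach a senior adjuster.
  if (a.fraudFlags.length > 0 || a.confidence < 60 || a.totalEstimate > 15_000) {
    return "senior-escalation";
  }
  // High confidence on a routine claim is eligible for one-click confirm.
  if (a.confidence >= 85) return "auto-approve";
  // Everything in between gets a human review pass.
  return "agent-review";
}
```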

Every agent edit or override is logged as a training signal for future model improvement.

Tech Stack

  • Next.js 16 with App Router, React 19, TypeScript
  • Claude Vision API (claude-sonnet-4-20250514) for damage detection and classification
  • Tailwind CSS + shadcn/ui (Radix primitives) for the interface
  • In-memory store with useSyncExternalStore for state management
  • pnpm for package management
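The in-memory store listed above can be sketched as a plain external store whose subscribe/getSnapshot pair plugs directly into React's useSyncExternalStore. Types and names here are simplified assumptions, not the repo's lib/claims-store.ts:

```typescript
// Minimal external store compatible with useSyncExternalStore.
// `Claim` is heavily simplified; the real types live in lib/claims-data.ts.
interface Claim { id: string; status: string }

type Listener = () => void;

function createClaimsStore(initial: Claim[] = []) {
  let claims = initial;            // immutable snapshot, replaced on every write
  const listeners = new Set<Listener>();

  return {
    // React passes these two to useSyncExternalStore; plain code can use them too.
    subscribe(fn: Listener) {
      listeners.add(fn);
      return () => { listeners.delete(fn); };
    },
    getSnapshot: () => claims,
    upsert(claim: Claim) {
      const rest = claims.filter((c) => c.id !== claim.id);
      claims = [...rest, claim];   // new array reference triggers re-render
      listeners.forEach((fn) => fn());
    },
  };
}

// In a component: useSyncExternalStore(store.subscribe, store.getSnapshot)
```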

Project Structure

app/
  (dashboard)/
    page.tsx                 # Claims queue — search, filter, sort all claims
    claims/new/page.tsx      # New claim form — multi-step submission
    claims/[id]/page.tsx     # Claim detail — AI assessment + agent actions
  api/assess-damage/route.ts # POST endpoint — sends photos to Claude, returns structured assessment

components/
  claims-queue.tsx           # Dashboard table with filters, sorting, stats
  claim-detail.tsx           # Assessment view with bounding boxes, editable damage table, action buttons
  new-claim-form.tsx         # Multi-step form (info → photos → processing → done)
  claim-approval-modal.tsx   # Modal for approving with custom amount
  claim-escalate-modal.tsx   # Modal for escalating to senior adjuster

hooks/
  use-claims-assessment.ts   # Orchestrates multi-stage AI pipeline (validate → risk → detect → estimate → route)

lib/
  claims-data.ts             # Type definitions (Claim, AIAssessment, DetectedDamage)
  claims-store.ts            # In-memory store with CRUD operations and subscriptions
  sample-claims.ts           # Pre-loaded demo claims with full AI assessments

AI Integration

The /api/assess-damage endpoint sends each uploaded photo to Claude Vision with a structured prompt. Claude returns JSON containing:

  • Image quality validation — Is it a vehicle? Is the image clear? Adequate lighting?
  • Detected damages — Part affected, damage type (dent, scratch, crack, shatter, missing), severity (minor/moderate/severe), confidence score, and bounding box coordinates
  • Fraud indicators — EXIF metadata checks, duplicate image detection, damage-incident consistency
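The bullets above suggest a response shape like the following sketch. Field names are illustrative assumptions, not the actual definitions in lib/claims-data.ts:

```typescript
// Hypothetical shape of the structured assessment returned per photo.
interface DetectedDamage {
  part: string;                                  // e.g. "front bumper"
  type: "dent" | "scratch" | "crack" | "shatter" | "missing";
  severity: "minor" | "moderate" | "severe";
  confidence: number;                            // 0-100
  boundingBox: { x: number; y: number; width: number; height: number };
}

interface PhotoAssessment {
  quality: { isVehicle: boolean; isClear: boolean; adequateLighting: boolean };
  damages: DetectedDamage[];
  fraudIndicators: string[];
}

// Narrow gate before trusting the model's JSON: malformed output should fail
// loudly here rather than flow into cost estimation.
function isPhotoAssessment(v: unknown): v is PhotoAssessment {
  const o = v as PhotoAssessment;
  return (
    typeof o === "object" && o !== null &&
    typeof o.quality?.isVehicle === "boolean" &&
    Array.isArray(o.damages) &&
    Array.isArray(o.fraudIndicators)
  );
}
```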

The API then enriches Claude's response with cost estimates using a lookup table (base cost by damage type/severity, multiplied by part-specific factors, plus 40% labor). A routing engine uses the confidence score, total estimate value, and fraud flags to determine the claim path.
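A minimal sketch of that enrichment step, with placeholder figures rather than the repo's actual cost tables:

```typescript
// Base cost by damage type/severity, scaled by a part-specific factor,
// plus 40% labor. All numbers below are made up for illustration.
const BASE_COST: Record<string, Record<string, number>> = {
  dent:    { minor: 150, moderate: 400, severe: 900 },
  scratch: { minor: 100, moderate: 250, severe: 600 },
};

const PART_FACTOR: Record<string, number> = {
  "front bumper": 1.0,
  "quarter panel": 1.5,  // harder access, higher cost
};

const LABOR_RATE = 0.4;  // labor billed as 40% of parts cost

function estimateLineItem(type: string, severity: string, part: string) {
  const parts = (BASE_COST[type]?.[severity] ?? 0) * (PART_FACTOR[part] ?? 1.0);
  const labor = parts * LABOR_RATE;
  return { part, parts, labor, total: parts + labor };
}
```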

Pipeline stages (visible in the UI as a progress indicator):

  1. Image Quality Gate
  2. Fraud/Risk Assessment
  3. Damage Detection
  4. Cost Estimation
  5. Routing Decision
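The stages above can be sketched as a short-circuiting async pipeline, a simplified take on what use-claims-assessment.ts orchestrates. The wiring details are assumptions, not the hook's actual code:

```typescript
// Each stage reports progress to the UI; a failed gate stops downstream stages.
type Stage = "quality" | "risk" | "detect" | "estimate" | "route";

interface StageResult { ok: boolean; data?: unknown }

async function runPipeline(
  stages: Array<[Stage, () => Promise<StageResult>]>,
  onProgress: (stage: Stage) => void,
): Promise<Stage | "done"> {
  for (const [name, run] of stages) {
    onProgress(name);             // drives the progress indicator
    const result = await run();
    if (!result.ok) return name;  // e.g. a blurry photo fails the quality gate
  }
  return "done";
}
```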

Running Locally

```bash
# Install dependencies
pnpm install

# Add your Anthropic API key
echo "ANTHROPIC_API_KEY=your-key-here" > .env.local

# Start the dev server
pnpm dev
```

The app runs at http://localhost:3000. Pre-loaded sample claims are available immediately — no database setup required.

To test the full AI flow, navigate to /claims/new, fill in claim details, upload a vehicle damage photo, and watch the assessment pipeline run.

PRD Alignment

This prototype implements the P0 features defined in the ClaimsIQ PRD:

| PRD Feature | Status | Implementation |
| --- | --- | --- |
| Claim Context View | Built | Policy, vehicle, accident details displayed on claim detail page |
| Photo Upload & Validation | Built | Multi-image upload with AI quality checks (blur, lighting, vehicle detection) |
| AI Damage Detection | Built | Claude Vision identifies damage type, location, severity with bounding boxes |
| Confidence Scoring & Routing | Built | 0-100 score per assessment; threshold-based routing to approve/review/escalate |
| Estimate Generation | Built | Line-item estimates with parts, labor, and cost ranges |
| Agent Review & Override | Built | Inline editing of damage table; all overrides tracked |

Not yet implemented (documented in PRD as P1/P2):

  • CCC ONE / Mitchell integration for OEM parts pricing
  • Senior adjuster approval queue with batch actions
  • PDF report export (report generation exists, but currently outputs HTML)
  • Audit & compliance log
  • EXIF-based fraud detection (currently uses Claude's analysis)

Design Decisions

Why Claude Vision instead of YOLO/CNN? For a prototype, Claude Vision provides damage detection, classification, and natural language reasoning in a single API call. A production system would use specialized CV models (YOLO for detection, ResNet for classification) as described in the PRD. The prototype demonstrates the same workflow and data structure that a production pipeline would produce.

Why an in-memory store? The focus is on demonstrating the claims workflow, not database operations. The store uses the same interface a database-backed implementation would, making it straightforward to swap in persistence later.

Why confidence-based routing? The three-tier routing model (AI-led, AI-assisted, AI-flagged) directly addresses the core tension in AI-assisted workflows: speed versus accuracy. High-confidence, low-value claims move fast. Low-confidence or high-value claims get more human attention. The agent always has final say.
