Gear Quality SPC System is a production-oriented backend for gear inspection quality analysis. It takes CSV measurement data, computes SPC results deterministically, compares them with historical runs, generates reports and charts, and validates the final output with a harness layer before you trust it.
What makes it different from a generic "AI workflow" project is the boundary it draws: the numbers live in code, while language-facing layers sit on top. Langflow is supported as a visual entry point, but the system is designed to run without it.
```mermaid
flowchart LR
    A["CSV input / spec overrides"] --> B["Deterministic SPC core<br/>Cp / Cpk / limits / rules"]
    B --> C["LangGraph orchestration<br/>history / alert / artifacts / harness"]
    C --> D["FastAPI service"]
    C --> E["SQLite history"]
    C --> F["Reports & charts<br/>JSON / HTML / SVG / PDF"]
    C --> G["Harness validation"]
    D --> H["Streamlit dashboard"]
    D --> I["Optional Langflow front-end"]
```
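The "limits / rules" box above boils down to plain, testable code. A toy sketch of that idea, fitting control limits on an in-control baseline and flagging points that breach them (illustrative only; the project's actual `core/` code will differ):

```python
# Toy sketch of the deterministic core: control limits plus the classic
# "point beyond the limits" rule. Not the project's actual core/ code.
from statistics import mean, stdev

def control_limits(baseline):
    """Centre line +/- 3 sigma, estimated from an in-control baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return {"cl": mu, "ucl": mu + 3 * sigma, "lcl": mu - 3 * sigma}

def points_beyond_limits(samples, limits):
    """Rule 1: indices of points outside the control limits."""
    return [i for i, x in enumerate(samples)
            if x > limits["ucl"] or x < limits["lcl"]]

baseline = [12, 13, 14, 15, 13, 12]          # historical, in-control readings
limits = control_limits(baseline)
violations = points_beyond_limits([13, 14, 30], limits)  # 30 is an outlier
```

The point is the boundary: this logic never touches a prompt, so its answers cannot drift when the language layers change.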
If you are trying to turn inspection spreadsheets into something closer to an engineering system, this project is the middle ground between a one-off script and a full factory platform. It gives you deterministic SPC computation, traceable history, machine-checkable validation, and deployable service interfaces in one place.
| Area | What is implemented | Why it matters |
|---|---|---|
| Deterministic computation | SPC metrics, control limits, historical deltas, and status grading are computed in Python | Core quality facts stay stable and auditable |
| Historical memory | SQLite-backed run storage and cross-run comparison | The system can say how the current batch changed, not just describe one snapshot |
| Validation layer | Harness checks and golden-case regression tests | Output quality is checked systematically instead of trusted by default |
| Multi-surface delivery | API, HTML reports, SVG charts, dashboard, and optional Langflow entry | The same backend can serve engineering use, reporting, and demos |
| Production shape | Auto-runner, webhook-ready alerts, Docker skeleton, CI test workflow | The project already behaves like something meant to leave the notebook stage |
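As a concrete example of the "deterministic computation" row, the textbook Cp/Cpk formulas look like this in plain Python. This is a sketch under the standard definitions (Cp = (USL − LSL) / 6σ, Cpk = min(USL − μ, μ − LSL) / 3σ); the repository's actual implementation may differ in estimator details:

```python
# Standard Cp/Cpk formulas on a small sample; illustrative, not the
# repository's exact implementation.
from statistics import mean, stdev

def cp_cpk(samples, usl, lsl):
    mu, sigma = mean(samples), stdev(samples)  # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)             # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)  # centred capability
    return cp, cpk

# Values mirror the metric_a column and spec in the example request below.
cp, cpk = cp_cpk([12, 13, 14, 15], usl=20, lsl=0)
```

Because the function is deterministic, the same CSV always yields the same Cpk, which is what makes golden-case regression testing possible.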
- Python 3.11+
- Windows PowerShell for the bundled scripts, or Docker if you prefer container startup
```powershell
git clone https://github.com/alexhuang-dev/gear-quality-spc-system.git
cd gear-quality-spc-system
python -m venv .venv
.\.venv\Scripts\python -m pip install -r requirements.txt
powershell -ExecutionPolicy Bypass -File .\start_production_stack.ps1
```

After startup:
- API docs: http://127.0.0.1:8000/docs
- Ready check: http://127.0.0.1:8000/ready
- Dashboard: http://127.0.0.1:8501
```powershell
cp .env.example .env
docker compose -f docker-compose.production.yml up --build -d
```

Example request:
```powershell
$body = @{
    csv = @"
batch_no,time,part_id,metric_a,metric_b,defect_count
LOT001,2024-07-01 08:00,P001,12,4,0
LOT001,2024-07-01 08:05,P002,13,5,0
LOT001,2024-07-01 08:10,P003,14,6,1
LOT001,2024-07-01 08:15,P004,15,5,0
"@
    specs = @{
        metric_a = @{ USL = 20; LSL = 0 }
        metric_b = @{ USL = 10; LSL = 0 }
    }
} | ConvertTo-Json -Depth 6

Invoke-RestMethod `
    -Method Post `
    -Uri http://127.0.0.1:8000/analyze `
    -ContentType "application/json" `
    -Body $body
```

Example response shape:
```json
{
  "spc_result": {
    "run_id": "20260410090935_8b4fbe1a",
    "batch_numbers": ["LOT001"],
    "overall_min_cpk": 0.882,
    "overall_status": "warning"
  },
  "harness_eval": {
    "passed": true,
    "score": 1.0
  },
  "report_paths": {
    "html_report_path": "data/reports/report_20260410090935_8b4fbe1a.html"
  }
}
```

If the auto-runner is enabled, place CSV files in:
`data/incoming/`

Processed files move to:

`data/processed/`
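The incoming/processed convention amounts to a simple move-after-handling loop. A minimal illustration of the pattern (not the project's actual `services/` auto-runner):

```python
# Minimal illustration of the incoming -> processed folder pattern.
# Not the project's actual services/ auto-runner.
import shutil
from pathlib import Path

def process_once(incoming, processed, handler):
    """Run `handler` on every CSV in `incoming`, then move it to `processed`."""
    incoming, processed = Path(incoming), Path(processed)
    processed.mkdir(parents=True, exist_ok=True)
    handled = []
    for csv_path in sorted(incoming.glob("*.csv")):
        handler(csv_path)  # e.g. POST the file contents to /analyze
        shutil.move(str(csv_path), str(processed / csv_path.name))
        handled.append(csv_path.name)
    return handled
```

Moving the file only after the handler returns keeps a crashed run visible: anything still sitting in `data/incoming/` was never fully processed.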
Langflow is optional. If you want the showcase workflow:
Import the flow file `New Flow - v9.3 api-frontend-prompt-merge-friendly.json` into Langflow and register the custom component from `langflow_integration/gear_spc_component.py`.
```text
api/                   FastAPI service entrypoints
core/                  SPC, history, charts, reports, alerts, harness logic
graph/                 LangGraph orchestration and deterministic fallback
harness/               golden-case helpers and regression support
services/              auto-runner for incoming CSV files
dashboard/             Streamlit dashboard
langflow_integration/  Langflow custom component and setup notes
tests/                 pytest coverage and golden fixtures
data/specs/            default specification configuration
```
- Deterministic code owns SPC facts because those numbers need to stay stable across prompt changes.
- Langflow is kept outside the critical path because it is useful for demos but not a good system boundary.
- Harness validation is built into the project because report generation without consistency checks is not very convincing in an industrial setting.
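The harness idea reduces to a small, mechanical check: compare a fresh result against a stored golden case and fail loudly on drift. A hedged sketch, using field names from the example response above (the real `harness/` code will differ):

```python
# Hedged sketch of a golden-case regression check. Field names mirror the
# example /analyze response; the real harness/ code will differ.
def check_against_golden(result, golden, cpk_tol=1e-6):
    failures = []
    if result["overall_status"] != golden["overall_status"]:
        failures.append("overall_status drifted")
    if abs(result["overall_min_cpk"] - golden["overall_min_cpk"]) > cpk_tol:
        failures.append("overall_min_cpk drifted")
    return {"passed": not failures, "failures": failures}

golden = {"overall_status": "warning", "overall_min_cpk": 0.882}
same  = check_against_golden({"overall_status": "warning", "overall_min_cpk": 0.882}, golden)
drift = check_against_golden({"overall_status": "ok", "overall_min_cpk": 0.882}, golden)
```

Because the SPC core is deterministic, any drift flagged here points at a real code or data change, not sampling noise.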
- Default spec values are placeholders until replaced with real process standards.
- The repository does not include MES, ERP, or PLC integration.
- Alert delivery is webhook-ready, but real enterprise endpoints still need to be configured.
- PDF generation depends on the target host having the right rendering dependencies.
- add more realistic production datasets and regression fixtures
- extend the dashboard from run summaries to operator-facing monitoring
- connect the alert layer to real enterprise notification channels
- expose the same backend through a cleaner LangGraph-native application boundary
Run the current test suite with:
```powershell
.\.venv\Scripts\python -m pytest tests -q
```

Related documents:

- PRODUCTION_DEPLOYMENT.md
- FINAL_ARCHITECTURE.md
- PROJECT_INTRO_BILINGUAL.md
- INTERVIEW_GUIDE.zh-CN.md
- SHOWCASE.md