iip-accessibility-audit-integrity-assistant-chiron
Skills demonstrated: Accessibility auditing · WCAG 2.2 · AI output QA · Ethics review · Plain-language standards · Structured audit documentation · Dignity-based design · Neurodivergent UX safety
Chiron is a structured accessibility audit and ethics integrity framework built to help users identify barriers, audit content for accessibility gaps, and evaluate tone and language for unintended exclusion. The framework treats barriers as environmental rather than personal, with plain-language principles and dignity-based design at its core.
The project includes a fully documented audit cycle: a capability audit, a UX walkthrough with live mode testing, an ethics review via the Jiminy Cricket Protocol, a fulfillment rubric with scored ratings, and a compatibility and token efficiency assessment.
This is not an automated compliance engine. Chiron is a conversation-based clarity assistant that guides structured reflection and reporting using WCAG 2.2, plain language principles, and neurodivergent UX safety practices.
Users encountering accessibility barriers often struggle to name or articulate what isn't working in a way that helps them get support. For content creators, small teams, and individuals without a dedicated accessibility professional, tone and language in digital products can cause harm through unintended exclusion without anyone noticing.
Chiron was built to address three gaps in accessibility review:
- For users: A structured way to describe access barriers in plain language
- For creators: A guided manual audit using the POUR framework (Perceivable, Operable, Understandable, Robust)
- For teams: A reflection tool for evaluating tone, assumptions, and ethical risk in content and design
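The creator-facing walkthrough above can be pictured as a simple checklist keyed to the four POUR principles. This is a minimal sketch of that idea; the principle names come from WCAG 2.2, but the sample questions and the `walkthrough` helper are illustrative assumptions, not Chiron's actual prompts:

```python
# Illustrative POUR-based manual audit checklist.
# Principle names are from WCAG 2.2; the questions are hypothetical
# examples of the kind of reflection a guided audit might prompt.
POUR_CHECKLIST = {
    "Perceivable": [
        "Do all meaningful images have alt text?",
        "Is information conveyed by more than color alone?",
    ],
    "Operable": [
        "Can every interactive element be reached by keyboard?",
        "Are there time limits users cannot extend?",
    ],
    "Understandable": [
        "Is the reading level appropriate for the audience?",
        "Are error messages written in plain language?",
    ],
    "Robust": [
        "Does the markup expose roles and labels to assistive tech?",
    ],
}

def walkthrough(checklist):
    """Yield (principle, question) pairs in audit order."""
    for principle, questions in checklist.items():
        for question in questions:
            yield principle, question

for principle, question in walkthrough(POUR_CHECKLIST):
    print(f"[{principle}] {question}")
```

In a conversation-based audit, each question becomes a prompt for the user to answer in plain language rather than an automated check.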
The Prompt
Chiron v1.0 is a single-session prompt that uses Start/End mode commands to activate Support Mode, Audit Mode, and Ethics Check independently. It was designed for GPT-3.5 and GPT-4 with a token load of roughly 3,700 tokens, within the GPT-3.5 session limit. Mode behavior was tested and confirmed stable across all three modes.
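A token load like the ~3,700 figure above can be sanity-checked without API access using the common rough heuristic of about four characters per token for English text. The heuristic, the budget value, and the variable names below are all assumptions for illustration; an exact count requires the model's own tokenizer:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate via the ~4-characters-per-token heuristic.

    Approximation only: exact counts require the model's tokenizer
    (e.g. tiktoken for OpenAI models).
    """
    return max(1, len(text) // 4)

# Hypothetical session budget; the real limit depends on model and context.
SESSION_BUDGET = 4096

prompt_text = "..."  # the full prompt text would go here
if estimate_tokens(prompt_text) <= SESSION_BUDGET:
    print("within budget")
```

A ~3,700-token prompt sits close enough to a 4,096-token budget that any follow-on conversation has to be budgeted carefully, which is one reason a single-session, one-mode-at-a-time design makes sense.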
All deliverables were produced as part of a structured audit and review cycle:
- 📝 Prompt – main accessibility assistant
- 📘 Professional Summary – use cases and assistant scope
- 📑 Prompt Summary Sheet – structural overview and key logic
- 📊 Metadata Table – fields, values, and examples
- 🕓 Version History – versions, dates, and change notes
- 🗃️ Full Audit Archive (v1.0) – detailed audit & rubric
- 🔎 Capability Audit – scope review and limitations
- ⚖️ Legal & Attribution – authorship, usage notice, and license
- 🧭 UX Walkthrough – testing transcript and observations
- 📌 Key Deliverables – highlights and artifacts
- 🔧 Compatibility Notice – platform notes and limitations
The audit cycle applied a three-part review structure:
- Capability Audit: Verified that all declared features functioned as described: mode activation and exit commands, first-time starter prompts, one-mode-at-a-time flow, alt text guidance, cognitive clarity options, and limitation disclosures. No simulation, memory, or hidden logic was present.
- UX Walkthrough: Live testing was conducted across Support Mode, Audit Mode, and Ethics Check in a GPT-4 environment by a structured tester. Each mode was evaluated for tone, pacing, clarity, and exit behavior. Observations were documented and no changes were required prior to publication.
- Ethics Review (Jiminy Cricket Protocol): Content was reviewed for dignity-based design alignment: no shame framing, no moralizing language, no productivity bias, no simulation of marginalized experience. All three modes passed without flagged issues.
Chiron was scored against WCAG 2.2 and ethical AI standards using a declared-scope rubric:
- ✅ WCAG Understandable (Plain Language): 5/5 — Plain, direct language throughout; chunked formatting; no jargon
- ✅ WCAG Perceivable (Text Alternatives): 4/5 — Alt text guidance provided; cannot detect or verify visual content
- ✅ WCAG Operable (Keyboard & Interface): 3/5 — Usability questions included; cannot test real interfaces
- ✅ WCAG Robust (Assistive Tech Compatibility): 3/5 — Screen reader considerations prompted; cannot simulate or verify
- ✅ Dignity-Based Design: 5/5 — No shame framing, coercion, or moralizing language present
- ✅ Ethical AI Transparency: 5/5 — Single session, no memory, no hidden logic; all limits disclosed
- ✅ Neurodivergent UX Safety: 5/5 — One-mode flow, opt-in guidance, user pacing fully honored
Rating Scale: 1 – Mentioned but unsupported · 2 – Minimally addressed · 3 – Generally acknowledged · 4 – Supported within prompt · 5 – Fully supported within declared limits
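The scored ratings above can be tallied in a few lines. The aggregation below (a total and a simple mean) is my own illustration, not part of Chiron's published rubric, which reports per-criterion scores only:

```python
# Rubric scores from the audit above (criterion -> rating out of 5).
scores = {
    "WCAG Understandable (Plain Language)": 5,
    "WCAG Perceivable (Text Alternatives)": 4,
    "WCAG Operable (Keyboard & Interface)": 3,
    "WCAG Robust (Assistive Tech Compatibility)": 3,
    "Dignity-Based Design": 5,
    "Ethical AI Transparency": 5,
    "Neurodivergent UX Safety": 5,
}

total = sum(scores.values())   # 30 of a possible 35
mean = total / len(scores)     # ~4.29
print(f"{total}/35 total, mean {mean:.2f}/5")
```

Note that the two 3/5 scores reflect declared-scope limits (a conversational assistant cannot test real interfaces or assistive technology), not defects found in testing.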
- WCAG 2.2 — Web Content Accessibility Guidelines (POUR framework)
- Plain Language Principles — U.S. PLAIN and GOV.UK Style Guide
- Jiminy Cricket Protocol — Structured ethics and bias review
- EU AI Act Limited Risk Tier — Ethical AI reference standard
- A11y Project & Disability Visibility Project — Disability-led advocacy framing
- ChatGPT (GPT-3.5 / GPT-4) — Primary testing environment
Chiron is a manual, conversation-based audit tool. It does not:
- Run automated accessibility scans or screen reader simulations
- Detect contrast ratios, spacing, or interface behavior
- Analyze code, video, or audio files
- Provide legal compliance certification or guarantee conformance
It is designed to support clarity, guided reflection, and structured reporting — not to replace professional accessibility audits or legal review.
Chiron is an assistive guidance tool built for portfolio demonstration and practical accessibility support. It does not constitute legal advice or professional consultation. Users remain responsible for ensuring their content meets applicable legal and compliance requirements. All standards used were current at the time of Chiron's creation in June 2025.
All prompt content, documentation, audit materials, and fulfillment ratings reflect original work by A.H. Faria. AI assistants (Byte/ChatGPT, Claude) were used as collaborators in development and testing. A.H. Faria | Information Integrity & AI-Assisted QA