This guide covers documentation standards, editorial workflows, and Claude-powered automation for the Seqera Platform documentation repository.
- Editorial review → Use the `/editorial-review` command → Agents review docs and provide feedback
- Local review → Use `/review <file-or-directory>` → See issues before committing
- Apply fixes → Address issues directly in files based on agent feedback
Important: Editorial reviews are on-demand only. They do NOT run automatically on PR creation or commits. You must explicitly trigger them by:
- Running `/editorial-review` locally in Claude Code
- Commenting `/editorial-review` on a PR
- Manually dispatching the docs editorial review GitHub Actions workflow
Run `/editorial-review` in Claude Code when:
- ✅ Writing new content (catch issues before PR)
- ✅ Iterating on drafts (immediate feedback)
- ✅ Learning the style guidelines
- ✅ Working on single files or small changes
- ✅ You want faster feedback (no CI wait time)
Environment: Your local machine
Cost: Your Claude API subscription
Agents available: voice-tone, terminology (clarity disabled, punctuation not yet implemented)
Example:
```shell
/editorial-review platform-enterprise_docs/getting-started/quickstart.md
```

Comment `/editorial-review` on a PR when:
- ✅ Reviewing multi-file PRs (inline suggestions help)
- ✅ Reviewing contributor PRs (official documented review)
- ✅ You want to batch-apply fixes via GitHub UI
- ✅ Contributors don't have Claude Code installed
- ✅ Final check before merge on substantial changes
Environment: GitHub Actions CI
Cost: Repository API budget
Agents: Skill orchestrates voice-tone and terminology agents
Example:
```shell
# Comment this on any PR
/editorial-review
```
Automatic (smart-gate blocks these):
- ✅ Reviewed <60 minutes ago
- ✅ <10 meaningful lines changed
- ✅ >5 formatting issues (fix with markdownlint first)
Use judgment (don't trigger manually):
- ❌ Single typo fixes
- ❌ PRs with only code changes (no docs)
- ❌ Urgent hotfixes
- ❌ File renames or moves only
- ❌ Dependency updates
- ❌ Whitespace-only changes
How it works: Comment `/editorial-review` and smart-gate automatically decides whether it's worth running. You don't need to manually check timing or size.
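The gate conditions above can be pictured as a short decision function. This is an illustrative sketch only; the actual logic lives in the workflow, and the function and parameter names below are hypothetical:

```shell
# Hypothetical sketch of the smart-gate decision (not the real implementation)
should_run_review() {
  local minutes_since_last_review=$1
  local meaningful_lines_changed=$2
  local formatting_issues=$3

  # Skip if reviewed less than 60 minutes ago
  if [ "$minutes_since_last_review" -lt 60 ]; then
    echo "skip: reviewed recently"; return 1
  fi
  # Skip if fewer than 10 meaningful lines changed
  if [ "$meaningful_lines_changed" -lt 10 ]; then
    echo "skip: change too small"; return 1
  fi
  # Skip if more than 5 formatting issues (run markdownlint first)
  if [ "$formatting_issues" -gt 5 ]; then
    echo "skip: fix formatting first"; return 1
  fi
  echo "run review"; return 0
}

should_run_review 120 25 0   # prints "run review"
```

The point is that all three gates must pass; any single failing condition short-circuits the review.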
Approximate costs per review (using Sonnet 4.5):
- Small PR (1-3 files, <500 lines): 15K tokens ($0.30-0.60)
- Medium PR (5-10 files, ~1000 lines): 40K tokens ($0.90-1.50)
- Large PR (15+ files, 2000+ lines): 80K tokens ($1.80-3.00)
Time savings: Estimated 30-40 minutes per PR (agents identify issues in 2-4 minutes vs manual review)
Estimated annual cost for this repo: $380-760/year (based on ~1,520 PRs/year at current rate, using agents on 25-50% of PRs at ~$1/review average)
Environmental impact: ~0.15 kWh per review (~57 kg CO₂/year at 380 reviews, ~114 kg CO₂/year at 760 reviews). Equivalent to ~158-316 hours of video streaming annually. Using manual triggers and following the "When NOT to re-run" guidelines helps minimize unnecessary compute.
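The annual figure is easy to reproduce from the stated assumptions (~1,520 PRs/year, reviews on 25-50% of PRs, ~$1 per review on average). A back-of-envelope sketch:

```shell
prs_per_year=1520
cost_per_review=1   # dollars, average (assumption from the estimate above)

# Lower bound: reviews on 25% of PRs; upper bound: 50%
low=$(awk -v p="$prs_per_year" -v c="$cost_per_review" 'BEGIN { printf "%.0f", p * 0.25 * c }')
high=$(awk -v p="$prs_per_year" -v c="$cost_per_review" 'BEGIN { printf "%.0f", p * 0.50 * c }')

echo "Estimated cost: \$${low}-\$${high}/year"   # prints "Estimated cost: $380-$760/year"
```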
This repository uses two main Claude-powered workflows:
Purpose: On-demand quality checks for documentation content
Architecture:
- Skill: `/editorial-review` (orchestrates all agents)
- Workflow: `.github/workflows/docs-review.yml` (invokes skill)
- Smart-gate: Automatic filtering prevents wasteful runs
- Scripts: `classify-pr-type.sh`, `post-inline-suggestions.sh`
- Triggers: Manual only - `/editorial-review` comment or workflow dispatch
Purpose: Generate OpenAPI overlay files for API documentation updates
Components:
- Skill: `/openapi-overlay-generator`
- Workflow: `.github/workflows/generate-openapi-overlays.yml`
- Scripts: `analyze_comparison.py` (change analysis), `validate_overlay.py` (structure validation), `check_consistency.py` (standards compliance), `update_sidebar.py` (sidebar updates)
- Output: YAML overlay files following the OpenAPI overlay specification
- Triggers: Repository dispatch from the Platform repository, manual workflow dispatch
The `/editorial-review` skill orchestrates specialized agents to review your documentation. The skill ensures consistent behavior whether run locally or in CI.
In Claude Code (local):
- Run `/editorial-review` in your terminal
- Skill invokes appropriate agents (voice-tone, terminology, clarity)
- Review findings in the conversation
- Apply fixes to files manually
In GitHub PR (CI):
- Comment `/editorial-review` on any PR
- Workflow runs smart-gate checks (prevents waste):
  - Skips if reviewed <60 min ago
  - Skips if <10 lines changed
  - Skips if formatting issues found (fix those first)
- If gates pass: Invokes the `/editorial-review` skill
- Skill orchestrates agents and posts inline suggestions
Local review (Claude Code):
```shell
# Review all changed files in current branch
/editorial-review

# Review specific file
/editorial-review platform-enterprise_docs/getting-started/quickstart.md

# Review entire directory
/editorial-review platform-cloud/docs/pipelines/

# Specify which agents to run
/editorial-review --agents voice-tone,terminology
```

PR review (GitHub):

```shell
# Comment this on any PR to trigger review
/editorial-review
```
When gates pass, the workflow will:
- Invoke the `/editorial-review` skill
- Skill runs agents: voice-tone, terminology (clarity disabled, punctuation not yet implemented)
- Post up to 60 inline suggestions
- Provide a downloadable artifact if >60 suggestions found
Smart-gate blocks wasteful runs automatically - you don't need to manually check timing/size.
Available agents:
- voice-tone: Second person, active voice, present tense, confidence (runs in CI)
- terminology: Product names, feature names, formatting conventions (runs in CI)
- punctuation: List punctuation, Oxford commas, quotation marks, dashes (planned, not yet implemented)
- clarity: Sentence length, jargon, complexity (local only, disabled in CI)
The agents provide a structured report with:
- Priority categorization: Critical, important, minor issues
- File and line references: Easy navigation to issues
- Specific suggestions: Clear guidance on fixes
- Context: Why each issue matters
From Claude Code:
- Open the files with issues
- Navigate to the specific line numbers
- Apply the suggested changes
- Re-run `/editorial-review` to verify fixes
From PR suggestions:
- Click "Commit suggestion" on individual inline comments
- Or select multiple suggestions and click "Commit suggestions"
- Comment `/editorial-review` again to verify fixes
If 60+ suggestions found:
- Go to the workflow run (link in PR comment)
- Download the `all-editorial-suggestions` artifact
- Review the full list in `all-suggestions-full.txt`
- Apply remaining suggestions manually
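If you prefer the command line, the artifact can also be fetched with the GitHub CLI (`gh run download` fetches a named artifact from a workflow run). A sketch, assuming `gh` is installed and authenticated; the run ID below is a placeholder you replace with the ID from the workflow-run link:

```shell
# Placeholder run ID: substitute the ID from the workflow-run link in the PR
run_id=123456789

# gh run download fetches a named artifact from a workflow run
download_cmd="gh run download $run_id -n all-editorial-suggestions"

# Printed here rather than executed, since it needs gh authentication
echo "$download_cmd"
```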
Run editorial reviews locally before opening a PR:
```shell
# Review specific file
/review platform-enterprise_docs/getting-started/quickstart.md

# Review entire directory
/review platform-cloud/docs/pipelines/

# Quick pre-commit check (voice-tone and terminology only)
/review --profile=quick

# Review new content with structure checks
/review new-page.md --profile=new-content
```

Checks for:
- Second person usage ("you" vs "the user")
- Active voice (not passive)
- Present tense (not future)
- No hedging language ("may", "might", "could")
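The hedging-language check can be cheaply pre-screened with plain `grep` before involving the agent. A sketch only; the word list is illustrative and the agent's own rules are authoritative:

```shell
# Flag lines containing common hedging words, with line numbers
find_hedging() {
  grep -nE '\b(may|might|could)\b' "$@"
}

# Example: find_hedging docs/my-page.md
```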
Checks for consistent product names, feature terminology, and formatting conventions.
For detailed terminology rules, see: .claude/agents/terminology.md
Checks for:
- Sentence length (flag >30 words)
- Undefined jargon
- Complex constructions
- Missing prerequisites
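The sentence-length check can be roughly approximated locally with standard tools. A heuristic sketch that assumes sentences end in `.`, `!`, or `?` (the clarity agent's analysis is more nuanced):

```shell
# Print any sentence longer than 30 words, mirroring the clarity threshold
long_sentences() {
  tr '\n' ' ' < "$1" | tr '.!?' '\n' \
    | awk 'NF > 30 { print NF " words: " substr($0, 1, 60) "..." }'
}
```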
Checks for:
- Oxford commas
- List punctuation consistency
- Quotation marks
- Dash usage
- API keys: Stored in GitHub Secrets and never exposed in logs or output
- Scope: Reviews only read documentation files (no code execution or system access)
- Access control: Manual triggers prevent automated abuse
- Transparency: All review output is publicly visible in PRs for audit
- Rate limiting: Use discretion when triggering reviews; contact maintainers if you need frequent reviews
Before using LLM-based review, consider running fast, local checks first:
Pre-filter with static analysis:
```shell
# Run markdownlint for formatting issues
npx markdownlint-cli2 "**/*.md"

# Run Vale for style and terminology (if configured)
vale platform-enterprise_docs/

# Use grep for simple pattern matching
grep -r "Tower" --include="*.md" platform-enterprise_docs/
```

Benefits:
- ⚡ Instant feedback (no API wait time)
- 💰 Zero cost (runs locally)
- 🌱 Minimal environmental impact
- 🎯 Catches simple issues without LLM
When to escalate to LLM review:
- After static checks pass (handle simple issues first)
- For semantic issues (voice, tone, clarity)
- For complex terminology decisions
- For content that requires human-like judgment
This layered approach reduces LLM usage by 50-60% while maintaining quality.
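The layered approach can be wrapped in a small helper that stops at the first failing layer. A sketch only, assuming the static tools shown above: `pre_review` is not a repo script, and the `LINT_CMD`/`STYLE_CMD` overrides are illustrative conveniences for dry runs:

```shell
# Run cheap static checks first; suggest the LLM review only if they pass.
# LINT_CMD and STYLE_CMD default to the real tools but can be overridden.
pre_review() {
  local target=$1
  ${LINT_CMD:-npx markdownlint-cli2} "$target" || { echo "fix formatting first"; return 1; }
  ${STYLE_CMD:-vale} "$target" || { echo "fix style first"; return 1; }
  echo "static checks passed: consider /editorial-review $target"
}
```

Stopping early keeps the cheap checks in front of the expensive one, which is where the 50-60% reduction comes from.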
All directories support editorial review via the `/editorial-review` command:
| Directory | Description | File count |
|---|---|---|
| `platform-enterprise_docs/` | Main enterprise docs | ~129 files |
| `platform-cloud/docs/` | Cloud platform docs | ~114 files |
| `platform-enterprise_versioned_docs/` | Versioned docs | Variable |
| `platform-api-docs/docs/` | API documentation | ~218 files |
| `fusion_docs/` | Fusion docs | ~24 files |
| `multiqc_docs/` | MultiQC docs | ~212 files |
| `wave_docs/` | Wave docs | ~43 files |
| `changelog/` | Release notes | ~232 files |
Q: Too many suggestions?
- Focus on critical and important issues first
- Apply fixes incrementally
- Re-run `/editorial-review` to verify changes
- Consider splitting large reviews into smaller batches
Q: Agents flagged something incorrectly?
- Context matters - some terminology is acceptable in legacy docs
- Agents provide suggestions, not requirements
- Report false positives to improve agent rules
Q: How do I review just specific files?
- In Claude Code: `/editorial-review <file-path>` or `/editorial-review <directory-path>`
- In PR comments: `/editorial-review` reviews all changed files (cannot specify a subset)
Q: Does `/editorial-review` in PR comments run on every comment?
- No, it only runs when you explicitly comment `/editorial-review`
- No automatic triggers on PR creation or updates
- Completely manual and on-demand
- Agent Implementation: See `.claude/README.md` for technical details
- Agent Definitions: See `.claude/agents/*.md` for agent prompts
- Scripts: See `.github/scripts/` for workflow scripts
- Skills: See `.claude/skills/` for available skills