---
title: Autoresearch
description: Run an unattended eval-improve loop that iteratively optimizes agent skills
sidebar:
  order: 5
---

import { Image } from 'astro:assets';
import trajectoryChart from '../../../../assets/screenshots/autoresearch-trajectory.png';

Autoresearch is an unattended optimization loop that **automatically improves your agent skills** through repeated eval cycles. It runs the same evaluate → analyze → improve loop described in the [Skill Improvement Workflow](/docs/guides/skill-improvement-workflow/), but does it hands-free — no human review between cycles.

<Image src={trajectoryChart} alt="Autoresearch trajectory chart showing score improvement from 0.48 to 0.90 over 9 cycles" />

The chart above shows a real optimization run: an incident severity classifier starts at 48% accuracy and reaches 90% after 9 automated cycles — each cycle taking seconds and costing fractions of a cent.

## How It Works

```
 ┌───────────┐
 │  1. EVAL  │ ◄──────────────────────────────────┐
 └─────┬─────┘                                    │
       ▼                                          │
 ┌───────────┐                                    │
 │ 2. ANALYZE│  dispatcher → analyzer subagent    │
 └─────┬─────┘                                    │
       ▼                                          │
 ┌───────────┐  wins > losses → KEEP              │
 │ 3. DECIDE │  else → DROP                       │
 └─────┬─────┘                                    │
       ▼                                          │
 ┌───────────┐                                    │
 │ 4. MUTATE │  dispatcher → mutator subagent ────┘
 └───────────┘

 Stops after 3 consecutive no-improvement cycles
 or 10 total cycles (configurable).
```

Each cycle:
1. **Runs `agentv eval`** against the current version of the artifact
2. **Analyzes** failures via the analyzer subagent
3. **Decides** keep or discard using `agentv compare --json` (automated — no human needed)
4. **Mutates** the artifact to address failing assertions, then loops back

The system uses a **hill-climbing ratchet**: each mutation builds on the best-scoring version, never a failed candidate. Improvements compound; regressions get discarded.
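
The ratchet can be sketched in a few lines of Python. This is an illustrative model only, not the actual autoresearch implementation: `evaluate` and `mutate` stand in for the eval run and the mutator subagent, and the default limits mirror the ones described above.

```python
def hill_climb(evaluate, mutate, artifact, max_cycles=10, patience=3):
    """Illustrative hill-climbing ratchet (assumed model, not the real CLI)."""
    best_artifact = artifact
    best_score = evaluate(artifact)           # cycle 1: score the initial baseline
    drops = 0
    for _cycle in range(2, max_cycles + 1):
        candidate = mutate(best_artifact)     # always mutate the best version so far
        score = evaluate(candidate)
        if score > best_score:                # KEEP: promote and reset the counter
            best_artifact, best_score = candidate, score
            drops = 0
        else:                                 # DROP: discard the candidate
            drops += 1
            if drops >= patience:             # consecutive drops => converged
                break
    return best_artifact, best_score
```

The key property is in the `mutate(best_artifact)` call: a failed candidate is never mutated further, so regressions cannot compound.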

## What Gets Optimized

Any file or directory artifact: a SKILL.md, prompt template, agent config, system prompt, or a directory of related files (e.g., a skill with `references/` and `agents/` subdirectories). The artifact mode is auto-detected — pass a file path for single-file optimization, or a directory path for multi-file optimization. The mutator rewrites artifacts in place while the eval stays fixed — same test cases, same assertions, different artifact versions.

## Prerequisites

- An eval file (EVAL.yaml or evals.json) that covers the behavior you care about
- An artifact that is a file or directory within a git repository (autoresearch uses git for versioning)
- At least one manual eval cycle run first to validate your test cases

:::tip
Autoresearch is only as good as your eval. If your assertions don't catch the failures you care about, the optimizer won't fix them. Start with the [manual improvement loop](/docs/guides/skill-improvement-workflow/) to build confidence in your eval quality before going unattended.
:::

## Triggering Autoresearch

Autoresearch runs through the `agentv-bench` Claude Code skill. Trigger it with natural language:

```
"Run autoresearch on my classifier prompt"
"Optimize this skill unattended for 5 cycles"
"Run autoresearch on examples/features/autoresearch/EVAL.yaml"
```

No CLI flags or YAML schema changes needed — the skill handles everything.

## Output Structure

Each autoresearch session creates a self-contained experiment directory:

```
.agentv/results/runs/autoresearch-<name>/
├── _autoresearch/
│   ├── iterations.jsonl     # Per-cycle data (score, decision, mutation)
│   └── trajectory.html      # Live-updating Chart.js visualization
├── 2026-04-15T10-30-00/     # Cycle 1 run artifacts
│   ├── index.jsonl
│   ├── grading.json
│   └── timing.json
├── 2026-04-15T10-35-00/     # Cycle 2 run artifacts
│   └── ...
└── ...
```

Autoresearch uses **git-based versioning** instead of backup files. Each successful mutation is committed (`git add && git commit`), and failed mutations are reverted (`git checkout`). The optimized artifact lives in the working tree and the latest commit — no separate `best.md` to copy.

- **`_autoresearch/trajectory.html`** — Open in a browser to see the score trajectory, per-assertion breakdown, and cumulative cost. Auto-refreshes during the loop, becomes static on completion.
- **`_autoresearch/iterations.jsonl`** — Machine-readable log of every cycle for downstream analysis.

Review the mutation history with `git log` after the run completes.

## The Keep/Drop Decision

After each eval cycle, autoresearch runs `agentv compare` between the current candidate and the best baseline:

```bash
agentv compare <baseline>/index.jsonl <candidate>/index.jsonl --json
```

The decision rule:

| Condition | Decision | Outcome |
|-----------|----------|---------|
| `wins > losses` | **KEEP** | Promote to new baseline, reset convergence counter |
| `wins <= losses` | **DROP** | Revert to best version, increment convergence counter |
| `mean_delta == 0`, simpler artifact | **KEEP** | Simpler is better at equal performance |

Three consecutive DROPs trigger convergence — the optimizer stops because it can't find improvements.
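
The table above reduces to a short function over the parsed compare output. A sketch, assuming the JSON exposes `wins`, `losses`, and `mean_delta` keys matching the table (the real schema may differ), with artifact length as a stand-in for "simpler":

```python
def decide(cmp: dict, candidate_len: int, baseline_len: int) -> str:
    """Assumed keep/drop rule over parsed `agentv compare --json` output."""
    if cmp["wins"] > cmp["losses"]:
        return "KEEP"                 # strictly better: promote to baseline
    if cmp["mean_delta"] == 0 and candidate_len < baseline_len:
        return "KEEP"                 # tie, but the simpler artifact wins
    return "DROP"                     # revert to the best version
```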

## Example: Incident Severity Classifier

Here's a real scenario showing autoresearch in action. We start with a minimal classifier prompt:

```markdown
# classifier-prompt.md (initial version)
Classify the incident into P0, P1, P2, or P3.
Give your answer as JSON with severity and reasoning fields.
```

And an eval with 7 test cases covering edge cases — payment failures, SSL cert expiry, gradual memory leaks:

```yaml
# EVAL.yaml (stays fixed — only the prompt changes)
tests:
  - id: total-outage
    assertions:
      - type: contains
        value: '"P0"'
      - type: is-json
      - "Reasoning mentions complete service outage"
  - id: payment-failures
    assertions:
      - type: contains
        value: '"P1"'
      - type: is-json
      - "Reasoning weighs revenue impact despite intermittent nature"
  # ... 5 more test cases
```

Running autoresearch produces this trajectory:

```
Cycle  Score  Decision  Mutation
─────  ─────  ────────  ──────────────────────────────────────
  1    0.48   KEEP      initial baseline — no mutations applied
  2    0.62   KEEP      added explicit JSON format, defined P0-P3 levels
  3    0.52   DROP      added verbose rules — over-constrained reasoning
  4    0.71   KEEP      added revenue-impact heuristic for P1
  5    0.81   KEEP      enforced raw JSON output — removed code fences
  6    0.86   KEEP      added time-urgency rule for SSL/cert cases
  7    0.90   KEEP      improved reasoning template — cite impact metrics
  8    0.86   DROP      attempted decision tree merge — regressed
  9    0.90   DROP      minor wording cleanup — no meaningful change
       ↳ 3 consecutive drops → CONVERGED
```

**Result:** 0.48 → 0.90 (+42 points) in 9 cycles, $0.03 total cost. The optimized prompt is in the working tree (and the latest git commit).

Key observations:
- **Cycle 3** shows a failed mutation (verbose rules hurt reasoning) — the ratchet discarded it and continued from the cycle 2 version
- **Cycles 8–9** show convergence — the optimizer couldn't improve further and stopped automatically
- **Per-assertion tracking** reveals which aspects improved: classification accuracy reached 100% by cycle 6, while JSON format compliance and reasoning quality improved more gradually

## Convergence

Autoresearch stops when either condition is met:

- **3 consecutive no-improvement cycles** (configurable) — the optimizer has converged
- **10 total cycles** (configurable) — hard limit to bound cost

You can override both limits when triggering autoresearch:

```
"Run autoresearch with max 20 cycles and convergence threshold of 5"
```

## Best Practices

**Start manual, then automate.** Run 2–3 manual eval cycles to validate that your test cases catch real issues. Once you trust the eval, switch to autoresearch.

**Same-model pairings work best.** The meta-agent running autoresearch should match the model used by the task agent (e.g., Claude optimizing a Claude agent). Same-model pairings produce better mutations because the optimizer has implicit knowledge of how the target model interprets instructions.

**Watch the per-assertion chart.** If one assertion is stuck at 0% while others improve, the eval may be too strict or testing something the prompt can't control. Consider adjusting the assertion.

**Review the optimized artifact.** Autoresearch improves scores, but always review the changes (`git diff <initial_sha>`) before adopting them. The optimizer may have found a valid but unexpected approach.

**Keep artifact directories focused.** For directory mode, keep artifacts to 5–15 files. The mutator works best when it can reason about the full scope without reading dozens of files. Split large skill directories if needed.

## Relationship to Manual Workflow

| Aspect | Manual Loop | Autoresearch |
|--------|-------------|--------------|
| Human checkpoints | Every iteration | None (runs unattended) |
| Keep/discard | You decide | Automated via `agentv compare` |
| Mutation | You edit the skill | Mutator subagent rewrites |
| Max iterations | Unbounded | 10 cycles or convergence |
| Best for | Building eval intuition | Scaling optimization |
| Trajectory chart | Not included | Auto-generated with live refresh |

Start with the [manual loop](/docs/guides/skill-improvement-workflow/) to understand the workflow, then use autoresearch to scale it.