Exploration Dashboard Synthesizer is an OpenClaw and Codex-compatible skill that transforms unstructured exploration material into a structured Exploration Dashboard.
It is designed for notes, conversations, brainstorming fragments, research discussions, and AI collaboration traces where the goal is still emerging. The skill turns raw material into Big Ideas, Sessions, key points, next suggestions, assumptions, and unresolved questions.
The synthesis principle is simple: preserve structure faithfully. The Dashboard is a cognitive coordination artifact, not a narrative summary.
- Extracts major themes and clusters them into Big Ideas.
- Converts concrete fragments into Sessions.
- Classifies Sessions as Exploration, Knowledge, or Proposed.
- Separates supported insights from assumptions and unresolved questions.
- Produces a Markdown Dashboard that can be reviewed, edited, and evolved.
- Keeps unsupported ideas out of conclusions and places ambiguous material into Proposed Sessions or Unresolved Questions.
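As a rough illustration of the classification principle only (the real skill classifies with an LLM prompt in prompt.md, not keyword rules), a toy sketch of sorting fragments into Session types might look like this; all cue words below are hypothetical:

```python
# Illustrative sketch only: the actual skill uses an LLM prompt, not
# surface keyword matching. Cue lists here are hypothetical examples.

def classify_fragment(text: str) -> str:
    """Assign a raw fragment to a Session type using crude surface cues."""
    lowered = text.lower()
    # Future-oriented language suggests a Proposed Session.
    if any(cue in lowered for cue in ("next month", "we should", "let us", "plan to")):
        return "Proposed"
    # References to existing work or artifacts suggest a Knowledge Session.
    if any(cue in lowered for cue in ("we tried", "current", "existing", "baseline")):
        return "Knowledge"
    # Everything else defaults to an Exploration Session.
    return "Exploration"
```

Ambiguous fragments that match no cue fall through to Exploration here; the skill itself instead routes such material into Proposed Sessions or Unresolved Questions.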
Use this skill for:
- meeting notes from exploratory discussions
- brainstorming fragments
- research or strategy notes
- long AI conversation transcripts
- workshop notes
- early-stage product or methodology exploration
- scattered project knowledge that needs a first Dashboard structure
Do not use it as a replacement for:
- a normal task tracker when the work is already well-defined
- a final report or polished narrative article
- a formal requirements document
- an architecture decision record
- a bug tracker or implementation backlog
- Big Idea: a long-lived exploration direction that can contain multiple Sessions.
- Session: a bounded exploration unit associated with a Big Idea.
- Exploration Session: something that already happened.
- Knowledge Session: an existing artifact, method, metric, concept, or reusable module.
- Proposed Session: a suggested future exploration topic.
- Coverage: an estimated progress signal. Use `Unknown` when unsupported by the input.
See docs/core-concepts.md for the full reference.
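The concept hierarchy above can be sketched as a small data model. The class and field names here are illustrative only, not part of the skill's API:

```python
from dataclasses import dataclass, field
from enum import Enum

class SessionType(Enum):
    EXPLORATION = "Exploration"  # something that already happened
    KNOWLEDGE = "Knowledge"      # an existing artifact, method, metric, or concept
    PROPOSED = "Proposed"        # a suggested future exploration topic

@dataclass
class Session:
    session_id: str              # e.g. "B1-K1"
    session_type: SessionType
    topic: str
    key_points: list[str] = field(default_factory=list)

@dataclass
class BigIdea:
    name: str
    coverage: str = "Unknown"    # use "Unknown" when unsupported by the input
    sessions: list[Session] = field(default_factory=list)
```

A Big Idea is long-lived and accumulates Sessions over time, which is why it holds a list rather than a single Session.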
```shell
git clone https://github.com/buccaneermethodology/ExplorationDashboardSynthesizer.git
cd ExplorationDashboardSynthesizer
```

Install the Codex-compatible skill path directly from GitHub:
```shell
python3 ~/.codex/skills/.system/skill-installer/scripts/install-skill-from-github.py \
  --repo buccaneermethodology/ExplorationDashboardSynthesizer \
  --path skill/exploration-dashboard-synthesizer
```

Restart Codex after installation, then invoke it with `$exploration-dashboard-synthesizer`.
Import skill.json into your OpenClaw environment.
Example CLI usage:

```shell
openclaw run exploration-dashboard-synthesizer \
  --input "examples/meeting-notes.input.txt" \
  --output dashboard.md
```

If you are not using OpenClaw, copy the prompt from prompt.md, paste it into a capable LLM, and replace {{input_text}} with your material.
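The manual route amounts to a plain template substitution. A minimal sketch, assuming prompt.md uses a literal {{input_text}} placeholder as described above:

```python
from pathlib import Path

def fill_prompt(prompt_path: str, input_text: str) -> str:
    """Load the prompt template and substitute the raw exploration material."""
    template = Path(prompt_path).read_text(encoding="utf-8")
    # The placeholder is replaced verbatim; no escaping is applied.
    return template.replace("{{input_text}}", input_text)
```

Paste the returned string into any capable LLM to get the same Dashboard output shape.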
Treat the generated Dashboard as a working artifact. Review unsupported assumptions, rename Big Ideas if needed, and adjust Session boundaries before using it for coordination.
| Input | Expected Dashboard |
|---|---|
| meeting-notes.input.txt | meeting-notes.dashboard.md |
| ai-conversation.input.txt | ai-conversation.dashboard.md |
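To sanity-check a run against the expected outputs above, a unified diff is usually enough. A sketch using only the standard library:

```python
import difflib
from pathlib import Path

def dashboard_diff(generated: str, expected: str) -> str:
    """Return a unified diff between a generated and an expected Dashboard file."""
    gen_lines = Path(generated).read_text(encoding="utf-8").splitlines(keepends=True)
    exp_lines = Path(expected).read_text(encoding="utf-8").splitlines(keepends=True)
    return "".join(difflib.unified_diff(gen_lines, exp_lines,
                                        fromfile=generated, tofile=expected))
```

An empty string means the files match. Since Dashboards are working artifacts, small differences from the expected files are normal rather than failures.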
Minimal input:

```text
Meeting notes:
- Alice: We tried using knowledge graphs for fault diagnosis, but accuracy is only 70%.
- Bob: I think we need higher quality knowledge, maybe extract rules from expert experience.
- Carol: Let us form a task force and start a pilot next month, target accuracy above 85%.
```
Expected output shape:

```markdown
# Exploration Dashboard

## Big Idea: Fault Diagnosis with Knowledge Graph

- Big Idea Length: Unknown
- Coverage: Unknown
- Key Points:
  - The current knowledge graph diagnosis approach has unstable accuracy around 70%.
- Next Suggestions:
  - Explore methods for incorporating expert rules into the diagnosis knowledge graph.

### Sessions

| Session ID | Session Type | Topic | Scope | Purpose | Length (minutes) | Key Points |
|------------|--------------|-------|-------|---------|-------------------|------------|
| B1-K1 | Knowledge | Current diagnosis status | Knowledge graph based fault diagnosis | Capture the current baseline and pain point | - | Accuracy is around 70% and unstable. |
```

| File | Purpose |
|---|---|
| skill.json | OpenClaw skill package metadata and prompt template. |
| skill/exploration-dashboard-synthesizer/SKILL.md | Codex-compatible skill entry point for installation from GitHub. |
| prompt.md | Human-readable source for the prompt template. |
| docs/core-concepts.md | Concept definitions and output structure. |
| docs/bm-dashboard.md | BM Dashboard methodology article. |
| examples/ | Sample inputs and expected Dashboard outputs. |
| scripts/validate_repo.py | Repository validation script used by CI. |
| scripts/sync_prompt.py | Syncs prompt.md into skill.json. |
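The sync step listed above (scripts/sync_prompt.py) conceptually copies prompt.md into a field of skill.json. A hedged sketch of that idea, not the actual script; the JSON field name here is an assumption, so check skill.json for the real key:

```python
import json
from pathlib import Path

def sync_prompt(prompt_path: str, skill_json_path: str, field: str = "prompt") -> None:
    """Copy the prompt template into the skill package metadata.

    The target field name is hypothetical; the real script may differ.
    """
    skill = json.loads(Path(skill_json_path).read_text(encoding="utf-8"))
    skill[field] = Path(prompt_path).read_text(encoding="utf-8")
    Path(skill_json_path).write_text(json.dumps(skill, indent=2, ensure_ascii=False),
                                     encoding="utf-8")
```

Keeping a single human-readable source (prompt.md) and syncing it mechanically avoids the two copies drifting apart, which is exactly what the validator checks.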
This repository creates an initial Exploration Dashboard from unstructured material.
For ongoing project-state maintenance after a Dashboard exists, use dashboard-governance-skill. The two repositories are complementary:
| Repository | Role |
|---|---|
| ExplorationDashboardSynthesizer | Synthesizes the initial Dashboard from raw material. |
| dashboard-governance-skill | Maintains Big Ideas, Sessions, Decisions, status, and emergent next steps during ongoing work. |
Run the repository validator:

```shell
python3 scripts/validate_repo.py
```

The validator checks:

- required repository files
- `skill.json` structure
- `prompt.md` and `skill.json` prompt synchronization
- Codex-compatible `SKILL.md` path and metadata
- local Markdown links
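The local-link check, for instance, can be approximated with a regex over Markdown link targets. This is an illustrative sketch, not the validator's actual implementation:

```python
import re
from pathlib import Path

# Capture the path part of [text](path), stopping at a ")" or "#anchor".
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)#]+)[^)]*\)")

def broken_local_links(markdown: str, base_dir: str = ".") -> list[str]:
    """Return local link targets that do not exist relative to base_dir."""
    targets = LINK_RE.findall(markdown)
    local = [t for t in targets if not t.startswith(("http://", "https://", "mailto:"))]
    return [t for t in local if not (Path(base_dir) / t).exists()]
```

An empty list means every local link resolves; external URLs are deliberately skipped.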
If you edit prompt.md, sync it into skill.json and re-run the validator:

```shell
python3 scripts/sync_prompt.py
python3 scripts/validate_repo.py
```

If you have Codex's skill creator validator available, validate the installable skill path too:

```shell
python3 ~/.codex/skills/.system/skill-creator/scripts/quick_validate.py skill/exploration-dashboard-synthesizer
```

Issues and pull requests are welcome. Useful contributions include:
- clearer prompt wording
- additional examples
- output-format refinements
- portability notes for other agent environments
- validation improvements
See CONTRIBUTING.md before opening a pull request.
MIT. See LICENSE.
- Maintainer: Tai Xiaomei buccaneermethodology@gmail.com
- GitHub: @buccaneermethodology