| name | memory-graph |
|---|---|
| description | A connected knowledge graph that replaces full conversation re-reading with instant context recall. Reduces token usage by 70–90% across Cowork, Code, and Chat by storing facts as graph nodes/edges on disk and loading only the relevant subgraph per task — never the full history. ALWAYS use this skill at the START of any multi-step task to recall what is already known. ALWAYS use this skill at the END of any task to store what was learned. Trigger on: any task where context from a previous session is needed, "remember this", "what did we do last time", "don't re-explore", "save this", "recall", or any task where you would otherwise ask the user something you've already been told. |
Every time Claude works on a task, it reads the full conversation history — even when 99% of it is irrelevant. A GitHub bio update that took 2 minutes spends 40k tokens re-reading prior sessions. This skill eliminates that.
Instead of reading the conversation, Claude reads a compact graph:
```
conversation history → 50,000 tokens (slow, wasteful)
memory/graph.json    →    400 tokens (instant, targeted)
```
The graph stores facts as nodes (entities) connected by edges (relationships). You only load the nodes relevant to the current task.
```
memory/
  graph.json         ← the knowledge graph (source of truth)
  hot-cache.md       ← top 30 nodes in human-readable form (= CLAUDE.md)
  workflows/
    github.md        ← GitHub task patterns
    browser.md       ← browser automation patterns
    documents.md     ← file creation patterns
  tools-manifest.md  ← exact ToolSearch queries per tool category
```
The working directory is wherever CLAUDE.md lives (usually the session root).
At the start of ANY task, before using any tools:
1. Read memory/hot-cache.md (or CLAUDE.md) → ~150 tokens, covers 90% of cases
2. If the answer isn't there → read memory/graph.json and filter by task type
3. If still not found → ask the user ONCE, then store the answer
Never explore what you already know. If the graph says `github: dixitvision`, don't navigate to GitHub to check. Trust the graph.
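The three-step recall chain above can be sketched as a small helper. The node id and field names here are illustrative only; use the ids actually present in your graph.json:

```python
import json
from pathlib import Path

def recall(key, field=None):
    """Look up a fact: hot cache first, then a single node from the graph."""
    # Step 1: the hot cache covers ~90% of lookups.
    cache = Path('memory/hot-cache.md')
    if cache.exists() and key in cache.read_text():
        return 'hot-cache'  # answer is already in context; no graph read needed
    # Step 2: fall back to the graph, loading only the one node.
    graph = json.loads(Path('memory/graph.json').read_text())
    node = graph['nodes'].get(key)
    if node is None:
        return None  # Step 3: ask the user ONCE, then store the answer
    return node.get(field) if field else node
```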
Don't load the entire graph. Filter by node type:
```python
import json

with open('memory/graph.json') as f:
    graph = json.load(f)

# Get only what you need
user = graph['nodes'].get('user:primary')
repos = {k: v for k, v in graph['nodes'].items() if v['type'] == 'repo'}
last_task = graph['session_log'][-1] if graph['session_log'] else None
```

Or read it with Bash and pipe through python3 to extract just the relevant field:

```bash
python3 -c "import json; g=json.load(open('memory/graph.json')); print(g['nodes'].get('user:primary',{}).get('github','not found'))"
```

Once you have context, match the task type to a pre-built workflow. Read the relevant workflow file; don't reinvent the process.
| Task pattern | Workflow file | Token cost |
|---|---|---|
| "update github bio/repo/README" | memory/workflows/github.md | ~300 (API calls) |
| "click/browse/fill form" | memory/workflows/browser.md | ~2,000 (batched) |
| "create doc/pdf/pptx/xlsx" | memory/workflows/documents.md | skill-specific |
| "remember this / save this" | → STORE operation (Step 4) | ~100 |
Read memory/tools-manifest.md before loading any tools. It tells you the EXACT ToolSearch query to load each tool category in ONE call.
Never do this:

```
ToolSearch("AskUserQuestion")     ← 1 call
ToolSearch("TodoWrite")           ← 2nd call
ToolSearch("tabs_context_mcp")    ← 3rd call
ToolSearch("javascript_tool")     ← 4th call
```

Always do this (from tools-manifest.md):

```
ToolSearch("chrome navigate screenshot find", 20)  ← loads ALL Chrome tools at once
```
3 screenshots maximum per task. No exceptions.
| Screenshot # | When | Purpose |
|---|---|---|
| 1 | After first navigation | Assess page state, plan actions |
| 2 | After all actions complete | Verify result |
| 3 | ONLY if screenshot 2 showed failure | Debug only |
Never screenshot to:
- Check if a click worked (infer it from the next action's result)
- Find coordinates (use `find()` + refs instead)
- Verify intermediate steps

Inference rule: If the action didn't error, assume it worked. Take the next action. Screenshot only at the end.
Use computer_batch for ALL sequences of 2+ browser actions:
```
BAD:  left_click(button) → wait → screenshot → left_click(submit)  [3 round trips]
GOOD: computer_batch([left_click(button), left_click(submit)])     [1 round trip]
```
When creating files on GitHub via browser, use this EXACT sequence every time:
1. navigate("https://github.com/USER/REPO/new/BRANCH?filename=PATH%2FFILE.md")
→ URL param pre-fills the filename — never type filename manually
2. write_clipboard(file_content)
3. left_click(editor_area at coordinate [784, 400])
4. key("ctrl+v")
→ If editor was empty and paste landed: proceed to step 5
→ If paste failed (editor still shows placeholder): repeat steps 3-4 once only
5. find("Commit changes button") → left_click(ref)
6. find("Commit changes submit button in dialog") → left_click(ref)
→ No screenshot needed — navigate to next task
Total: 0 screenshots for a file commit. You know it worked when GitHub redirects.
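Step 1's URL can be built with the standard library so that nested paths are percent-encoded rather than typed by hand. The user/repo/path values below are illustrative:

```python
from urllib.parse import quote

def new_file_url(user: str, repo: str, branch: str, path: str) -> str:
    """Build GitHub's 'new file' URL with the filename pre-filled.

    quote() with safe='' percent-encodes the '/' in nested paths
    (docs/notes.md → docs%2Fnotes.md), matching step 1 above.
    """
    return (f"https://github.com/{user}/{repo}/new/{branch}"
            f"?filename={quote(path, safe='')}")

# new_file_url("dixitvision", "omnimind", "main", "docs/notes.md")
# → "https://github.com/dixitvision/omnimind/new/main?filename=docs%2Fnotes.md"
```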
- Call 1: At task start with ALL todos set to pending
- Call 2: At task end marking everything completed
- NEVER call TodoWrite mid-task to update individual items
At the END of every task, update the graph. This is what makes the system compound — each task makes the next one cheaper.
- New entities discovered: new repos, new people, new URLs, new credentials paths
- Task completion: what was done, date, outcome
- Corrections: if user said "actually my username is X not Y" — update immediately
- Patterns that worked: if you found a faster way, add it to the relevant workflow file
```python
import json, datetime

with open('memory/graph.json', 'r') as f:
    graph = json.load(f)

# Add a node
graph['nodes']['repo:new-project'] = {
    "type": "repo",
    "name": "new-project",
    "owner": "dixitvision",
    "visibility": "public",
    "lang": "Python",
    "description": "...",
    "topics": []
}

# Log the task
graph['session_log'].append({
    "date": datetime.date.today().isoformat(),
    "task": "created repo new-project",
    "status": "done"
})

graph['updated'] = datetime.date.today().isoformat()

with open('memory/graph.json', 'w') as f:
    json.dump(graph, f, indent=2)
```

Then update CLAUDE.md / hot-cache.md if the new node is likely to be needed frequently.
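The hot-cache refresh might look like the sketch below. The "first N nodes" selection is a placeholder heuristic, not a prescribed policy; substitute whatever "likely needed frequently" means for your graph:

```python
import json, datetime

def refresh_hot_cache(graph_path='memory/graph.json',
                      cache_path='memory/hot-cache.md', top=30):
    """Rewrite hot-cache.md from the top `top` nodes of the graph."""
    with open(graph_path) as f:
        graph = json.load(f)
    nodes = list(graph['nodes'].items())[:top]  # naive: first N nodes
    lines = [f"# Hot cache (updated {datetime.date.today().isoformat()})", ""]
    for node_id, node in nodes:
        # One line per node: only string-valued fields, keeping it compact.
        fields = ", ".join(f"{k}={v}" for k, v in node.items()
                           if isinstance(v, str))
        lines.append(f"- **{node_id}**: {fields}")
    with open(cache_path, 'w') as f:
        f.write("\n".join(lines) + "\n")
```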
| Type | Key fields | Example id |
|---|---|---|
| `user` | name, email, github, linkedin | `user:primary` |
| `repo` | name, owner, lang, topics, description | `repo:omnimind` |
| `task` | date, summary, status | `task:2026-04-09-github` |
| `service` | name, url, auth_method | `service:github` |
| `workflow` | name, steps, token_cost | `workflow:github-api` |
| `tool` | name, load_query, category | `tool:chrome-mcp` |
| `project` | name, status, stack | `project:marketing-capstone` |
When no graph.json exists yet, create it by interviewing the user:
- Ask: "What's your name, and what are the main services/tools you use?"
- Ask: "What are the 2-3 projects you're currently working on?"
- Ask: "What should I remember from our last session?" (if applicable)
- Write graph.json with the answers as nodes
- Write CLAUDE.md hot-cache from the same data
Do NOT ask more than 5 questions. Build from what the user says, not an exhaustive intake form.
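The bootstrap write might look like this sketch. The shape of `answers` is illustrative; store whatever the user actually tells you, nothing more:

```python
import json, datetime, os

def bootstrap(answers: dict, path='memory/graph.json'):
    """Create an initial graph.json from interview answers."""
    graph = {
        "updated": datetime.date.today().isoformat(),
        "nodes": {
            "user:primary": {"type": "user", **answers.get("user", {})},
        },
        "session_log": [],
    }
    # One project node per project the user mentioned.
    for name, proj in answers.get("projects", {}).items():
        graph["nodes"][f"project:{name}"] = {"type": "project", **proj}
    if os.path.dirname(path):
        os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(graph, f, indent=2)
    return graph
```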
| Task type | Old cost (tokens) | With skill (tokens) | Savings |
|---|---|---|---|
| Recall user context | 5,000 | 150 | 97% |
| GitHub bio/repo update | 12,000 | 500 | 96% |
| Browser form fill | 15,000 | 3,000 | 80% |
| Create a document | 8,000 | 2,000 | 75% |
| Load tools | 3,000 | 300 | 90% |
Graph is wrong / outdated: If the user corrects you ("that repo was renamed"), update the node immediately and add a `corrected_at` timestamp. Never argue; just fix.
Graph doesn't have it: Say "I don't have [X] in memory yet. What is it?" Store the answer before continuing.
Multiple users / projects: Namespace node ids: `user:alice`, `user:bob`, `project:client-a`, `project:client-b`.
Large graphs: Keep hot-cache.md under 100 lines. Anything older than 30 days with no access gets moved to an archive section of graph.json.
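A sketch of the 30-day archive sweep, assuming each node carries a `last_access` ISO date. That field is not part of the schema above; you would set it on every read for this heuristic to work:

```python
import json, datetime

def archive_stale(graph_path='memory/graph.json', days=30):
    """Move nodes untouched for `days` into graph['archive']."""
    with open(graph_path) as f:
        graph = json.load(f)
    cutoff = (datetime.date.today() - datetime.timedelta(days=days)).isoformat()
    archive = graph.setdefault('archive', {})
    for node_id in list(graph['nodes']):
        # Nodes without a last_access date are treated as fresh and kept.
        last = graph['nodes'][node_id].get('last_access', cutoff)
        if last < cutoff:  # ISO dates compare correctly as strings
            archive[node_id] = graph['nodes'].pop(node_id)
    with open(graph_path, 'w') as f:
        json.dump(graph, f, indent=2)
```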