
Commit a557ff6

Merge pull request #18 from lerim-dev/docs/update-docs-for-openai

Docs: OpenAI alignment and remove bundled dashboard

Parents: c2c5999 + 3d32832

53 files changed: 563 additions & 8431 deletions


Dockerfile

Lines changed: 0 additions & 3 deletions
```diff
@@ -16,9 +16,6 @@ RUN pip install --no-cache-dir /build && rm -rf /build
 ENV FASTEMBED_CACHE_PATH=/opt/lerim/models
 RUN python -c "from fastembed import TextEmbedding; TextEmbedding('BAAI/bge-small-en-v1.5')"
 
-# Dashboard assets for the built-in web UI
-COPY dashboard/ /opt/lerim/dashboard/
-
 EXPOSE 8765
 
 HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
```
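The retained lines bake the embedding model into the image at build time so the container never downloads it at runtime. A minimal Python sketch of that cache-resolution pattern, assuming fastembed honors `FASTEMBED_CACHE_PATH` as the `ENV` + `RUN` pair implies (the per-user fallback path here is my assumption, not fastembed's documented default):

```python
from pathlib import Path

def fastembed_cache_dir(env: dict) -> Path:
    """Resolve the embedding-model cache dir the way the Dockerfile sets it up:
    FASTEMBED_CACHE_PATH wins; otherwise fall back to an assumed per-user cache."""
    override = env.get("FASTEMBED_CACHE_PATH")
    return Path(override) if override else Path.home() / ".cache" / "fastembed"

# With the Dockerfile's ENV in place, the model baked at build time is reused:
print(fastembed_cache_dir({"FASTEMBED_CACHE_PATH": "/opt/lerim/models"}))  # /opt/lerim/models
```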

README.md

Lines changed: 80 additions & 40 deletions
````diff
@@ -58,25 +58,70 @@ Lerim is file-first and primitive-first.
 - Project memory: `<repo>/.lerim/`
 - Global fallback: `~/.lerim/`
 - Search: file-based (no index required)
-- Orchestration: `openai-agents` lead agent + Codex filesystem sub-agent
-- Multi-provider: ResponsesProxy adapter enables Codex sub-agent across any LLM provider
-- Extraction/summarization: `dspy.ChainOfThought` with transcript windowing
+- Orchestration: OpenAI Agents SDK (`LerimOAIAgent`) with per-flow tools; non-OpenAI providers use `ResponsesProxy` + LiteLLM on the lead path
+- Extraction/summarization: DSPy pipelines with transcript windowing
 
 ### Sync path
 
-<p align="center">
-<img src="assets/sync.png" alt="Sync path" width="700">
-</p>
+Runtime shape: one **lead agent** (OpenAI Agents SDK) calls **tools**; `extract_pipeline` / `summarize_pipeline` run **DSPy** with your `[roles.extract]` and `[roles.summarize]` models.
+
+```mermaid
+flowchart TB
+    subgraph lead["Lead"]
+        OAI[LerimOAIAgent · OpenAI Agents SDK]
+    end
+    subgraph syncTools["Sync tools"]
+        ep[extract_pipeline]
+        sp[summarize_pipeline]
+        bd[batch_dedup_candidates]
+        wm[write_memory]
+        wr[write_report]
+        rf["read_file · list_files"]
+    end
+    subgraph dspy["DSPy LMs"]
+        ex[roles.extract]
+        su[roles.summarize]
+    end
+    OAI --> ep
+    OAI --> sp
+    OAI --> bd
+    OAI --> wm
+    OAI --> wr
+    OAI --> rf
+    ep -.-> ex
+    sp -.-> su
+```
 
-The sync path processes new agent sessions: reads transcript archives, extracts decision and learning candidates via DSPy, deduplicates against existing knowledge, and writes new entries to the project's knowledge store.
+Before that run, adapters **discover** sessions, **index** them, and **compact** traces — then the agent + tools above decide, write, and summarize.
 
 ### Maintain path
 
-<p align="center">
-<img src="assets/maintain.png" alt="Maintain path" width="700">
-</p>
+Same **lead agent** pattern; **maintain** tools only (no DSPy pipelines on this flow).
+
+```mermaid
+flowchart TB
+    subgraph lead_m["Lead"]
+        OAI_m[LerimOAIAgent · OpenAI Agents SDK]
+    end
+    subgraph maintainTools["Maintain tools"]
+        ms[memory_search]
+        ar[archive_memory]
+        em[edit_memory]
+        wh[write_hot_memory]
+        wm2[write_memory]
+        wr2[write_report]
+        rf2["read_file · list_files"]
+    end
+    OAI_m --> ms
+    OAI_m --> ar
+    OAI_m --> em
+    OAI_m --> wh
+    OAI_m --> wm2
+    OAI_m --> wr2
+    OAI_m --> rf2
+```
 
-The maintain path runs offline refinement over stored knowledge: merges duplicates, archives low-value entries, consolidates related learnings, and applies time-based decay to keep the context graph clean and relevant.
+The maintainer prompt guides merge, archive, consolidate, decay, and hot-memory — the agent chooses **how** to use the tools above.
 
 ## Quick start
 
````
````diff
@@ -97,14 +142,18 @@ lerim project add . # add current project (repeat for other repos)
 
 ### 3. Set API keys
 
-Set keys for the providers you configure (defaults: MiniMax primary, Z.AI fallback):
+Set keys for whatever you configure under `[roles.*]`. The shipped `default.toml`
+often uses **OpenCode Go** for roles — in that case set `OPENCODE_API_KEY` (see
+[model roles](https://docs.lerim.dev/configuration/model-roles/)). If you switch
+to MiniMax, Z.AI, OpenRouter, etc., set the matching env vars instead.
 
 ```bash
-export MINIMAX_API_KEY="..."   # if using MiniMax (default)
-export ZAI_API_KEY="..."       # if using Z.AI (default fallback)
+export OPENCODE_API_KEY="..."  # when using provider opencode_go (common default)
+# export MINIMAX_API_KEY="..." # if roles use minimax
+# export ZAI_API_KEY="..."     # if you add zai as fallback
 ```
 
-You only need keys for providers referenced in your `[roles.*]` config. See [model roles](https://docs.lerim.dev/configuration/model-roles/).
+You only need keys for providers referenced in your `[roles.*]` config.
 
 ### 4. Start Lerim
 
````
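For orientation, a `[roles.*]` table matching those env vars might look like the fragment below. This is an illustrative sketch, not the shipped `default.toml`: `provider`, `model`, and `fallback_models` are key names suggested elsewhere in this diff, and every value shown is an assumption.

```toml
# Illustrative only — consult src/lerim/config/default.toml for the real keys.
[roles.lead]
provider = "opencode_go"          # reads OPENCODE_API_KEY
model = "minimax-m2.5"
fallback_models = ["zai:glm-4.7"]  # assumed syntax, used on quota errors

[roles.extract]
provider = "opencode_go"
model = "minimax-m2.5"
```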

````diff
@@ -113,7 +162,8 @@ lerim up
 ```
 
 That's it. Lerim is now running as a Docker service — syncing sessions, extracting
-decisions and learnings, refining memories, and serving a dashboard at `http://localhost:8765`.
+decisions and learnings, refining memories, and exposing the JSON API at `http://localhost:8765`.
+Use **[Lerim Cloud](https://lerim.dev)** for the web UI (session analytics, memories, settings).
 
 ### 5. Teach your agent about Lerim
 
````

````diff
@@ -123,7 +173,9 @@ Install the Lerim skill so your agent knows how to query past context:
 lerim skill install
 ```
 
-This copies skill files (SKILL.md, cli-reference.md) into your agent's skill directory.
+This copies bundled skill files (`SKILL.md`, `cli-reference.md`) into
+`~/.agents/skills/lerim/` (shared by Cursor, Codex, OpenCode, …) and
+`~/.claude/skills/lerim/` (Claude Code).
 
 ### 6. Get the most out of Lerim
 
````
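The install step above fans the same two files out to both skill directories. A hedged Python sketch of that plan, using only the file names and target paths stated in the hunk (the copy logic itself is assumed):

```python
from pathlib import Path

SKILL_FILES = ["SKILL.md", "cli-reference.md"]
TARGETS = [
    Path.home() / ".agents" / "skills" / "lerim",  # Cursor, Codex, OpenCode, …
    Path.home() / ".claude" / "skills" / "lerim",  # Claude Code
]

def install_plan():
    """Pair each bundled skill file with its destination in every target dir."""
    return [(f, t / f) for t in TARGETS for f in SKILL_FILES]

for src, dst in install_plan():
    print(src, "->", dst)
```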

```diff
@@ -152,21 +204,9 @@ ollama serve # start Ollama (runs outside Docker)
 
 For Docker deployments, set `ollama = "http://host.docker.internal:11434"` in `[providers]` so the container can reach the host Ollama instance. See [model roles](https://docs.lerim.dev/configuration/model-roles/) for full configuration.
 
-## Dashboard
-
-The dashboard gives you a local UI for session analytics, knowledge browsing, and runtime status.
-
-<p align="center">
-<img src="assets/dashboard.png" alt="Lerim dashboard" width="1100">
-</p>
-
-### Tabs
+## Web UI (Lerim Cloud)
 
-- **Overview**: high-level metrics and charts (sessions, messages, tools, errors, tokens, activity by day/hour, model usage).
-- **Runs**: searchable session list with status and metadata; open any run in a full-screen chat viewer.
-- **Memories**: library + editor for memory records (filter, inspect, edit title/body/kind/confidence/tags).
-- **Pipeline**: sync/maintain status, extraction queue state, and latest extraction report.
-- **Settings**: dashboard-editable config for server, model roles, and tracing; saves to `~/.lerim/config.toml`.
+The browser UI (sessions, memories, pipeline, settings) lives in **[lerim-cloud](https://github.com/lerim-dev/lerim-cloud)** and is served from **[lerim.dev](https://lerim.dev)**. The `lerim` daemon still exposes a **JSON API** on `http://localhost:8765` for the CLI and for Cloud to talk to your local runtime when connected.
 
 ## CLI reference
 
```

```diff
@@ -213,16 +253,16 @@ API keys come from environment variables only. Set keys for the providers you use.
 
 | Variable | Provider | Default role |
 |----------|----------|-------------|
-| `MINIMAX_API_KEY` | MiniMax | Primary (all roles) |
-| `ZAI_API_KEY` | Z.AI | Fallback |
-| `OPENROUTER_API_KEY` | OpenRouter | Optional alternative |
-| `OPENAI_API_KEY` | OpenAI | Optional alternative |
-| `ANTHROPIC_API_KEY` | Anthropic | Optional alternative |
+| `OPENCODE_API_KEY` | OpenCode Go / Zen | Common default (see `default.toml`) |
+| `MINIMAX_API_KEY` | MiniMax | When `provider = "minimax"` |
+| `ZAI_API_KEY` | Z.AI | When using Z.AI |
+| `OPENROUTER_API_KEY` | OpenRouter | Optional |
+| `OPENAI_API_KEY` | OpenAI | Optional |
+| `ANTHROPIC_API_KEY` | Anthropic | Optional |
 
-Default model config (from `src/lerim/config/default.toml`):
+Default model config (see `src/lerim/config/default.toml` — values change with releases):
 
-- All roles: `provider=minimax`, `model=MiniMax-M2.5`
-- Fallback: `zai:glm-4.7` (lead/codex), `zai:glm-4.5-air` (extract/summarize)
+- Example defaults: `provider = "opencode_go"`, `model = "minimax-m2.5"` for lead, extract, and summarize; `fallback_models` on the lead role for quota errors.
 
 ### Development
 
```
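Since only providers referenced in `[roles.*]` need keys, a small preflight check can report what is missing before `lerim up`. A hedged sketch using only the provider/env-var pairs from the table above (the provider identifiers are assumptions based on this diff):

```python
import os

# Provider -> env var, per the README table above.
PROVIDER_KEYS = {
    "opencode_go": "OPENCODE_API_KEY",
    "minimax": "MINIMAX_API_KEY",
    "zai": "ZAI_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
}

def missing_keys(providers, env=os.environ):
    """Return env vars that must be set for the given [roles.*] providers."""
    return [PROVIDER_KEYS[p] for p in providers if PROVIDER_KEYS[p] not in env]

print(missing_keys(["opencode_go"], env={}))  # ['OPENCODE_API_KEY']
```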

````diff
@@ -238,7 +278,7 @@ tests/run_tests.sh all
 
 ### Tracing (OpenTelemetry)
 
-Lerim uses OpenTelemetry for agent observability, with traces routed through the OpenAI Agents SDK tracing layer.
+When enabled, tracing uses Logfire (OpenTelemetry): DSPy is instrumented, and optional httpx instrumentation captures LLM HTTP traffic. Built-in OpenAI Agents SDK cloud tracing is disabled in the runtime so spans are not exported to OpenAI by default.
 
 ```bash
 # Enable tracing
````

dashboard/README.md

Lines changed: 0 additions & 14 deletions
This file was deleted.

dashboard/assets/graph-explorer/graph-explorer.css

Lines changed: 0 additions & 1 deletion
This file was deleted.

dashboard/assets/graph-explorer/graph-explorer.js

Lines changed: 0 additions & 513 deletions
This file was deleted.

dashboard/assets/lerim.png

Binary file deleted (233 KB).

dashboard/frontend/README.md

Lines changed: 0 additions & 10 deletions
This file was deleted.

dashboard/frontend/graph-explorer/README.md

Lines changed: 0 additions & 16 deletions
This file was deleted.
