Runtime shape: one **lead agent** (OpenAI Agents SDK) calls **tools**; `extract_pipeline` / `summarize_pipeline` run **DSPy** with your `[roles.extract]` and `[roles.summarize]` models.
```mermaid
flowchart TB
  subgraph lead["Lead"]
    OAI[LerimOAIAgent · OpenAI Agents SDK]
  end
  subgraph syncTools["Sync tools"]
    ep[extract_pipeline]
    sp[summarize_pipeline]
    bd[batch_dedup_candidates]
    wm[write_memory]
    wr[write_report]
    rf["read_file · list_files"]
  end
  subgraph dspy["DSPy LMs"]
    ex[roles.extract]
    su[roles.summarize]
  end
  OAI --> ep
  OAI --> sp
  OAI --> bd
  OAI --> wm
  OAI --> wr
  OAI --> rf
  ep -.-> ex
  sp -.-> su
```
The sync path processes new agent sessions: it reads transcript archives, extracts decision and learning candidates via DSPy, deduplicates them against existing knowledge, and writes new entries to the project's knowledge store. Before that run, adapters **discover** sessions, **index** them, and **compact** traces; then the agent and tools above decide, write, and summarize.
Same **lead agent** pattern; **maintain** tools only (no DSPy pipelines on this flow).
```mermaid
flowchart TB
  subgraph lead_m["Lead"]
    OAI_m[LerimOAIAgent · OpenAI Agents SDK]
  end
  subgraph maintainTools["Maintain tools"]
    ms[memory_search]
    ar[archive_memory]
    em[edit_memory]
    wh[write_hot_memory]
    wm2[write_memory]
    wr2[write_report]
    rf2["read_file · list_files"]
  end
  OAI_m --> ms
  OAI_m --> ar
  OAI_m --> em
  OAI_m --> wh
  OAI_m --> wm2
  OAI_m --> wr2
  OAI_m --> rf2
```
The maintain path runs offline refinement over stored knowledge: it merges duplicates, archives low-value entries, consolidates related learnings, and applies time-based decay to keep the context graph clean and relevant. The maintainer prompt guides the merge, archive, consolidate, decay, and hot-memory steps; the agent chooses **how** to use the tools above.
## Quick start
### 3. Set API keys
Set keys for whatever you configure under `[roles.*]`. The shipped `default.toml` often uses **OpenCode Go** for roles; in that case set `OPENCODE_API_KEY` (see [model roles](https://docs.lerim.dev/configuration/model-roles/)). If you switch to MiniMax, Z.AI, OpenRouter, etc., set the matching env vars instead.
```bash
export OPENCODE_API_KEY="..."   # when using provider opencode_go (common default)
# export MINIMAX_API_KEY="..."  # if roles use minimax
# export ZAI_API_KEY="..."      # if you add zai as fallback
```
You only need keys for providers referenced in your `[roles.*]` config.
### 4. Start Lerim
```bash
lerim up
```
That's it. Lerim is now running as a Docker service: syncing sessions, extracting decisions and learnings, refining memories, and exposing the JSON API at `http://localhost:8765`. Use **[Lerim Cloud](https://lerim.dev)** for the web UI (session analytics, memories, settings).
### 5. Teach your agent about Lerim

Install the Lerim skill so your agent knows how to query past context:

```bash
lerim skill install
```
This copies bundled skill files (`SKILL.md`, `cli-reference.md`) into `~/.agents/skills/lerim/` (shared by Cursor, Codex, OpenCode, …).
For Docker deployments, set `ollama = "http://host.docker.internal:11434"` in `[providers]` so the container can reach the host Ollama instance. See [model roles](https://docs.lerim.dev/configuration/model-roles/) for full configuration.
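As a minimal sketch of that Docker setup (only the `ollama` key comes from the sentence above; the overall table shape is an assumption, so verify it against the model roles docs):

```toml
# Sketch: lets the lerim container reach an Ollama server on the Docker host.
[providers]
ollama = "http://host.docker.internal:11434"
```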
## Dashboard

The browser UI (sessions, memories, pipeline, settings) lives in **[lerim-cloud](https://github.com/lerim-dev/lerim-cloud)** and is served from **[lerim.dev](https://lerim.dev)**. The `lerim` daemon still exposes a **JSON API** on `http://localhost:8765` for the CLI and for Cloud to talk to your local runtime when connected.
## CLI reference
- Example defaults: `provider = "opencode_go"`, `model = "minimax-m2.5"` for lead, extract, and summarize; `fallback_models` on the lead role for quota errors.
### Development
### Tracing (OpenTelemetry)

When enabled, tracing uses Logfire (OpenTelemetry): DSPy is instrumented, and optional httpx instrumentation captures LLM HTTP traffic. The OpenAI Agents SDK's built-in cloud tracing is disabled in the runtime, so spans are not exported to OpenAI by default.