Add LLM-Assisted Context Prune Selection#318

Open
VooDisss wants to merge 6 commits into NeuralNomadsAI:dev from VooDisss:context-management-tool

Conversation


@VooDisss VooDisss commented Apr 11, 2026

User Workflow

  1. The user asks the LLM to use the context-pruning tool to select irrelevant messages.
  2. The LLM calls the tool with the badge range it wants to stage.
  3. The UI automatically selects those badges in the existing delete-selection flow.
  4. The user reviews the staged selection and either deletes everything or adjusts the selection manually in the UI before clicking the trashcan.
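As an illustrative sketch of step 2, the tool call might carry a compact range string that expands into 1-based badge indices. The exact syntax shown here (`"3-7,9"`) and the helper name are assumptions for illustration, not the PR's actual implementation:

```python
def parse_badge_ranges(spec: str) -> list[int]:
    """Expand a hypothetical range spec like "3-7,9" into sorted,
    de-duplicated 1-based badge indices."""
    indices: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (int(p) for p in part.split("-", 1))
            indices.update(range(lo, hi + 1))
        else:
            indices.add(int(part))
    return sorted(indices)
```

The important property is that the tool output is just a list of positions; everything destructive still happens in the existing UI flow.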

Screenshot


Summary

This PR adds a minimal context-pruning flow where the LLM can stage badge selections for removal, and the user then reviews or adjusts that selection before deleting anything.

Why

The previous flow required users to enter selection mode and manually pick every badge themselves, which is slow and tedious in longer sessions.

What Changed

  • Added a tool path for selecting context badges by range.
  • Reused the existing delete-selection UI instead of adding a second delete flow.
  • Kept deletion as a manual user confirmation step.

Help get this merged, guys


Observe the sacred pile of review stones. They are not for throwing at @shantur, no matter how mysteriously delayed the merge may seem. They are for channeling your energy into a respectful but persistent request that he review and merge the context management tooling.

Commit Messages

Add the smallest UI-side primitive needed for plugin-driven context pruning without changing the existing deletion behavior yet.

This introduces a session-scoped staging store for pending prune-selection commands and wires the active message section to consume those commands. The bridge is intentionally a no-op at this checkpoint: it proves that external commands can reach the conversation view safely before we start mapping badge indices onto the current delete-selection state.

The purpose of this commit is to create a low-risk checkpoint for testing the plumbing in isolation. Normal message rendering and manual deletion behavior should remain unchanged, which makes it easier to detect regressions before the next slice adds real badge preselection logic.

Connect the context prune staging bridge to the existing delete-selection UI by resolving externally supplied badge indices against the current deletable timeline order.

This commit keeps the addressing model intentionally simple: indices are 1-based and map onto the current deletable badge list from oldest to newest. Once staged, the existing selection state, toolbar visibility, and manual adjustment behavior are reused instead of introducing a parallel deletion flow.

The goal of this checkpoint is to prove that external prune commands can preselect the same badges a user would otherwise select manually, while preserving the current trashcan confirmation path and compaction-boundary rules.
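The addressing model above can be sketched as resolving 1-based positions against the current oldest-to-newest deletable badge list. Badge ids and the helper name are invented for illustration:

```python
def resolve_staged_indices(indices: list[int],
                           deletable_badges: list[str]) -> list[str]:
    """Map 1-based positions onto the deletable badge list, oldest
    first, silently dropping positions that fall out of range."""
    resolved = []
    for i in indices:
        if 1 <= i <= len(deletable_badges):
            resolved.append(deletable_badges[i - 1])
    return resolved
```

Because the result feeds the existing selection state, toolbar visibility and manual adjustment come along for free.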

Introduce the first end-to-end command path that lets the chat model stage context-prune selections by badge index without performing deletion itself.

This adds a new injected plugin tool that accepts the LLM-provided range string, parses it locally, and sends the resolved 1-based badge indices to CodeNomad. The server forwards that command through the existing workspace instance event stream, and the UI consumes the resulting event to stage the selection in the active conversation view.

The tool remains intentionally thin: it does not infer what should be removed, and it does not delete anything. Its job is only to validate the range syntax, preserve the user's existing review step, and reuse the current trashcan confirmation flow instead of introducing a second mutation path.
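One way to picture the thin tool side of this path: validate the range syntax locally, then emit a command payload for the event stream. The event name, payload shape, and range syntax here are assumptions, not CodeNomad's actual API:

```python
import json

def build_prune_command(range_spec: str) -> str:
    """Parse an assumed range syntax like "2-3,5" and emit the
    command payload that would travel to the UI. The tool never
    deletes; deletion stays behind the trashcan click."""
    indices: list[int] = []
    for part in range_spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = (int(p) for p in part.split("-", 1))
            if lo > hi:
                raise ValueError(f"empty range: {part}")
            indices.extend(range(lo, hi + 1))
        else:
            indices.append(int(part))
    return json.dumps({"type": "stage-prune-selection", "indices": indices})
```

Keeping parsing in the tool means the server and UI only ever see resolved integer indices, not free-form model output.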

Tighten the context-prune selection path so invalid or noisy inputs fail earlier and staged UI state behaves more predictably under edge cases.

This normalizes staged indices before they reach the conversation view, rejects empty or excessively large selections in the tool parser, and avoids clearing an existing selection when a stale command no longer maps to any current deletable badges. Those checks keep the side-effect tool thin while making the staging path safer to use from the LLM and less surprising for the user.

This commit is intentionally limited to validation and guardrails. It does not change the core selection model, introduce auto-delete behavior, or expand the feature beyond the existing review-and-trashcan workflow.
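The guardrails can be sketched as a normalization step in front of the staging path. The cap of 200 and the function name are invented placeholders, not values from the PR:

```python
MAX_STAGED = 200  # placeholder cap, not the PR's actual limit

def normalize_staged(indices: list[int], deletable_count: int,
                     current_selection: list[int]) -> list[int]:
    """Reject empty or oversized selections early; if a stale command
    no longer maps to any deletable badge, keep the user's existing
    selection instead of destructively clearing it."""
    if not indices or len(indices) > MAX_STAGED:
        raise ValueError("empty or excessively large selection")
    valid = sorted({i for i in indices if 1 <= i <= deletable_count})
    if not valid:
        return current_selection  # stale command: no destructive clear
    return valid
```

The stale-command branch is the subtle one: a no-op result is safer than wiping a selection the user may already be reviewing.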

Refine the context-prune tool metadata so the model sees the intended calling pattern more clearly.

The description now makes it explicit that one call can include multiple individual indices and multiple ranges in a single final selection, and that repeated calls replace rather than merge with the currently staged selection. This is prompt-surface polish only and does not change the selection transport or deletion behavior.

Refine the context-prune tool metadata so the model can map the visible conversation it sees onto the numeric range the tool expects.

The updated wording now explains that the model typically sees chronological message ids like m0007, m0008, m0009, and should convert that visible pruneable order into 1-based positions for the tool call. It also includes a concrete example that turns a visible sequence into the expected range syntax, while preserving the existing rule that repeated calls replace rather than merge with the staged selection.

This is description-only guidance for the model surface. It does not change runtime behavior, transport, or deletion semantics.
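The id-to-position guidance above amounts to a mapping like the following. The message-id pattern (`m0007`) comes from the description; the helper itself and its range output format are hypothetical:

```python
def ids_to_range_spec(visible_ids: list[str],
                      prune_ids: list[str]) -> str:
    """Convert visible chronological message ids into the 1-based
    positions the tool expects, collapsing consecutive runs into
    ranges like "1-2,4"."""
    positions = sorted(visible_ids.index(m) + 1 for m in prune_ids)
    parts: list[str] = []
    start = prev = positions[0]
    for p in positions[1:]:
        if p == prev + 1:
            prev = p
            continue
        parts.append(f"{start}-{prev}" if start != prev else str(start))
        start = prev = p
    parts.append(f"{start}-{prev}" if start != prev else str(start))
    return ",".join(parts)
```

This is the conversion the description asks the model to do mentally before each call.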