Add LLM-Assisted Context Prune Selection #318
Open
VooDisss wants to merge 6 commits into NeuralNomadsAI:dev from
Conversation
Add the smallest UI-side primitive needed for plugin-driven context pruning without changing the existing deletion behavior yet. This introduces a session-scoped staging store for pending prune-selection commands and wires the active message section to consume those commands. The bridge is intentionally a no-op at this checkpoint: it proves that external commands can reach the conversation view safely before we start mapping badge indices onto the current delete-selection state. The purpose of this commit is to create a low-risk checkpoint for testing the plumbing in isolation. Normal message rendering and manual deletion behavior should remain unchanged, which makes it easier to detect regressions before the next slice adds real badge preselection logic.
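The session-scoped staging store described above can be sketched as follows. This is a minimal illustration with hypothetical names (`stagePruneCommand`, `consumePruneCommand`, `PruneCommand` are not taken from the PR); it shows the intended shape: at most one pending command per session, consumed exactly once by the conversation view, with the consumer still a no-op at this checkpoint.

```typescript
// Hypothetical sketch of a session-scoped staging store for pending
// prune-selection commands. Names and shapes are illustrative only.
type PruneCommand = { sessionId: string; indices: number[] };

const pending = new Map<string, PruneCommand>();

// Called by the bridge when an external command arrives.
// A later command for the same session replaces the earlier one.
function stagePruneCommand(cmd: PruneCommand): void {
  pending.set(cmd.sessionId, cmd);
}

// Called by the active message section; returns the command once and clears it,
// so a re-render does not re-apply a stale selection.
function consumePruneCommand(sessionId: string): PruneCommand | undefined {
  const cmd = pending.get(sessionId);
  if (cmd) pending.delete(sessionId);
  return cmd;
}
```

The consume-once shape is what makes the checkpoint low-risk: if the consumer does nothing with the command, the store simply drains and normal rendering is untouched.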
Connect the context prune staging bridge to the existing delete-selection UI by resolving externally supplied badge indices against the current deletable timeline order. This commit keeps the addressing model intentionally simple: indices are 1-based and map onto the current deletable badge list from oldest to newest. Once staged, the existing selection state, toolbar visibility, and manual adjustment behavior are reused instead of introducing a parallel deletion flow. The goal of this checkpoint is to prove that external prune commands can preselect the same badges a user would otherwise select manually, while preserving the current trashcan confirmation path and compaction-boundary rules.
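The addressing model above can be illustrated with a short sketch (function name and the drop-on-out-of-range policy are assumptions, not taken from the PR): externally supplied indices are 1-based positions into the current deletable badge list, ordered oldest to newest.

```typescript
// Hypothetical sketch: resolve 1-based external indices against the current
// deletable timeline (oldest → newest). Out-of-range or non-integer indices
// are dropped rather than raising, since the timeline may have changed
// between command issue and staging.
function resolveBadgeSelection<T>(indices: number[], deletableBadges: T[]): T[] {
  const resolved: T[] = [];
  for (const i of indices) {
    if (Number.isInteger(i) && i >= 1 && i <= deletableBadges.length) {
      resolved.push(deletableBadges[i - 1]);
    }
  }
  return resolved;
}
```

Whatever this resolves to is then handed to the existing selection state, so toolbar visibility and manual adjustment behave exactly as if the user had clicked the badges.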
Introduce the first end-to-end command path that lets the chat model stage context-prune selections by badge index without performing deletion itself. This adds a new injected plugin tool that accepts the LLM-provided range string, parses it locally, and sends the resolved 1-based badge indices to CodeNomad. The server forwards that command through the existing workspace instance event stream, and the UI consumes the resulting event to stage the selection in the active conversation view. The tool remains intentionally thin: it does not infer what should be removed, and it does not delete anything. Its job is only to validate the range syntax, preserve the user's existing review step, and reuse the current trashcan confirmation flow instead of introducing a second mutation path.
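A plausible shape for the local range parsing is sketched below. The exact syntax the tool accepts is not spelled out in the PR, so this assumes a conventional comma-and-dash form like `"1-3,5,8-9"`; the point is that the tool only validates and expands syntax, deciding nothing about what should be removed.

```typescript
// Hypothetical sketch of the tool-side range parser. Assumes a syntax of
// comma-separated 1-based indices and dash ranges, e.g. "1-3,5,8-9".
// Malformed tokens fail hard so bad model output is rejected before it
// reaches the server or the UI.
function parseRangeString(input: string): number[] {
  const out = new Set<number>();
  for (const token of input.split(",").map((t) => t.trim())) {
    const m = /^(\d+)(?:-(\d+))?$/.exec(token);
    if (!m) throw new Error(`invalid range token: "${token}"`);
    const start = parseInt(m[1], 10);
    const end = m[2] ? parseInt(m[2], 10) : start;
    if (start < 1 || end < start) throw new Error(`invalid range bounds: "${token}"`);
    for (let i = start; i <= end; i++) out.add(i);
  }
  return [...out].sort((a, b) => a - b);
}
```

Parsing locally in the tool keeps the server forwarding path dumb: it only ever carries already-resolved 1-based indices over the existing workspace instance event stream.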
Tighten the context-prune selection path so invalid or noisy inputs fail earlier and staged UI state behaves more predictably under edge cases. This normalizes staged indices before they reach the conversation view, rejects empty or excessively large selections in the tool parser, and avoids clearing an existing selection when a stale command no longer maps to any current deletable badges. Those checks keep the side-effect tool thin while making the staging path safer to use from the LLM and less surprising for the user. This commit is intentionally limited to validation and guardrails. It does not change the core selection model, introduce auto-delete behavior, or expand the feature beyond the existing review-and-trashcan workflow.
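The guardrails described above can be sketched as a pure function over the selection state. The cap value and all names here are assumptions for illustration; the behavior shown matches the commit's description: normalize before staging, reject empty or oversized input, and never clear an existing selection when a stale command maps to no current badges.

```typescript
// Hypothetical guardrail sketch. MAX_SELECTION is an assumed cap, not a
// value taken from the PR.
const MAX_SELECTION = 200;

// Deduplicate, drop out-of-range or non-integer indices, and sort ascending.
function normalizeIndices(indices: number[], deletableCount: number): number[] {
  return [...new Set(indices)]
    .filter((i) => Number.isInteger(i) && i >= 1 && i <= deletableCount)
    .sort((a, b) => a - b);
}

// Apply a staged command against the current selection. Empty or oversized
// commands are rejected outright; a stale command that no longer maps to any
// deletable badge leaves the existing selection untouched.
function applyStagedSelection(
  current: number[],
  staged: number[],
  deletableCount: number,
): number[] {
  if (staged.length === 0 || staged.length > MAX_SELECTION) return current;
  const normalized = normalizeIndices(staged, deletableCount);
  return normalized.length > 0 ? normalized : current;
}
```

Keeping this as a pure function over `(current, staged, deletableCount)` also makes the edge cases trivially testable without rendering the conversation view.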
Refine the context-prune tool metadata so the model sees the intended calling pattern more clearly. The description now makes it explicit that one call can include multiple individual indices and multiple ranges in a single final selection, and that repeated calls replace rather than merge with the currently staged selection. This is prompt-surface polish only and does not change the selection transport or deletion behavior.
Refine the context-prune tool metadata so the model can map the visible conversation it sees onto the numeric range the tool expects. The updated wording now explains that the model typically sees chronological message ids like m0007, m0008, m0009, and should convert that visible pruneable order into 1-based positions for the tool call. It also includes a concrete example that turns a visible sequence into the expected range syntax, while preserving the existing rule that repeated calls replace rather than merge with the staged selection. This is description-only guidance for the model surface. It does not change runtime behavior, transport, or deletion semantics.
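The mapping the updated description asks the model to perform can be made concrete with a small sketch (the helper and its error policy are hypothetical; the id format `m0007` and the 1-based positional convention come from the commit message above): take the chronological pruneable ids the model sees, find the 1-based positions of the endpoints, and emit the tool's range syntax.

```typescript
// Hypothetical helper showing the id → position → range conversion the
// tool description asks the model to do. Given visible pruneable ids in
// chronological order, map an inclusive id span to 1-based range syntax.
function idsToRange(visibleIds: string[], from: string, to: string): string {
  const start = visibleIds.indexOf(from) + 1; // 1-based position
  const end = visibleIds.indexOf(to) + 1;
  if (start === 0 || end === 0 || end < start) {
    throw new Error("ids not found in chronological order");
  }
  return start === end ? `${start}` : `${start}-${end}`;
}
```

So for a visible sequence `m0007, m0008, m0009`, selecting `m0007` through `m0008` yields `"1-2"`. Since repeated tool calls replace rather than merge, the model must emit its entire final selection in one call.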
User Workflow
Screenshot
Summary
This PR adds a minimal context-pruning flow where the LLM can stage badge selections for removal, and the user then reviews or adjusts that selection before deleting anything.
Why
The previous flow required users to enter selection mode and manually pick every badge themselves, which is slow and tedious in longer sessions.
What Changed
Help get this merged, guys
Observe the sacred pile of review stones. They are not for throwing at @shantur, no matter how mysteriously delayed the merge may seem. They are for channeling your energy into a respectful but persistent request that he review and merge the context management tooling.