# Implement shared compact split and unified tool-call diff layout

Fixes #268

## Summary

This PR makes tool-call diffs more compact in both `Unified` and `Split` views by reducing wasted horizontal space in line-number gutters and content indentation.

## What changed

- Introduced a shared compact-diff framework for tool-call diffs.
- Kept mobile-specific policy limited to:
  - forcing unified mode below the breakpoint
  - enabling wrap only in mobile unified mode
- Added mode-specific compact applicators in the diff viewer:
  - unified applicator
  - split applicator
- Reduced gutter-width waste by measuring the rendered line-number text and tightening the column width around it.
- Removed unnecessary right-side content padding.
- Aligned `+` / `-` markers closer to the left edge in both views.
- Simplified cleanup after gatekeeper review by removing extra plumbing and residue.

## Screenshots

### Before

<img width="581" height="341" alt="image" src="https://github.com/user-attachments/assets/ec47b256-749a-4afc-8879-aaf33f0b46b6" />

### After

<img width="470" height="586" alt="image" src="https://github.com/user-attachments/assets/7258a5a2-47c4-408d-84bc-1b497761c7ad" />

## Architectural approach

This change intentionally uses:

- shared policy in `packages/ui/src/components/tool-call/diff-render.tsx`
- shared helper/measurement logic in `packages/ui/src/components/diff-viewer.tsx`
- mode-specific applicators where the unified and split DOM differ
- CSS for shared visual spacing and alignment cleanup

The goal was to keep the implementation architecturally clean and avoid building separate, duplicated compact-diff features for:

- mobile vs. desktop
- unified vs. split

Instead, the feature shares one compact-diff concept and diverges only where the upstream diff DOM requires separate handling.
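The gutter-tightening idea can be sketched as a pure helper (a minimal sketch, with assumed names — the real code measures rendered text in the DOM, e.g. via canvas `measureText`; here the per-digit width is passed in as a number):

```typescript
// Hypothetical sketch: size the line-number gutter to the widest line number
// actually rendered, instead of using a fixed generous column width.
// digitWidthPx stands in for a measured glyph width from the rendered font.

function compactGutterWidth(
  maxLineNumber: number,
  digitWidthPx: number,
  horizontalPaddingPx: number,
): number {
  // Digits in the largest visible line number (at least one digit).
  const digits = Math.max(1, String(maxLineNumber).length);
  // Tight column: just enough room for the digits plus small padding.
  return digits * digitWidthPx + 2 * horizontalPaddingPx;
}
```

A 999-line file then gets a 3-digit gutter while a 7569-line file gets a 4-digit one, instead of both paying for a worst-case width.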
## Files changed

- `packages/ui/src/components/tool-call/diff-render.tsx`
- `packages/ui/src/components/diff-viewer.tsx`
- `packages/ui/src/styles/messaging/tool-call.css`
- `packages/ui/src/types/message.ts`

## Validation

Manual validation was performed in the running UI. Verified manually:

- compact unified gutters on mobile
- compact unified gutters on desktop
- compact split gutters on desktop
- tighter operator alignment in both modes

Also verified:

- `npm run typecheck` passes

## Notes

- This PR addresses the compact diff layout problem described in the related issue.
- Diff-specific CSS still lives in `tool-call.css`; extracting it into a smaller dedicated stylesheet is possible later but not required for this change.

Co-authored-by: Shantur Rathore <i@shantur.com>
Add log level configuration support via config.yaml and UI settings.

Co-authored-by: Shantur Rathore <i@shantur.com>
## Summary

- Add a remote CodeNomad server launcher flow to the home screen, including saved server profiles, probe-before-connect behavior, and desktop bridge APIs for opening remote windows.
- Add Electron support for remote server windows with per-window origin handling and a self-signed certificate bypass, plus Tauri support for remote windows with clearer self-signed guidance.
- Fix Tauri dev-server resolution and window shutdown behavior so dev mode prefers the source server entry and the app exits only after the last window closes.
## Summary

- Add SideCar support across the server and UI, including proxied tabs, picker/settings flows, and websocket-aware proxying.
- Unify top-level tab handling so workspace instances and SideCars share the same tab model and navigation flows.
- Limit SideCars to port-based services only, removing server-managed process control from the final API and UI.

Co-authored-by: Shantur <shantur@Mac.home>
Co-authored-by: Shantur <shantur@Shanturs-MacBook-Pro-M5.local>
## Summary

- Launch the Electron-managed server with `--unrestricted-root` by default.
- Launch the Tauri-managed server with `--unrestricted-root` by default.
- Stop relying on the server's `process.cwd()` fallback for desktop filesystem browsing.

-- Yours, [CodeNomadBot](https://github.com/NeuralNomadsAI/CodeNomad)

Co-authored-by: Shantur Rathore <i@shantur.com>
…SPEED IMPROVEMENT) (#274)

## Summary

- Wraps store-proxied array iteration in `untrack()` in two `createEffect` blocks and one `createMemo` in `message-section.tsx` to prevent SolidJS from creating O(n) per-element reactive subscriptions on every run.
- Replaces `ids.includes()` with `Set.has()` for O(1) cleanup lookups in the part-count tracking effect.

## Problem

Two `createEffect` blocks in `message-section.tsx` iterate the `messageIds()` store proxy array inside a tracked reactive context. This causes SolidJS to create **O(n) per-element subscriptions** on every run. When any element changes, all n subscriptions fire, re-running the entire effect, resulting in **O(n²) total work**.

Additionally, the cleanup loop in the part-count tracking effect uses `ids.includes(trackedId)`, which is O(n) per tracked ID, compounding to O(n²).

For long-running sessions with a large message history (e.g. 7569 messages), this caused **~4.8 seconds of input latency** when sending a new prompt.

## Fix

1. **Timeline sync effect (~line 738):** Wrap the entire body in `untrack()` and replace `ids.slice()` with `[...ids]` to snapshot without proxy tracking.
2. **Part-count tracking effect (~line 891):** Wrap the iteration in `untrack()` and replace `ids.includes()` with `new Set(ids).has()` for O(1) lookups.
3. **`lastAssistantIndex` memo:** Read message records via `untrack()` to avoid O(n) subscriptions on part-level updates.

## Result

On a 7569-message session, prompt input latency dropped from **~4.8s to ~42ms** (a 114x improvement).
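The `Set`-based cleanup change can be shown in isolation (an illustrative sketch with hypothetical names, not the actual `message-section.tsx` code):

```typescript
// Illustration of the O(n²) → O(n) cleanup fix: instead of calling
// ids.includes(trackedId) for every tracked ID (an O(n) scan each time),
// build a Set once and use O(1) membership checks.

function staleTrackedIds(trackedIds: string[], currentIds: string[]): string[] {
  // One O(n) pass to build the lookup set…
  const live = new Set(currentIds);
  // …then O(1) per tracked ID instead of an O(n) includes() scan.
  return trackedIds.filter((id) => !live.has(id));
}
```

The total cleanup cost drops from O(n·m) to O(n + m) for n current IDs and m tracked IDs.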
## Summary

- Preserve the current prompt text when dismissing the `@` mention/file picker with `Esc`.
- Let `Enter` fall back to normal prompt submission when the mention picker is open but there is no selectable result.

## Verification

- Source inspection of the prompt input and picker flow.
- A local `npm run typecheck --workspace @codenomad/ui` is blocked in this environment because workspace dependencies are not installed.

-- Yours, [CodeNomadBot](https://github.com/NeuralNomadsAI/CodeNomad)

Co-authored-by: Shantur Rathore <i@shantur.com>
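The two key behaviors above can be summarized as a small decision function (a sketch with hypothetical names — the real handler lives in the prompt input component):

```typescript
// Hypothetical decision table for the Esc/Enter behaviors described above.

type PickerState = { open: boolean; resultCount: number };
type KeyAction = "close-picker-keep-text" | "submit-prompt" | "pick-result" | "none";

function promptKeyAction(key: string, picker: PickerState): KeyAction {
  if (key === "Escape") {
    // Esc dismisses the picker but must not clear the typed prompt text.
    return picker.open ? "close-picker-keep-text" : "none";
  }
  if (key === "Enter") {
    // Enter picks a result only when one is selectable; otherwise it falls
    // back to normal prompt submission even while the picker is open.
    if (picker.open && picker.resultCount > 0) return "pick-result";
    return "submit-prompt";
  }
  return "none";
}
```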
Send synthetic session notifications when background processes finish, fail, stop, or terminate so the originating agent can react without polling. Hide synthetic text-only prompts from the UI stream so operational notifications stay out of the visible transcript.
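One way to sketch hiding synthetic text-only prompts from the visible transcript (the message shape below is an assumption for illustration, not the actual schema):

```typescript
// Assumed minimal message shape; the real schema differs.
type Part = { type: "text" | "tool-call"; text?: string };
type Message = { role: "user" | "assistant"; synthetic?: boolean; parts: Part[] };

// Keep a message in the visible transcript unless it is a synthetic,
// text-only prompt (an operational notification, not user content).
function visibleMessages(messages: Message[]): Message[] {
  return messages.filter((m) => {
    const textOnly = m.parts.every((p) => p.type === "text");
    return !(m.synthetic && textOnly);
  });
}
```

Synthetic messages that carry non-text parts still render, so only the operational notifications stay out of the transcript.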
- Add build scripts for platform-specific builds with zip bundles
- Update CI workflow to use the `--bundles` flag for explicit target selection
  - macOS: use `app,zip` (removed `dmg`)
  - Windows: use `nsis,zip`
  - Linux: use `appimage,deb,rpm`
…nt window

- Add a 50ms debounce to zoom operations to prevent a WebView2 IPC bottleneck
- Enable transparent window mode for better Windows resize/zoom performance
- Reduce the zoom step from 0.2 to 0.1 for finer control
Session diffs now use a compact patch field instead of storing full before/after content. Added parsePatchToBeforeAfter utility to extract before/after from unified diff format, and updated MonacoDiffViewer to accept patch prop as alternative to before/after strings.
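A hedged sketch of what a `parsePatchToBeforeAfter`-style utility could do — rebuild "before" and "after" text from unified diff line prefixes. This is an illustration, not the actual utility; it skips file headers and hunk ranges and assumes a single well-formed hunk body:

```typescript
// Walk unified diff lines: '-' lines belong to the old text, '+' lines to the
// new text, and ' ' context lines to both. Headers ('---', '+++', '@@') carry
// no content and are skipped.

function patchToBeforeAfter(patch: string): { before: string; after: string } {
  const before: string[] = [];
  const after: string[] = [];
  for (const line of patch.split("\n")) {
    if (line.startsWith("---") || line.startsWith("+++") || line.startsWith("@@")) {
      continue; // file headers and hunk markers
    }
    if (line.startsWith("-")) {
      before.push(line.slice(1)); // removed: existed only in the old text
    } else if (line.startsWith("+")) {
      after.push(line.slice(1)); // added: exists only in the new text
    } else if (line.startsWith(" ")) {
      before.push(line.slice(1)); // context: present on both sides
      after.push(line.slice(1));
    }
  }
  return { before: before.join("\n"), after: after.join("\n") };
}
```

Storing only the patch and reconstructing both sides on demand is what makes the session-diff payload compact.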
… viewer" This reverts commit 2e9ee2c.
This reverts commit 197898c.
…s viewer" This reverts commit af64291.
- Track `messageInfoVersion` in the cache signature to rebuild when tokens arrive via SSE
- Read tokens from the step-finish part directly (embedded in SSE events)
- Simplify available tokens to show the full context window when there is no explicit input limit
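The cache-signature idea above can be sketched as a pure function (field names here are assumptions for illustration, not the real cache key):

```typescript
// Illustrative cache signature that folds in messageInfoVersion, so cached
// token info is rebuilt whenever new token counts arrive over SSE, even if
// the message count itself has not changed.

type TokenCacheInputs = {
  sessionId: string;
  messageCount: number;
  messageInfoVersion: number;
};

function tokenCacheSignature(i: TokenCacheInputs): string {
  // Any bump to messageInfoVersion yields a new signature, invalidating
  // the previously cached computation.
  return `${i.sessionId}:${i.messageCount}:${i.messageInfoVersion}`;
}
```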
Reverted debouncing logic and transparent window mode that were causing issues. Kept the zoom step reduction from 0.2 to 0.1 for finer control.
# PR Draft: Fix sticky auto-scroll during streaming chat responses

Fixes #308

## Summary

This change makes chat auto-scroll easier to escape while assistant output is still streaming. The goal is to stop the viewport from repeatedly pulling the user back toward the bottom once they begin scrolling upward to inspect earlier content.

## Why

Before this change, streaming updates could keep reasserting bottom-follow behavior during active rendering. That made auto-scroll feel sticky and forced users to scroll repeatedly or forcefully just to review earlier parts of an in-progress response.

The intended behavior is simpler: once the user scrolls upward to leave follow mode, the UI should respect that decision instead of fighting it during subsequent stream updates.

## What Changed

1. Removed render-time force-bottom behavior from the shared follow-scroll helper path.
2. Updated streamed reasoning output to restore scroll without forcing the viewport back to the bottom.
3. Updated streamed tool-call output to use the same non-forcing restore behavior.

## Scope Boundaries

Included:

- Sticky auto-scroll behavior during streamed chat output
- Shared follow-scroll behavior used by streamed nested panes
- Reasoning and tool-call streaming paths that reused the same forced follow behavior

Not included:

- A full rewrite of the virtualized message list follow model
- Broader scroll UX changes outside the streaming follow/escape behavior
- Unrelated UI or plugin configuration changes in the worktree

## Technical Notes

The core problem was not basic auto-scroll itself, but a render-time path that could keep forcing bottom-follow behavior while new streamed content was arriving. That meant a user's attempt to scroll upward could be overridden repeatedly by subsequent stream updates, which is why auto-scroll felt sticky. The fix removes that override and keeps render-time restoration dependent on the current follow state instead.
## Files Changed

- `packages/ui/src/lib/follow-scroll.tsx`
- `packages/ui/src/components/message-block.tsx`
- `packages/ui/src/components/tool-call.tsx`

## Verification

Performed:

1. Reproduced the sticky auto-scroll behavior with a long multi-line streaming response.
2. Verified that scrolling upward during streaming now disengages follow more naturally in the affected streamed panes.
3. Ran `npm run typecheck --workspace @codenomad/ui`.
4. Ran `npm run build --workspace @codenomad/ui`.

Build notes:

- The UI typecheck passes.
- The UI build succeeds.
- The build still emits existing third-party and chunk-size warnings unrelated to this change.

## Risks and Follow-up

1. The broader scroll-follow model is still more heuristic-heavy than ideal, so there may be follow-up work to simplify it further.
2. This PR intentionally applies the smallest targeted fix to the known snap-back path instead of rewriting the full chat scroll system.

Co-authored-by: Shantur Rathore <i@shantur.com>
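The "restore depends on the current follow state" rule can be sketched as two pure functions (names are illustrative, not the actual `follow-scroll.tsx` helpers):

```typescript
// Sketch: snap to the bottom on a streamed render only if the user was
// already following; otherwise leave their scroll position untouched.

type ScrollMetrics = { scrollTop: number; scrollHeight: number; clientHeight: number };

// The user counts as "following" only when the viewport is near the bottom.
function isFollowing(m: ScrollMetrics, thresholdPx = 32): boolean {
  return m.scrollHeight - (m.scrollTop + m.clientHeight) <= thresholdPx;
}

// Non-forcing restore: return the bottom position while following, and the
// user's current position once they have scrolled up to escape follow mode.
function restoreTarget(m: ScrollMetrics): number {
  return isFollowing(m) ? m.scrollHeight - m.clientHeight : m.scrollTop;
}
```

The removed bug was equivalent to always returning the bottom position here, which is why stream updates kept overriding upward scrolls.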
… SPEED IMPROVEMENT ) (#291)

## Summary

- Virtualize MessageTimeline so large session histories stop rendering the full timeline sidebar at once.
- Keep the existing full render path in selection mode so xray/selection behavior stays intact.
- Route active-segment scrolling through the virtualizer so timeline navigation still follows the selected message.

## Benefit

- The prompt field was very laggy in sessions with a big history, and the timeline had many bugs; both are fixed.
- A session with a big history now loads as fast as a new session.
## Summary

- Follow-up to #240 to make Windows desktop shutdown reliable this time, even when the tracked CLI wrapper PID exits before its descendants.
- Attach the spawned CLI process to a Windows Job Object with `KILL_ON_JOB_CLOSE`, so the desktop app owns the whole subtree instead of relying only on `taskkill /PID <wrapper> /T`.
- Keep the current graceful-then-force shutdown path, but add a robust OS-level fallback that reaps orphaned workspace processes when the wrapper is already gone.

## Root Cause

The previous Windows shutdown logic still depended on the PID tracked by Tauri. In practice that PID can be a short-lived Node wrapper. Once that wrapper exits, `taskkill` can report success or PID-not-found while descendants remain alive, and the desktop app no longer has a reliable handle to reap them.

## Validation

- `cargo check --manifest-path packages/tauri-app/src-tauri/Cargo.toml`
- `cargo build --release --manifest-path packages/tauri-app/src-tauri/Cargo.toml`
- Manual local test: orphaned processes are cleaned up after desktop shutdown
Full Changelog: v0.13.3...v0.14.0