Refactor launcher core, backend contracts, and chat runtime (#79)
📝 Walkthrough

Introduces Windows-native voice recognition; refactors chat streaming/cancellation and provider HTTP/payload/response handling; adds a full image-generation stack (local sd.cpp, ComfyUI, cloud) with UI; replaces gpu_layers with compute_mode across engine/runtime/UI; overhauls bootstrap/composition; adds per-view console logs; expands the integrations SDK/runtime and downloads selection; removes the license API; broad infra/docs/CI updates.

Changes

- AI Chat and Image Pipeline
- Engine Compute Mode and Settings
- Windows Voice Recognition
- Console Logs Per-View
- Integrations Runtime and SDK
- Downloads and Release Selection
- Core Assembly and Bootstrap Refactor
- Licensing Removal
- Infrastructure / Persistence / CI / Docs
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor UI
    participant FE as Frontend Bridge
    participant BE as Tauri Backend
    participant ENG as Local Engine
    participant CLOUD as OpenRouter
    UI->>FE: Generate image (prompt, params)
    FE->>BE: invoke generate_image(provider, model, params)
    alt Cloud provider
        BE->>CLOUD: POST /chat/completions
        CLOUD-->>BE: JSON with image URLs
    else Local sd.cpp
        BE->>ENG: POST /jobs (create)
        BE->>ENG: GET /jobs/{id} (poll)
        ENG-->>BE: JSON with images/base64
    else ComfyUI
        BE->>ENG: POST /prompt (queue)
        BE->>ENG: GET /history/{pid} (poll)
        ENG-->>BE: Image assets
    end
    BE-->>FE: Normalized images (data URLs or URLs)
    FE-->>UI: Render images and enable navigation
```
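The final normalization step in the diagram above is the one piece shared by all three branches: cloud URLs pass through, local base64 payloads become data URLs. A minimal TypeScript sketch of what that could look like (the `NormalizedImage` shape and `normalizeImage` helper are illustrative, not the PR's actual types):

```typescript
// Hypothetical normalized shape; the real PR types may differ.
interface NormalizedImage {
  url: string; // data URL for base64 payloads, pass-through for remote URLs
}

// Collapse the three provider outputs (cloud URL, sd.cpp base64,
// ComfyUI asset) into something the frontend can render directly.
function normalizeImage(raw: { url?: string; base64?: string; mime?: string }): NormalizedImage {
  if (raw.url !== undefined && raw.url !== '') {
    return { url: raw.url };
  }
  if (raw.base64 !== undefined && raw.base64 !== '') {
    return { url: `data:${raw.mime ?? 'image/png'};base64,${raw.base64}` };
  }
  throw new Error('image payload has neither url nor base64 data');
}
```

This keeps the frontend provider-agnostic: it renders whatever `url` it receives without knowing which backend produced it.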
```mermaid
sequenceDiagram
    autonumber
    actor User
    participant UI
    participant FE as Frontend Bridge
    participant BE as Tauri Backend (Voice)
    participant OS as Windows Speech
    User->>UI: Start voice input
    UI->>FE: invoke recognize_voice_once({language})
    FE->>BE: recognize_voice_once
    BE->>OS: SpeechRecognizer.RecognizeAsync()
    OS-->>BE: Result (text, status, confidence)
    BE-->>FE: {text, status, confidence}
    FE-->>UI: Deliver text or show toast/open settings
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~150 minutes

Possibly related PRs
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src/index.html (1)
5-8: 🛠️ Refactor suggestion | 🟠 Major

CSP currently allows `style-src 'unsafe-inline'` and `script-src ... data:` (security risk).

This makes the browser far more permissive than typical hardening: `'unsafe-inline'` enables inline style execution, and allowing `data:` in `script-src` can increase the exploit surface if any injection ever occurs.

🛠️ Suggested CSP tightening (directional)

```diff
 <meta http-equiv="Content-Security-Policy"
-  content="default-src 'self'; script-src 'self' data:; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; ..."
+  content="default-src 'self'; script-src 'self'; style-src 'self' https://fonts.googleapis.com; ..."
 " />
```

If you rely on inline styles/scripts, the stronger alternative is CSP nonces/hashes—but that requires coordinated changes in your app/runtime to attach nonces to inline content.
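For the nonce/hash alternative mentioned above, hash sources can be derived mechanically from the inline content. A sketch of computing a CSP `'sha256-...'` source in TypeScript under Node (the helper name and the sample inline script are placeholders, not taken from this project):

```typescript
import { createHash } from 'node:crypto';

// CSP 'sha256-...' source for an inline <script>/<style> body.
// The hash must cover the exact bytes between the tags, whitespace included.
function cspSha256Source(inlineBody: string): string {
  const digest = createHash('sha256').update(inlineBody, 'utf8').digest('base64');
  return `'sha256-${digest}'`;
}

// Usage shape: append the result to the relevant directive, e.g.
//   script-src 'self' 'sha256-<digest>'
const source = cspSha256Source("console.log('hi')");
```

Build tooling typically regenerates these hashes whenever the inline content changes, which is the coordination cost the comment alludes to.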
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.html` around lines 5 - 8, The Content-Security-Policy meta tag (the content attribute on the meta with http-equiv="Content-Security-Policy") is too permissive: remove "style-src 'unsafe-inline'" and the "data:" source from "script-src" to tighten CSP, and replace allowed inline/script usage with safer alternatives (CSP nonces or hashes) where inline styles/scripts are required; update the meta content string accordingly and ensure any inline styles/scripts in the app are migrated to external files or tagged with nonces/hashes referenced by the new policy.

src/features/ai/services/EngineStatusService.test.ts (1)
19-32: ⚠️ Potential issue | 🟡 Minor

Mock `tauriProvider.invoke()` in the default Tauri fixture.

`init()` now always calls `refreshFromBackend()`, so this shared mock exercises the error path on every test run because `invoke` is missing. That makes the happy-path tests pass with an invalid bridge shape and leaves the new backend-sync path unverified.

Proposed test-fixture fix

```diff
 core = {
   i18n: {
     t: vi.fn((_: string, fallback: string = ''): string => fallback),
   },
   tauriProvider: {
     isTauri: vi.fn().mockReturnValue(true),
+    invoke: vi.fn().mockResolvedValue('idle'),
     listen: vi
       .fn()
       .mockImplementation((event: string, cb: (payload: unknown) => void) => {
         listeners[event] = cb;
         return Promise.resolve(vi.fn());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/ai/services/EngineStatusService.test.ts` around lines 19 - 32, The shared test fixture's tauriProvider is missing a mock for invoke, causing init() (which now always calls refreshFromBackend()) to hit the error path in every test; update the tauriProvider mock in the EngineStatusService tests so it includes an invoke function (e.g., a vi.fn() that returns a resolved value or appropriate bridge shape) to emulate the backend-sync response used by refreshFromBackend(), keeping the existing listen and isTauri mocks and the EngineStatusContext shape.

src/features/chat/controllers/ChatGenerationController.ts (1)
247-307: ⚠️ Potential issue | 🟠 Major

Restore token-estimation fallback when backend usage is missing.

`usage` is optional, but Lines 286–307 now treat missing usage as `0` and skip context accounting. This undercounts assistant text tokens for providers that omit usage metadata.

Suggested fix

```diff
-    if (replyText !== '') {
-      this.handleTextReply(response, replyText, streamingHandle);
+    if (replyText !== '') {
+      await this.handleTextReply(response, replyText, streamingHandle);
       return;
     }

-  private handleTextReply(
+  private async handleTextReply(
     response: IChatResponse,
     replyText: string,
     streamingHandle?: StreamingMessageHandle | null,
-  ): void {
-    const tokens = this._resolveBackendCompletionTokens(response);
+  ): Promise<void> {
+    const backendTokens = this._resolveBackendCompletionTokens(response);
+    const tokens =
+      backendTokens > 0
+        ? backendTokens
+        : await this._options.estimateReplyTokens(replyText);
     if (tokens > 0) {
       this._options.addContextTokens(tokens);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/controllers/ChatGenerationController.ts` around lines 247 - 307, The backend may omit response.usage causing _resolveBackendCompletionTokens to return 0 and skip token accounting; update handleTextReply to fall back to estimating tokens from replyText when _resolveBackendCompletionTokens(response) returns 0 (and replyText is not empty). Specifically, after calling _resolveBackendCompletionTokens in handleTextReply, if tokens === 0 then call await this._options.estimateReplyTokens(replyText) (or synchronous if API is sync) and use that value for addContextTokens/finalize/appendAssistantMessage; keep _resolveBackendCompletionTokens unchanged or optionally extend it to accept replyText and perform the same fallback. Ensure you reference handleTextReply, _resolveBackendCompletionTokens, and this._options.estimateReplyTokens so the change is easy to locate.

src/features/chat/controllers/ChatSendController.ts (1)
155-157: ⚠️ Potential issue | 🟠 Major

Cancellation via exception can leave streaming/image UI orphaned.

When cancellation causes `sendMessage` to reject, the catch branch returns early without calling `streamingHandle.cancel()` / `imageHandle.cancel()`. That can leave pending UI visible.

💡 Suggested fix

```diff
 public async sendChat(input: HTMLTextAreaElement | null): Promise<boolean> {
   if (this._isDestroyed || this._options.isSending()) return false;
@@
-  try {
+  let streamingHandle: StreamingMessageHandle | null = null;
+  let imageHandle: ImageGenerationHandle | null = null;
+
+  try {
     const sendPlan = await this._sendFlow.prepare(text);
     if (this._wasDestroyed()) return false;
@@
-    let streamingHandle: StreamingMessageHandle | null = null;
-    let imageHandle: ImageGenerationHandle | null = null;
-
     const ensureStreamingHandle = (): StreamingMessageHandle => {
       streamingHandle ??= this._options.createStreamingHandle(typingId);
       return streamingHandle;
     };
@@
   } catch (error: unknown) {
     this._cleanupStreamingState(listenerId, typingId);
     if (this._isCancelRequested()) {
+      this._cancelStreamingHandle(streamingHandle);
+      imageHandle?.cancel();
       return false;
     }
     if (!this._wasDestroyed()) {
       this._options.handleError(error);
     }
     return false;
```

Also applies to: 201-205
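Beyond the diff above, the underlying pattern is "every handle created during the send must be cancellable from every exit path". A generic sketch of that idea, with illustrative names (`Cancellable`, `runWithCleanup`) that are not part of the PR:

```typescript
interface Cancellable {
  cancel(): void;
}

// Run an async action; if it throws (including on cancellation),
// cancel every handle that was registered along the way so no
// pending UI element is left behind.
async function runWithCleanup<T>(
  action: (track: (h: Cancellable) => Cancellable) => Promise<T>,
): Promise<T> {
  const handles: Cancellable[] = [];
  const track = (h: Cancellable): Cancellable => {
    handles.push(h);
    return h;
  };
  try {
    return await action(track);
  } catch (error) {
    for (const h of handles) h.cancel(); // cleanup on every failure path
    throw error;
  }
}
```

Wrapping handle creation in `track(...)` means late-created handles (a streaming bubble, an image placeholder) are cleaned up even when the rejection happens between their creation and the catch block.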
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/controllers/ChatSendController.ts` around lines 155 - 157, The catch branch in the send flow can return early and leave UI orphaned because streamingHandle and imageHandle aren’t cancelled; update the error handling to always call streamingHandle.cancel() and imageHandle.cancel() when those handles are non-null (preferably from a finally block or before any early return) so pending streaming/image UI is cleaned up; reference the local variables streamingHandle and imageHandle and the methods StreamingMessageHandle.cancel and ImageGenerationHandle.cancel in your change to ensure both handles are invoked on error or cancellation.

src/features/settings/ui/ModuleSettingsEngineFieldRowRenderer.ts (1)
102-108: ⚠️ Potential issue | 🟡 Minor

Guard `addFileBrowseButton()` with `HTMLInputElement` too.

This branch now unconditionally casts `engineInput`, while the earlier file setup path safely checks `instanceof HTMLInputElement`. A misconfigured file row will only fail once Tauri is enabled. Reuse the same guard here.

🛠️ Proposed fix

```diff
-    if (options.isFile === true && this._deps.isTauri()) {
+    if (
+      options.isFile === true &&
+      this._deps.isTauri() &&
+      engineInput instanceof HTMLInputElement
+    ) {
       this._deps.addFileBrowseButton(
         inputWrapper,
-        engineInput as HTMLInputElement,
+        engineInput,
         options.isImage === true,
         options.fileKind ?? 'model',
       );
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsEngineFieldRowRenderer.ts` around lines 102 - 108, The branch that calls this._deps.addFileBrowseButton is casting engineInput to HTMLInputElement without checking its runtime type; update the condition that currently checks options.isFile && this._deps.isTauri() to also verify engineInput is an HTMLInputElement (reuse the same instanceof guard used earlier) before calling addFileBrowseButton, and pass the safely-typed engineInput to addFileBrowseButton so the method is only invoked when engineInput is actually an HTMLInputElement (references: addFileBrowseButton, engineInput, options.isFile, _deps.isTauri, inputWrapper, ModuleSettingsEngineFieldRowRenderer).

src-tauri/src/domain/ai/streaming.rs (1)
827-899: ⚠️ Potential issue | 🟠 Major

Handle Ollama-style `message.content` and `done` here too.

This branch now normalizes Ollama token counters, but it still ignores the matching content and terminal fields. A chunk shaped like `{"message":{"content":"hi"},"done":true}` emits no text, never marks the terminal chunk, and can still fall into the incomplete-stream path later. Parse `message.content` alongside the other content shapes and treat top-level `done: true` as terminal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/streaming.rs` around lines 827 - 899, The parser currently ignores Ollama-style message.content and done: true shapes, so update the logic around extract_token_usage/state.final_usage and the content/delta handling (functions/variables: extract_token_usage, state.final_usage, state.saw_terminal_chunk, StreamChunkResult, extract_error_message, delta, extract_stream_text, sink.emit(StreamEvent::ThoughtChunk), message_id) to: treat a top-level json.get("done").and_then(as_bool) true the same way as the existing "stop" check by setting state.saw_terminal_chunk = true; and include json.get("message").and_then(|m| m.get("content")).and_then(extract_stream_text) in the content resolution chain (alongside delta.content, choice.text/content, json.content/response, and token.text) so Ollama message.content emits text like other shapes and prevents falling into the incomplete-stream path.
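The multi-shape parsing this comment asks for can be stated compactly: probe each known content location, and treat either a `finish_reason` of `"stop"` or a top-level `done: true` as terminal. A TypeScript sketch of the idea (the real implementation is the Rust code in streaming.rs; these names are illustrative):

```typescript
interface ChunkResult {
  text: string | null;
  terminal: boolean;
}

// Probe the content shapes emitted by OpenAI-style and Ollama-style streams.
function parseStreamChunk(json: any): ChunkResult {
  const choice = json?.choices?.[0];
  const text =
    choice?.delta?.content ??   // OpenAI chat streaming shape
    choice?.text ??             // OpenAI completions shape
    json?.message?.content ??   // Ollama chat shape
    json?.response ??           // Ollama generate shape
    null;
  const terminal = choice?.finish_reason === 'stop' || json?.done === true;
  return { text: typeof text === 'string' ? text : null, terminal };
}
```

Folding the Ollama locations into the same resolution chain is what prevents the `{"message":...,"done":true}` chunk from being treated as an incomplete stream.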
🟠 Major comments (23)
src-tauri/src/domain/modules/controller/lifecycle.rs-340-357 (1)
340-357: ⚠️ Potential issue | 🟠 Major

Don’t treat scan task failure as “no matches.”

At Lines 350–355, falling back to `Vec::new()` on scan task failure can cause duplicate starts (or missed cleanup) under failure conditions. This should be surfaced as an error path, at least for `start`.

Suggested direction

```diff
-    async fn find_matching_script_processes(&self, entry_path: &Path) -> Vec<usize> {
+    async fn find_matching_script_processes(
+        &self,
+        entry_path: &Path,
+    ) -> Result<Vec<usize>, AppError> {
         let module_path = self.module_path.to_path_buf();
         let entry_path = entry_path.to_path_buf();
         match tokio::task::spawn_blocking(move || {
             process::find_script_module_processes(&module_path, &entry_path)
         })
         .await
         {
-            Ok(pids) => pids,
+            Ok(pids) => Ok(pids),
             Err(error) => {
-                tracing::warn!(
+                tracing::error!(
                     "Failed to scan matching script module processes for {}: {error}",
                     self.module_id
                 );
-                Vec::new()
+                Err(AppError::Internal {
+                    request_id: None,
+                    message: format!("Process scan failed for {}: {error}", self.module_id),
+                })
             }
         }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/modules/controller/lifecycle.rs` around lines 340 - 357, The current find_matching_script_processes silently returns Vec::new() on spawn_blocking failure which hides scan errors and can cause duplicate starts; change find_matching_script_processes to return a Result<Vec<usize>, anyhow::Error> (or other crate error) instead of Vec<usize>, propagate the spawn_blocking error (include the join error and any inner error) back to callers, update callers (e.g. start path) to handle the Err case by surfacing/failing the start or performing explicit cleanup, and keep references to module_path, entry_path and process::find_script_module_processes when converting the match to a ?-style propagation so failures are not treated as “no matches.”

src/features/ai/services/EngineStatusService.ts-205-210 (1)
205-210: ⚠️ Potential issue | 🟠 Major

The DOM observer is wired to the entire page.

Observing `document.body` with `childList: true` and `subtree: true` means every unrelated DOM mutation — including chat-streaming updates — triggers `_applyActiveStatesToDom()` and page-wide `querySelectorAll()` scans. Scope this to the launcher/module containers, or filter mutations before re-querying.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/ai/services/EngineStatusService.ts` around lines 205 - 210, The DOMObserver is currently attached to document.body causing every DOM mutation to trigger _applyActiveStatesToDom() and expensive querySelectorAll() scans; change the observer to a narrower element (e.g. the launcher/module container node your code manages) instead of document.body, or add early-return filtering inside the observer callback to only handle mutations that touch attributes/data-app-id/data-current-module or specific launcher/module nodes before calling _applyActiveStatesToDom(); update the observe call (the one referencing this._domObserver.observe(document.body,...)) and the observer callback that invokes _applyActiveStatesToDom() accordingly to limit scope and avoid full-page scans.

src/features/ai/services/EngineStatusService.ts-146-157 (1)
146-157: ⚠️ Potential issue | 🟠 Major

Ignore late backend refreshes after teardown or context swaps.

`refreshFromBackend()` applies whatever `invoke()` returns even if `destroy()` ran or `setContext()` replaced the bridge while the promise was in flight. A late resolve can repopulate `_activeSlots` and re-stamp stale DOM after teardown.

Suggested guard

```diff
 public async refreshFromBackend(): Promise<void> {
-  if (this._context?.tauriProvider.isTauri() !== true) {
+  const context = this._context;
+  if (context?.tauriProvider.isTauri() !== true) {
     return;
   }
   try {
-    const state =
-      await this._context.tauriProvider.invoke<BackendEngineState>('get_engine_state');
+    const state = await context.tauriProvider.invoke<BackendEngineState>(
+      'get_engine_state',
+    );
+    if (!this._initialized || this._context !== context) {
+      return;
+    }
     this._applyBackendState(state);
   } catch (error) {
     this._tracer.error('[EngineStatusService] Failed to refresh engine state:', error);
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/ai/services/EngineStatusService.ts` around lines 146 - 157, refreshFromBackend currently applies backend state even if destroy() ran or setContext() swapped the bridge while invoke() was in flight; to fix, capture the current context/tauriProvider (e.g., const currentProvider = this._context?.tauriProvider) before calling invoke(), and after await verify the same provider and that the service is not destroyed (or check an isDestroyed flag set by destroy()); only call this._applyBackendState(state) if the provider/context matches and not destroyed, otherwise ignore the late response and optionally log via this._tracer that the response was discarded to avoid repopulating _activeSlots or stamping stale DOM.

src/features/chat/services/ChatSendFlow.ts-25-31 (1)
25-31: ⚠️ Potential issue | 🟠 Major

`tokenCount` now undercounts every text-only prompt.

Line 31 now returns `0` whenever the user sends plain text without attachments, because `combinedText` is no longer counted at all. That silently changes `PreparedChatSend.tokenCount` from “prompt token count” to “attachment token count”, which is risky for any limit checks or UI that still treats it as total tokens. Either keep a text estimate here or rename the field at the type boundary so callers cannot misinterpret it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/services/ChatSendFlow.ts` around lines 25 - 31, The returned tokenCount currently only sums attachments (attachmentTokens) and omits the prompt text, causing PreparedChatSend.tokenCount to undercount text-only prompts; update the return to include an estimate of tokens for combinedText (e.g., call the existing token-estimation helper such as estimateTokens/estimateTextTokens or implement a small helper if none exists) and add that value to attachmentTokens (or, alternatively, if you intentionally want attachment-only values, rename the exported field to attachmentTokenCount at the type boundary so callers cannot misinterpret tokenCount). Reference combinedText, attachments, attachmentTokens, and PreparedChatSend.tokenCount when making the change.

src-tauri/src/api/voice/mod.rs-76-82 (1)
76-82: ⚠️ Potential issue | 🟠 Major

Do not silently fall back to the default recognizer for requested languages.

Line 81 swallows every `recognizer_for_language()` failure and retries with the default recognizer. If the user explicitly asked for a language that Windows cannot create/use, they can get transcription in the OS default language with no signal back to the UI. The safe fallback is the existing `None` path on Lines 77-79; once a language was requested, this should surface an error instead of changing languages implicitly.

Suggested fix

```diff
 fn create_recognizer(language: Option<&str>) -> Result<SpeechRecognizer, AppError> {
     let Some(tag) = normalized_language_tag(language) else {
         return default_recognizer();
     };
-    recognizer_for_language(&tag).or_else(|_| default_recognizer())
+    recognizer_for_language(&tag)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/api/voice/mod.rs` around lines 76 - 82, The current create_recognizer silently falls back to default_recognizer when recognizer_for_language fails, hiding errors for explicitly requested languages; change the Some(tag) branch to return the Result from recognizer_for_language(&tag) directly (propagate its Err) instead of .or_else(|_| default_recognizer()), while keeping the existing normalized_language_tag None path that uses default_recognizer(); reference functions: create_recognizer, normalized_language_tag, recognizer_for_language, default_recognizer.

src-tauri/src/domain/engine/engine_args.rs-68-81 (1)
68-81: ⚠️ Potential issue | 🟠 Major

Respect explicit compute overrides in `extra_args`.

This now unconditionally adds `--device`/`-ngl` before `config.extra_args`, so migrated or manually-specified compute flags can produce conflicting CLI args. That makes the selected `compute_mode` ambiguous and can silently run on the wrong device.

Suggested fix

```diff
-fn push_llamacpp_compute_args(args: &mut Vec<String>, config: &EngineConfig) {
+fn push_llamacpp_compute_args(
+    args: &mut Vec<String>,
+    existing_args: &[String],
+    config: &EngineConfig,
+) {
+    if has_arg(existing_args, &["--device"]) || has_arg(existing_args, &["-ngl"]) {
+        return;
+    }
+
     match config.compute_mode {
         EngineComputeMode::Gpu => {
             args.push("-ngl".to_string());
             args.push("all".to_string());
         }
         EngineComputeMode::Cpu => {
             args.push("--device".to_string());
             args.push("none".to_string());
             args.push("-ngl".to_string());
             args.push("0".to_string());
         }
     }
 }
```

And at the call site:

```diff
-    push_llamacpp_compute_args(&mut args, config);
+    push_llamacpp_compute_args(&mut args, &config.extra_args, config);
```

Also applies to: 200-200
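The precedence rule behind this fix (explicit `extra_args` beat generated defaults) can be isolated into a small check: only append a generated flag when none of its spellings already appear. A TypeScript sketch of the same logic (the real code is Rust in engine_args.rs; `hasArg`/`pushComputeArgs` are illustrative names):

```typescript
// True when any spelling of a flag is already present in user-supplied args.
function hasArg(existing: string[], spellings: string[]): boolean {
  return existing.some((arg) => spellings.includes(arg));
}

// Append generated compute flags only when the user has not set them;
// an explicit override in extraArgs suppresses the defaults entirely.
function pushComputeArgs(args: string[], extraArgs: string[], mode: 'gpu' | 'cpu'): void {
  if (hasArg(extraArgs, ['--device']) || hasArg(extraArgs, ['-ngl'])) {
    return; // explicit override wins
  }
  if (mode === 'gpu') {
    args.push('-ngl', 'all');
  } else {
    args.push('--device', 'none', '-ngl', '0');
  }
}
```

Suppressing the whole generated group (rather than deduplicating flag by flag) keeps the final command line unambiguous: either all compute flags come from the user, or all come from `compute_mode`.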
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/engine_args.rs` around lines 68 - 81, push_llamacpp_compute_args currently unconditionally appends device-related flags which can be overridden or conflicted by user/migrated flags in config.extra_args; update push_llamacpp_compute_args to first scan config.extra_args for any explicit compute/device flags (e.g. "--device", "-ngl", "--gpu", "--cpu" or similar project-specific tokens) and only push the default EngineComputeMode-derived flags when none are present, and apply the same guard at the other affected location referenced in the review so explicit extra_args always take precedence over generated flags.

src/shared/shell/ui/ModuleCardRenderer.ts-105-105 (1)
105-105: ⚠️ Potential issue | 🟠 Major

Don't treat the entire AI category as installed.

`isAiCategory(category)` makes every AI card skip the download branch, so uninstalled local engines now render action buttons instead of download/install controls. It also breaks the existing download-progress path, because those cards no longer have a `.download-btn`. If only cloud/API entries should bypass installation, key this off explicit app metadata rather than the whole category.

Also applies to: 128-133
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shared/shell/ui/ModuleCardRenderer.ts` at line 105, The code in ModuleCardRenderer is incorrectly using isAiCategory(category) to skip the install/download branch (called from _resolveCardState and the render path around the `.download-btn` logic), causing all AI-category cards to behave as if installed; change the guard to test explicit app metadata (e.g., app.source, app.installationType, or a boolean like app.isCloud or app.metadata.isRemote) instead of isAiCategory(category), update _resolveCardState and the render branches that look for `.download-btn` to only bypass installation for apps that are marked as cloud/API in their own metadata, and keep local AI engine entries going through the normal download/install and download-progress flows.

src/features/settings/ui/ModuleSettingsUI.ts-453-454 (1)
453-454: ⚠️ Potential issue | 🟠 Major

Guard notifier exceptions so save flow cannot be interrupted.

On Line 453, notification is executed before autosave. If `onModuleSettingsChanged` throws on Line 463, Line 454 never runs, so the setting isn’t persisted.

Resilient callback handling

```diff
 private _notifyCurrentModuleSettingsChanged(): void {
   const currentModule = this._context.currentModule;
   if (currentModule === undefined) {
     return;
   }
-  this._deps.onModuleSettingsChanged(currentModule);
+  try {
+    this._deps.onModuleSettingsChanged(currentModule);
+  } catch (error) {
+    this._deps.tracer.warn(
+      `[ModuleSettingsUI] onModuleSettingsChanged failed: ${String(error)}`,
+    );
+  }
 }
```

Also applies to: 457-464
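The principle here (listener failures must not interrupt persistence) can be factored into a tiny helper. A sketch with illustrative names (`notifySafely`, `changeSetting`), not the PR's actual code:

```typescript
// Invoke a listener; log and swallow its failure so the caller's
// critical path (e.g. autosave) always continues.
function notifySafely(listener: () => void, warn: (message: string) => void): void {
  try {
    listener();
  } catch (error) {
    warn(`settings-change listener failed: ${String(error)}`);
  }
}

// Usage shape: notify first, then persist; a throwing listener
// no longer prevents the save call from running.
function changeSetting(save: () => void, listener: () => void, warn: (m: string) => void): void {
  notifySafely(listener, warn);
  save(); // reached even if the listener threw
}
```

The same wrapper works for any observer-style callback that sits upstream of a must-run operation.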
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsUI.ts` around lines 453 - 454, Wrap calls to the notifier so exceptions cannot prevent the autosave; specifically, in ModuleSettingsUI replace the direct call to this._notifyCurrentModuleSettingsChanged() before this._getAutosaveController().debouncedSave(key, value) with a try/catch that logs or swallows errors from _notifyCurrentModuleSettingsChanged (and any calls to onModuleSettingsChanged inside it) and then always calls this._getAutosaveController().debouncedSave(key, value); apply the same pattern to the similar block covering lines 457-464 so debouncedSave is invoked regardless of notifier errors.

src-tauri/src/domain/engine/config.rs-12-16 (1)
12-16: ⚠️ Potential issue | 🟠 Major

Don’t default every fresh engine config to GPU.

`build_default_engine_config` now forces `EngineComputeMode::Gpu` for every engine. Since launch args are derived from `compute_mode`, fresh installs on CPU-only or unsupported-GPU machines can fail on first start until the user manually flips the setting. This default needs to come from engine capabilities/hardware detection, or fall back to CPU.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/config.rs` around lines 12 - 16, build_default_engine_config currently unconditionally sets compute_mode to EngineComputeMode::Gpu, causing CPU-only machines to fail; change it to pick a safe default from engine metadata or hardware detection instead of forcing GPU. Update build_default_engine_config to read a preferred mode from EngineDefinition (e.g., a capability or default_compute_mode field on EngineDefinition) or call a helper like detect_compute_mode()/EngineCapabilities::preferred_mode() and fall back to EngineComputeMode::Cpu when no GPU support is indicated, then pass that into EngineConfig before calling normalize_engine_config. Ensure you reference the EngineDefinition and EngineConfig types and the EngineComputeMode enum when implementing this fallback.

src/infrastructure/tauri/TauriProvider.ts-198-198 (1)
198-198: ⚠️ Potential issue | 🟠 Major

Avoid logging raw clipboard contents in mock mode.

This logs user-copied text verbatim, which can include secrets/PII. Log metadata (e.g., length) instead of the payload.

Suggested fix

```diff
-    this._tracer.info(`[Mock Clipboard] Write: ${text}`);
+    this._tracer.info(`[Mock Clipboard] Write (${text.length} chars)`);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/infrastructure/tauri/TauriProvider.ts` at line 198, The current mock clipboard log in TauriProvider (the this._tracer.info call that logs `[Mock Clipboard] Write: ${text}`) exposes raw clipboard contents; change it to avoid printing the payload and instead log metadata such as the content length or a redacted preview. Update the logging in the mock clipboard write path inside the TauriProvider class to call this._tracer.info with a message like "[Mock Clipboard] Write: length=<N>" (or a fixed redacted preview) rather than the full text variable so PII/secrets are not recorded.

src/features/settings/ui/ModuleSettingsEngineRenderer.ts-159-162 (1)
159-162: ⚠️ Potential issue | 🟠 Major

Notify only after config persistence succeeds.

`notifySettingsChanged()` fires immediately, even if `setConfig` later fails. That can surface a false “settings changed/saved” state.

Suggested fix

```diff
 setConfig: (config) => {
-  void this._deps.engineConfigService.setConfig(config);
-  this._deps.notifySettingsChanged();
+  void this._deps.engineConfigService
+    .setConfig(config)
+    .then(() => {
+      this._deps.notifySettingsChanged();
+    })
+    .catch((error) => {
+      this._deps.tracer.error(
+        `[ModuleSettingsEngineRenderer] Failed to persist engine config: ${String(error)}`,
+      );
+    });
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsEngineRenderer.ts` around lines 159 - 162, setConfig currently calls this._deps.engineConfigService.setConfig(config) fire-and-forget and always calls this._deps.notifySettingsChanged(); change it to wait for persistence to succeed before notifying: either make setConfig async and await this._deps.engineConfigService.setConfig(config) (or return the promise chain) and only call this._deps.notifySettingsChanged() in the fulfilled path, and on rejection propagate or handle the error so notifySettingsChanged is not called on failure; update the references to setConfig, this._deps.engineConfigService.setConfig, and this._deps.notifySettingsChanged accordingly.

src-tauri/src/domain/ai/ai_dispatch.rs-243-248 (1)
243-248: ⚠️ Potential issue | 🟠 Major

Local `RequireRunning` path drops pre-merged session context.

In this branch, `messages_context` is rebuilt from `request.messages.clone()` instead of using merged session context. That can remove prior turns for session-based chats on already-running local engines.

💡 Suggested fix

```diff
 LocalEngineAccess::RequireRunning => {
     let status = active_local_engine_status(
         engine_manager,
         &request.provider,
         crate::domain::engine::types::Capability::Text,
     )
     .await?;
     let base_url = format!("{}/v1", status.endpoint);
     let effective_model =
         resolve_local_text_model_id(&request.model, None, &request.provider);
     tracing::info!(
         engine = %status.id,
         endpoint = %base_url,
         "Using already running local engine"
     );
+    let mut messages_context = request.messages.clone();
+    if let Some(session_id) = &request.session_id {
+        messages_context = sessions.merge_request_messages(session_id, &request.messages);
+    }
     LocalEngineResolution {
         base_url,
         effective_model,
-        messages_context: local_messages_with_system_prompt(
-            request.messages.clone(),
+        messages_context: local_messages_with_system_prompt(
+            messages_context,
             settings_service,
             &request.provider,
         )
         .await?,
     }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/ai_dispatch.rs` around lines 243 - 248, The RequireRunning branch rebuilds messages_context from request.messages.clone(), which drops previously merged session turns; change that call to pass the already-merged session context instead of request.messages.clone() by using the merged session messages variable used elsewhere in this function (replace request.messages.clone() passed into local_messages_with_system_prompt with the merged/combined session messages value), so messages_context is created from the preserved session context when calling local_messages_with_system_prompt.

src/shared/shell/ui/ToastManager.ts-9-13 (1)
9-13: ⚠️ Potential issue | 🟠 Major

Ignore nested link clicks before firing the toast action.

Actionable toasts still allow sanitized `<a>` content. With the click handler on the toast root, following a link will also invoke `onClick`, which can trigger a second unrelated action.

🛠️ Proposed fix

```diff
 export interface ToastElement extends HTMLElement {
   _timeout?: ReturnType<typeof setTimeout>;
   _removeTimeout?: ReturnType<typeof setTimeout>;
-  _actionHandler?: () => void;
+  _actionHandler?: (event: MouseEvent) => void;
 }
@@
-    toast._actionHandler = onClick;
+    toast._actionHandler = (event: MouseEvent) => {
+      const target = event.target;
+      if (target instanceof HTMLElement && target.closest('a')) {
+        return;
+      }
+      onClick();
+    };
     toast.classList.add('toast--actionable');
     toast.setAttribute('role', 'button');
     toast.setAttribute('tabindex', '0');
-    toast.addEventListener('click', onClick);
+    toast.addEventListener('click', toast._actionHandler);
     toast.addEventListener('keydown', this._handleActionKeydown);
```

Also applies to: 250-269
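The link guard itself reduces to a pure check over the target's ancestor chain, which keeps it testable without a DOM. A sketch using a minimal duck-typed node (`NodeLike` is illustrative, not the real DOM interface):

```typescript
// Minimal duck type for walking parents without a real DOM.
interface NodeLike {
  tagName: string;
  parent: NodeLike | null;
}

// True when the click target is an <a> or sits inside one —
// the equivalent of target.closest('a') in the browser.
function clickOriginatesFromLink(target: NodeLike | null): boolean {
  for (let node = target; node !== null; node = node.parent) {
    if (node.tagName.toUpperCase() === 'A') {
      return true;
    }
  }
  return false;
}
```

In the browser the walk collapses to `target.closest('a')`, but extracting the predicate makes the "nested content must not double-fire the root action" rule unit-testable.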
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shared/shell/ui/ToastManager.ts` around lines 9 - 13, The toast root click handler is firing the toast's _actionHandler even when a nested <a> (or other link-like element) is clicked; update the handler that references ToastElement._actionHandler so it first inspects the event target and returns early if the click originated inside a link-like element (e.g., if (event.target as HTMLElement).closest('a[href], [role="link"]') ) — do not call _actionHandler or prevent the link's default behavior in that case, otherwise proceed to invoke _actionHandler as before.

src/features/settings/ui/ModuleSettingsEngineSelectField.ts-161-168 (1)
161-168: ⚠️ Potential issue | 🟠 Major

Persist the inferred default option into the hidden input.

`syncDisplay()` now treats the first option as selected when `hiddenInput.value === ''`, but the actual form value stays empty. An untouched select can therefore render a valid choice while save/change logic still reads `''`.

🛠️ Proposed fix
```diff
 const syncDisplay = () => {
   const currentValue =
     hiddenInput.value === '' ? (options.options?.[0] ?? '') : hiddenInput.value;
+  if (hiddenInput.value !== currentValue) {
+    hiddenInput.value = currentValue;
+  }
   valueEl.textContent = options.optionLabels?.[currentValue] ?? currentValue;
   menu.querySelectorAll('.local-engine-select-option').forEach((node) => {
     if (node instanceof HTMLButtonElement) {
       node.classList.toggle('selected', node.dataset['value'] === currentValue);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsEngineSelectField.ts` around lines 161 - 168, syncDisplay currently treats the first option as the displayed selection when hiddenInput.value === '' but never updates the actual form state; modify syncDisplay so after computing currentValue (the inferred default from options.options?.[0]), if hiddenInput.value is empty assign hiddenInput.value = currentValue before updating valueEl and toggling option classes; reference the existing function name syncDisplay and the variables hiddenInput, currentValue, valueEl, options and menu when making the change.

src/styles/features/ai-module-settings.css-848-857 (1)
848-857: ⚠️ Potential issue | 🟠 Major

These `.local-engine-api-control` selectors currently flatten every row in the API section.

The first selector in each group already matches all label/input rows under `.local-engine-api-control`, so model-path and future fields in that container lose their labels/chrome too. These need to be narrowed to the intended compute/context/system-prompt rows.

Suggested narrowing
```diff
-.local-engine-api-control .local-engine-label-row,
 .local-engine-api-control .local-engine-field-row--compute-mode .local-engine-label-row,
 .local-engine-api-control .local-engine-field-row--context-size .local-engine-label-row,
 .local-engine-api-control .local-engine-field-row--llamacpp-system-prompt .local-engine-label-row {
   display: none;
 }

-.local-engine-api-control .local-engine-input-row,
+.local-engine-api-control .local-engine-field-row--compute-mode .local-engine-input-row,
 .local-engine-input-row:has(.local-engine-segmented-control) {
   padding: 0;
   gap: 0;
   background: transparent;
   border: none;
   border-radius: 0;
 }
```

Also applies to: 1460-1467
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/styles/features/ai-module-settings.css` around lines 848 - 857, The CSS currently hides all label rows under .local-engine-api-control because the broad selector .local-engine-api-control .local-engine-label-row is included; narrow the selectors to target only the intended rows by removing the broad selector and using the specific class combinations (.local-engine-api-control .local-engine-field-row--compute-mode .local-engine-label-row, .local-engine-api-control .local-engine-field-row--context-size .local-engine-label-row, .local-engine-api-control .local-engine-field-row--llamacpp-system-prompt .local-engine-label-row) and likewise restrict the .local-engine-field-row { gap: 0 } rule to only those specific field-row variants so other rows (e.g., model-path) keep their labels and chrome; apply the same fix to the duplicate block later in the file.

src/app/CoreBootstrapRunner.ts-22-29 (1)
22-29: ⚠️ Potential issue | 🟠 Major

Keep the early init steps and shortcut registration inside the guarded bootstrap path.

Right now an exception from `bootstrap.aiBridge.init()` or `args.registerGlobalShortcuts()` bypasses the emergency reveal / splash-hide logic, so the app can fail while still stuck behind the splash.

Proposed fix
```diff
 export async function runCoreBootstrap(args: RunCoreBootstrapArgs): Promise<CoreBootstrapResult> {
   const { bootstrap } = args;
   bootstrap.tracer.debug('[Core] Init sequence started.');
-  applyPlatformTheme();
-  bootstrap.bridge.init();
-  bootstrap.eventHandler.init();
-  await bootstrap.aiBridge.init();
   const safetyTimeout = createBootstrapSafetyRevealTimer({
     tracer: bootstrap.tracer,
     windowService: bootstrap.windowService,
     windowUI: bootstrap.windowUI,
   });
   try {
+    applyPlatformTheme();
+    bootstrap.bridge.init();
+    bootstrap.eventHandler.init();
+    await bootstrap.aiBridge.init();
+
     const bootstrapData = await fetchBootstrapData(bootstrap.tauriProvider, bootstrap.tracer);
     await hydrateCriticalServices({
       bootstrapData,
       templateLoader: bootstrap.templateLoader,
       stateStore: bootstrap.stateStore,
@@
     });
     await bootstrap.windowService.show();
     await initializeImmediateUi(args.immediateUi);
+    args.registerGlobalShortcuts();
+    bootstrap.tracer.debug('[Core] App Ready. Hiding splash...');
+    await waitForNextPaintCycle();
+    bootstrap.windowUI.hideSplashScreen();
   } catch (e) {
     bootstrap.tracer.error(`[Core] Critical bootstrap failure: ${String(e)}`);
     void bootstrap.windowService.show().catch(() => {
       /* ignore emergency reveal failure */
     });
     bootstrap.windowUI.hideSplashScreen();
     throw e;
   } finally {
     clearTimeout(safetyTimeout);
   }
-
-  args.registerGlobalShortcuts();
-
-  bootstrap.tracer.debug('[Core] App Ready. Hiding splash...');
-  await waitForNextPaintCycle();
-  bootstrap.windowUI.hideSplashScreen();
   return {
     currentPage: bootstrap.navigation.getCurrentPage() ?? null,
   };
 }
```

Also applies to: 68-72
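The ordering rule behind this fix can be sketched independently of the app's types. Everything here (the `runGuardedBootstrap` name, the `steps`/`onFailure` parameters) is illustrative, not the project's API; the point is that every fallible step runs inside the guard, so the failure path always gets a chance to reveal the window.

```typescript
// Illustrative guard: run bootstrap steps in order inside one try block so
// a throw from ANY step reaches the recovery callback (emergency reveal /
// splash hide) before the error is surfaced to the caller.
async function runGuardedBootstrap(
  steps: Array<() => void | Promise<void>>,
  onFailure: (error: unknown) => void,
): Promise<void> {
  try {
    for (const step of steps) {
      await step();
    }
  } catch (error) {
    onFailure(error); // always runs, no matter which step threw
    throw error; // still propagate the failure
  }
}
```

With this shape, moving a step out of `steps` and in front of the call is exactly the bug the review flags: that step's exceptions would bypass `onFailure`.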
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/CoreBootstrapRunner.ts` around lines 22 - 29, In runCoreBootstrap, the early init steps (applyPlatformTheme, bootstrap.bridge.init, bootstrap.eventHandler.init, await bootstrap.aiBridge.init) and args.registerGlobalShortcuts must be executed inside the guarded bootstrap path so exceptions cannot bypass the existing emergency reveal / splash-hide logic; move those calls into the try/catch (or wrap them with their own try that calls the same emergency reveal and splash-hide cleanup in its catch/finally) so any exception from bootstrap.aiBridge.init() or args.registerGlobalShortcuts() triggers the emergency reveal and splash-hide handling instead of leaving the app stuck behind the splash.

src/features/ai/services/AIChatTransport.ts-180-197 (1)
180-197: ⚠️ Potential issue | 🟠 Major

Clear the active request immediately after a successful user cancel.

`cancelActiveChatRequest()` leaves `_activeChatRequestId` set until the original `send()` finishes. A quick cancel→resend then goes through another stale-cancel round trip, and this explicit cancel path currently skips the timeout guard you already use for stale cancels.

Proposed fix
```diff
 public async cancelActiveChatRequest(): Promise<boolean> {
   if (this._context?.tauriProvider.isTauri() !== true) {
     return false;
   }
   const requestId = this._activeChatRequestId;
   if (requestId === null) {
     return false;
   }
   try {
-    return await this._context.tauriProvider.invoke<boolean>('cancel_chat_generation', {
-      requestId,
-    });
+    const cancelled = await this._runWithTimeout(
+      this._context.tauriProvider.invoke<boolean>('cancel_chat_generation', {
+        requestId,
+      }),
+      STALE_REQUEST_CANCEL_TIMEOUT_MS,
+      'AI request cancel timed out',
+    );
+    if (cancelled && this._activeChatRequestId === requestId) {
+      this._activeChatRequestId = null;
+    }
+    return cancelled;
   } catch (error: unknown) {
     this._tracer.error('[AIChatTransport] IPC cancel error:', error);
     return false;
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/ai/services/AIChatTransport.ts` around lines 180 - 197, The cancelActiveChatRequest method leaves _activeChatRequestId set until send() finishes; update cancelActiveChatRequest so that after a successful invoke to this._context.tauriProvider.invoke('cancel_chat_generation', { requestId }) you clear this._activeChatRequestId (set it to null) before returning true, ensuring the in-memory active request is cleared immediately on user cancel; keep the existing try/catch and error logging around invoke and only clear the id when the IPC call reports success.

src-tauri/src/api/ai/mod.rs-534-562 (1)
534-562: ⚠️ Potential issue | 🟠 Major

Keep cancellation IDs aligned with the active assistant message.

The normal stream protocol keys updates by assistant `message_id`, but this cancel branch emits `Done.message_id` and returns `ChatResponse.id` using `request_id`. If those identifiers differ, the frontend cannot reconcile the terminal event with the inflight assistant message and may leave it stuck in a generating state. Thread the real assistant message id into this branch instead of inventing one from the request id.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/api/ai/mod.rs` around lines 534 - 562, The cancellation branch in the tokio::select! uses request_id for the emitted Done.message_id and returned ChatResponse.id causing a mismatch with the assistant message id used by the normal stream protocol; update the cancellation branch in the select (the block around ai_service::process_chat_request and the `_ = cancellation => { ... }` arm) to use the actual assistant message id (the same message id produced/used by the successful stream path) instead of request_id when calling cancel_sink.emit(StreamEvent::Done { message_id: ... }) and when constructing the ChatResponse.id so the terminal event and the ChatResponse share the same assistant message identifier.

src/features/chat/ui/ChatInputContextMenu.ts-138-179 (1)
138-179: ⚠️ Potential issue | 🟠 Major

Prevent stale async opens from resurrecting the menu.

`_open()` fire-and-forgets `_openWithClipboardState()`. If clipboard access resolves after `close()`, `destroy()`, or a newer open, the stale task still appends a menu and re-registers document/window listeners. Add an open token/abort guard and bail out before mounting when the request is no longer current.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/ui/ChatInputContextMenu.ts` around lines 138 - 179, The async _openWithClipboardState can mount a stale menu after await; introduce a per-open token (e.g., an incrementing openId or AbortController stored on the instance) that _open sets before calling _openWithClipboardState and that close()/destroy() increments/aborts; inside _openWithClipboardState capture the current token before awaiting _readClipboardForMenu and immediately bail if the token has changed or was aborted (do not append the menu or add listeners), and also ensure close()/destroy() clears any pending token and cleans up _menu and registered listeners to prevent resurrection by a stale task. Use the instance symbols _open, _openWithClipboardState, _readClipboardForMenu, close/destroy, _menu, and listener bindings (_boundDocumentPointerDown, _boundKeyDown, _boundClose) when implementing the guard.

src/features/chat/services/VoiceInputService.ts-48-50 (1)
48-50: ⚠️ Potential issue | 🟠 Major

Gate voice support by capability, not just by Tauri runtime.

`isSupported()` now returns `true` for every Tauri desktop build. If the native recognizer is only available on a subset of hosts, this exposes a mic action that can only fail instead of staying disabled.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/services/VoiceInputService.ts` around lines 48 - 50, The isSupported() implementation incorrectly gates voice support only by _hostBridge.isTauri(); update it to check the actual host capability for native speech recognition instead (e.g., call a capability method on _hostBridge such as hasCapability('speechRecognition') or isNativeRecognizerAvailable()) and return true only when both the runtime and capability are present; modify the isSupported() method to use _hostBridge.hasCapability('speechRecognition') (or add such a method to HostBridge if missing) rather than relying solely on _hostBridge.isTauri().

src/features/chat/services/VoiceInputService.ts-85-93 (1)
85-93: ⚠️ Potential issue | 🟠 Major

`stop()` does not stop the native recognition request.

This only invalidates the frontend session. The in-flight `recognize_voice_once` call keeps running, so the user can start a second session while the first native recognizer is still active.
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/services/VoiceInputService.ts` around lines 85 - 93, stop() currently only advances _sessionId and updates state but does not cancel the in-flight native recognition (recognize_voice_once), allowing the native recognizer to keep running; add logic in stop() to cancel the active native recognition request (e.g. call/abort whatever controller or cancel method you created for recognize_voice_once, such as this._recognitionAbortController.abort() or this._nativeRecognition.cancel()), clear the controller reference, and ensure any promise handlers for recognize_voice_once check session id or handle AbortError so no results are processed after stop(); keep calls to _setState('stopping') and _finishSession('user') and ensure cancellation happens before/alongside _finishSession to prevent overlapping sessions.

src/features/chat/chat.ts-495-510 (1)
495-510: ⚠️ Potential issue | 🟠 Major

Regeneration drops non-text user input.
This path rebuilds the next request from textarea text only. If the last turn included files or other structured content, regenerate sends a different prompt than the original one.
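The turn-preserving shape this fix asks for can be sketched with small standalone types. The `UserTurn`/`Attachment` interfaces and the `buildRegenerationInput` name are hypothetical, not the project's types; the point is that regeneration restores the whole structured turn, not just its text.

```typescript
// Hypothetical structured turn: regeneration should restore all of this,
// not only `text`, before resending.
interface Attachment {
  name: string;
  mimeType: string;
}

interface UserTurn {
  text: string;
  attachments: Attachment[];
}

// Clone the last user turn so the restored input matches the original
// request and later edits to it cannot mutate chat history.
function buildRegenerationInput(lastTurn: UserTurn): UserTurn {
  return {
    text: lastTurn.text,
    attachments: lastTurn.attachments.map((a) => ({ ...a })),
  };
}
```

A text-only rebuild is equivalent to calling this with `attachments` stripped, which is exactly the divergence the review describes.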
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/chat.ts` around lines 495 - 510, Regeneration currently uses only text from _historyController.regenerateLastTurn and restores it with _inputCoordinator.restore, dropping non-text user inputs (files/structured data); update the flow so regenerateLastResponse reconstructs the full last user turn (text plus attachments/metadata) and restores that to the input coordinator before calling sendChat. Concretely: change or overload _historyController.regenerateLastTurn to return a structured Turn object (e.g., { text, attachments, metadata }) or add a new method like getLastUserTurn, then replace the call to _inputCoordinator.restore(text) with a restore method that accepts the full Turn (or use existing _inputCoordinator.restoreWithStructuredInput) so files and other structured content are preserved when sendChat is invoked.

src/app/CoreUiBridgeHelpers.ts-64-83 (1)
64-83: ⚠️ Potential issue | 🟠 Major

The timeout fallback still leaves backend token-count calls running.

`Promise.race` only returns the first-settled result; it does not cancel `tauriProvider.invoke(...)`. The `finally` block clears the timeout, but the underlying Tauri invocation continues executing. Under rapid typing or slow native tokenization, multiple stale `count_tokens` calls can accumulate in the backend even though the UI already fell back to the heuristic estimate.

Consider implementing cancellation via `AbortController` and passing an abort signal to the Tauri backend, or wrapping the invoke in a cooperative cancellation pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/CoreUiBridgeHelpers.ts` around lines 64 - 83, The backend token-count call started in createTokenEstimator can continue running after withTimeout returns, causing stale tauriProvider.invoke('count_tokens', ...) calls to accumulate; update createTokenEstimator to support cancellation by creating an AbortController per invocation, pass its signal into deps.tauriProvider.invoke (or an equivalent cancellation option supported by the tauriProvider), and ensure the controller.abort() is called when the timeout (TOKEN_ESTIMATE_TIMEOUT_MS) elapses so the in-flight invoke is aborted; keep the existing fallback to estimateTokenCount(text) on timeout or error and preserve the try/catch around withTimeout so errors still log via deps.tracer.warn.
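The suggested pattern can be sketched in isolation. The `estimateWithFallback` name and the `invokeCount`/`heuristic` parameters are this sketch's assumptions, and the abort is only cooperative: the real backend command would still have to observe the signal for work to actually stop.

```typescript
// Abort the backend call when the timeout wins, then fall back to a cheap
// heuristic. Unlike a bare Promise.race, the AbortSignal gives the callee
// a chance to stop doing work instead of running to completion unobserved.
async function estimateWithFallback(
  invokeCount: (signal: AbortSignal) => Promise<number>,
  heuristic: () => number,
  timeoutMs: number,
): Promise<number> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await invokeCount(controller.signal);
  } catch {
    return heuristic(); // aborted or failed: use the heuristic estimate
  } finally {
    clearTimeout(timer); // never leave a dangling timer on the fast path
  }
}
```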
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9726f8ed-503d-47a2-b0cd-5e363c3fb5bc
📒 Files selected for processing (136)
- src-tauri/Cargo.toml
- src-tauri/capabilities/default.json
- src-tauri/resources/config/local_modules.json
- src-tauri/resources/locales/en.json
- src-tauri/resources/locales/ru.json
- src-tauri/resources/locales/zh.json
- src-tauri/src/api/ai/mod.rs
- src-tauri/src/api/engine/mod.rs
- src-tauri/src/api/mod.rs
- src-tauri/src/api/voice/mod.rs
- src-tauri/src/domain/ai/ai_dispatch.rs
- src-tauri/src/domain/ai/ai_service.rs
- src-tauri/src/domain/ai/image_service.rs
- src-tauri/src/domain/ai/streaming.rs
- src-tauri/src/domain/engine/config.rs
- src-tauri/src/domain/engine/engine_args.rs
- src-tauri/src/domain/engine/manager.rs
- src-tauri/src/domain/engine/registry.rs
- src-tauri/src/domain/engine/types.rs
- src-tauri/src/domain/integration_api.rs
- src-tauri/src/domain/modules/controller/lifecycle.rs
- src-tauri/src/lib.rs
- src/app/CoreAssembly.ts
- src/app/CoreBootstrapRunner.test.ts
- src/app/CoreBootstrapRunner.ts
- src/app/CoreChatFactory.ts
- src/app/CoreComposition.ts
- src/app/CoreDeferredUiFactory.ts
- src/app/CoreLifecycleController.ts
- src/app/CoreRuntimeSupport.ts
- src/app/CoreStateRestore.test.ts
- src/app/CoreStateRestore.ts
- src/app/CoreUiBridgeHelpers.ts
- src/app/CoreUiFactory.ts
- src/app/LazyUiAdapters.ts
- src/app/init.ts
- src/features/ai/services/AIBridge.test.ts
- src/features/ai/services/AIBridge.ts
- src/features/ai/services/AIChatTransport.test.ts
- src/features/ai/services/AIChatTransport.ts
- src/features/ai/services/AIProviderManager.test.ts
- src/features/ai/services/EngineConfigService.test.ts
- src/features/ai/services/EngineConfigService.ts
- src/features/ai/services/EngineStatusService.test.ts
- src/features/ai/services/EngineStatusService.ts
- src/features/ai/types/IAIBridge.ts
- src/features/ai/types/aiTypes.ts
- src/features/chat/chat.test.ts
- src/features/chat/chat.ts
- src/features/chat/controllers/ChatGenerationController.test.ts
- src/features/chat/controllers/ChatGenerationController.ts
- src/features/chat/controllers/ChatHistoryController.test.ts
- src/features/chat/controllers/ChatHistoryController.ts
- src/features/chat/controllers/ChatSendController.test.ts
- src/features/chat/controllers/ChatSendController.ts
- src/features/chat/controllers/VoiceController.ts
- src/features/chat/services/ChatControllerFactory.ts
- src/features/chat/services/ChatSendFlow.ts
- src/features/chat/services/ChatService.ts
- src/features/chat/services/ChatUiStateHelper.ts
- src/features/chat/services/ChatViewHelper.ts
- src/features/chat/services/VoiceInputService.test.ts
- src/features/chat/services/VoiceInputService.ts
- src/features/chat/types/chatTypes.ts
- src/features/chat/ui/ChatInputContextMenu.test.ts
- src/features/chat/ui/ChatInputContextMenu.ts
- src/features/chat/ui/ChatMessageInteractionController.ts
- src/features/chat/ui/ChatStreamingMessage.ts
- src/features/chat/ui/ChatUI.test.ts
- src/features/chat/ui/ChatUI.ts
- src/features/chat/ui/ChatUiDom.ts
- src/features/home-overview/index.ts
- src/features/home-overview/ui/HomeOverviewPlaceholderUI.ts
- src/features/settings/ui/ModuleSettingsControllerFactory.ts
- src/features/settings/ui/ModuleSettingsEngineFieldCatalog.test.ts
- src/features/settings/ui/ModuleSettingsEngineFieldCatalog.ts
- src/features/settings/ui/ModuleSettingsEngineFieldRowRenderer.ts
- src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts
- src/features/settings/ui/ModuleSettingsEngineHtmlBuilder.ts
- src/features/settings/ui/ModuleSettingsEngineRenderFlow.ts
- src/features/settings/ui/ModuleSettingsEngineRenderer.test.ts
- src/features/settings/ui/ModuleSettingsEngineRenderer.ts
- src/features/settings/ui/ModuleSettingsEngineSelectField.ts
- src/features/settings/ui/ModuleSettingsUI.ts
- src/features/settings/ui/SettingsUI.test.ts
- src/index.html
- src/infrastructure/tauri/TauriProvider.test.ts
- src/infrastructure/tauri/TauriProvider.ts
- src/shared/services/ModulePlatformService.test.ts
- src/shared/services/ModulePlatformService.ts
- src/shared/shell/AppUI.test.ts
- src/shared/shell/AppUI.ts
- src/shared/shell/ui/AppUiCardActionFlow.test.ts
- src/shared/shell/ui/AppUiCardActionFlow.ts
- src/shared/shell/ui/AppUiModuleFlow.test.ts
- src/shared/shell/ui/AppUiSelectionFlow.test.ts
- src/shared/shell/ui/AppUiSelectionFlow.ts
- src/shared/shell/ui/ModuleCardRenderer.test.ts
- src/shared/shell/ui/ModuleCardRenderer.ts
- src/shared/shell/ui/ToastManager.test.ts
- src/shared/shell/ui/ToastManager.ts
- src/shared/types/bindings.ts
- src/shared/utils/moduleCategoryPolicy.test.ts
- src/shared/utils/moduleCategoryPolicy.ts
- src/styles/app.css
- src/styles/base/design-tokens.css
- src/styles/base/document-reset.css
- src/styles/base/inter-font-faces.css
- src/styles/base/motion.css
- src/styles/base/scrollbars.css
- src/styles/base/startup-critical.css
- src/styles/components/action-buttons.css
- src/styles/components/form-controls.css
- src/styles/components/shared-surfaces.css
- src/styles/components/status.css
- src/styles/components/svg-icons.css
- src/styles/features/ai-module-settings.css
- src/styles/features/chat-page.css
- src/styles/features/console-page.css
- src/styles/features/downloads-page.css
- src/styles/features/feature-style-template.css
- src/styles/features/home-page-and-module-cards.css
- src/styles/features/module-selection-modal.css
- src/styles/features/modules-dashboard.css
- src/styles/features/modules-page-standalone.css
- src/styles/features/preferences-panel-placeholder.css
- src/styles/features/release-check-placeholder.css
- src/styles/features/settings-page.css
- src/styles/features/system-monitor.css
- src/styles/layouts/app-header.css
- src/styles/layouts/app-main.css
- src/styles/layouts/app-sidebar.css
- src/styles/layouts/modal-shell.css
- src/styles/layouts/splash-screen.css
- src/styles/layouts/toast-notifications.css
- src/styles/main.css
💤 Files with no reviewable changes (23)
- src-tauri/resources/config/local_modules.json
- src-tauri/src/domain/ai/image_service.rs
- src/shared/utils/moduleCategoryPolicy.ts
- src/styles/features/preferences-panel-placeholder.css
- src/styles/features/feature-style-template.css
- src/features/home-overview/index.ts
- src/styles/components/status.css
- src/styles/features/release-check-placeholder.css
- src/styles/features/downloads-page.css
- src/features/home-overview/ui/HomeOverviewPlaceholderUI.ts
- src/styles/components/shared-surfaces.css
- src/styles/main.css
- src/app/CoreStateRestore.ts
- src/styles/features/console-page.css
- src/styles/features/modules-page-standalone.css
- src/styles/layouts/modal-shell.css
- src/styles/features/system-monitor.css
- src-tauri/src/domain/engine/registry.rs
- src/styles/components/action-buttons.css
- src/shared/utils/moduleCategoryPolicy.test.ts
- src/styles/components/form-controls.css
- src/styles/features/modules-dashboard.css
- src/styles/features/module-selection-modal.css
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/features/chat/services/VoiceInputService.ts (1)
70-87: ⚠️ Potential issue | 🟠 Major

Logical cancellation still leaves the service blocked.

`stop()` only invalidates the session id; `_nativeRecognitionActive` stays `true` until the pending `invoke()` settles. That means `start()` can keep returning `false` after a stop if the native call is slow or hangs, so the new cancel UX is not actually immediate.

Also applies to: 93-137
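The invalidate-and-release pattern needed here can be shown as a small standalone class. The `RecognitionSessionGuard` name and its shape are this sketch's inventions; the real service tracks `_sessionId` and `_nativeRecognitionActive`, but the mechanics are the same: `stop()` bumps an epoch so any pending native result is dropped, and clears the active flag immediately so the next `start()` is not blocked by a slow or hung recognizer.

```typescript
// Epoch-based session guard: a stale native result is identified by
// comparing the epoch captured at start() against the current epoch.
class RecognitionSessionGuard {
  private _epoch = 0;
  private _active = false;

  start(recognize: () => Promise<string>, onResult: (text: string) => void): boolean {
    if (this._active) {
      return false; // a session is already running
    }
    this._active = true;
    const epoch = this._epoch;
    void recognize()
      .then((text) => {
        if (epoch === this._epoch) {
          onResult(text); // only deliver results for the current epoch
        }
      })
      .catch(() => {
        /* stale or failed recognition: ignore */
      })
      .finally(() => {
        if (epoch === this._epoch) {
          this._active = false;
        }
      });
    return true;
  }

  stop(): void {
    this._epoch += 1; // invalidate any in-flight native result
    this._active = false; // unblock the next start() right away
  }
}
```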
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/services/VoiceInputService.ts` around lines 70 - 87, The bug is that stop() only invalidates _sessionId while _nativeRecognitionActive remains true until the native invoke settles, blocking subsequent start() calls; update stop(), start(), and _recognize() so cancellation is immediate: set _nativeRecognitionActive = true in start() before calling _recognize(sessionId,...), ensure stop() sets _nativeRecognitionActive = false (and optionally signals an AbortController if you add one) so the native call is considered cancelled, and make _recognize(sessionId, ...) check the current _sessionId at key points and return early if mismatched, plus set _nativeRecognitionActive = false in all completion paths (finally) to avoid leaving the flag stuck; reference methods/fields: start, stop, _recognize, _sessionId, and _nativeRecognitionActive.

src-tauri/src/domain/engine/engine_args.rs (1)
107-143: ⚠️ Potential issue | 🟠 Major

`build_llamacpp_args` drops user `extra_args` entirely.

This function checks `config.extra_args` to suppress defaults, but never appends them to the final argv, so user-provided llama.cpp flags are silently ignored at runtime.

💡 Proposed fix
```diff
 pub(super) fn build_llamacpp_args(config: &EngineConfig, port: u16) -> Vec<String> {
@@
     if is_qwen_model(config.model_path.as_deref()) {
         push_arg_if_missing(&mut args, &config.extra_args, &["--jinja"], None);
         push_arg_if_missing(
             &mut args,
             &config.extra_args,
             &["--reasoning-format"],
             Some("deepseek"),
         );
         push_arg_if_missing(&mut args, &config.extra_args, &["--no-context-shift"], None);
         push_arg_if_missing(&mut args, &config.extra_args, &["--flash-attn"], Some("on"));
     }
+    args.extend(config.extra_args.clone());
     args
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/engine_args.rs` around lines 107 - 143, build_llamacpp_args currently inspects config.extra_args only to suppress defaults but never appends the user's extra args to the returned Vec, so user-provided flags are dropped; update build_llamacpp_args to append entries from config.extra_args (after you finish adding defaults and calling push_llamacpp_compute_args and all push_arg_if_missing calls) into args, taking care to avoid adding duplicates of flags already inserted by push_arg_if_missing (you can check existing args before pushing). Reference: function build_llamacpp_args and the helper push_arg_if_missing to locate where to append the extra args.
🧹 Nitpick comments (2)
src/features/chat/chat.ts (1)
511-511: Prefer passing live sending state instead of a hardcoded `false`.

This keeps the contract with `ChatHistoryController.regenerateLastTurn(...)` explicit and future-proof.

♻️ Proposed tweak
```diff
-    const text = await this._historyController.regenerateLastTurn(false);
+    const text = await this._historyController.regenerateLastTurn(this._state.isSending);
```
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/chat.ts` at line 511, Replace the hardcoded false when calling ChatHistoryController.regenerateLastTurn by passing the component's live sending state; locate the call to this._historyController.regenerateLastTurn(false) and change the argument to the appropriate runtime flag (for example this._isSending, this.sending, or this.isSending()) so the method receives the actual current sending state rather than a fixed false.

src/styles/features/ai-module-settings.css (1)
1504-1518: Section-specific input overrides use `!important`.

The context and system-prompt sections override input styles with `!important`. While functional, this can make future style modifications difficult. Consider increasing selector specificity instead if maintainability becomes a concern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/styles/features/ai-module-settings.css` around lines 1504 - 1518, The CSS uses !important in the rules for .local-engine-section--context .local-engine-input and .local-engine-section--system-prompt .local-engine-input--textarea; remove the !important tokens and instead increase selector specificity or add a dedicated modifier class (e.g., .local-engine-section--context .local-engine-input.specific or .local-engine-section--system-prompt .local-engine-input--textarea--override) so these rules win without !important; update the selectors for background and border declarations accordingly and ensure any competing base rules (e.g., .local-engine-input or .local-engine-input--textarea) are not more specific than the new selectors.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/resources/locales/en.json`:
- Around line 297-299: The memory-error help text still uses the phrase "GPU
layers" which is inconsistent with the new compute-mode labels; locate the
relevant localization key (e.g., the JSON entry that mentions "GPU layers") and
update its value to use the current terminology such as "GPU" or "Compute
Device" (for example: replace "GPU layers" with "GPU (Compute Device)" or "GPU
compute mode") so wording aligns with keys like
"ui.settings.engine.compute_mode", "ui.settings.engine.compute_mode_cpu", and
"ui.settings.engine.compute_mode_gpu".
In `@src-tauri/resources/locales/zh.json`:
- Around line 294-296: The translation still uses the legacy “GPU layers”
wording; update the localized strings to match the compute-mode UI by replacing
any occurrences of legacy phrases (e.g. “GPU 层”, “GPU layers”) with the new
wording used by the compute-mode keys such as "ui.settings.engine.compute_mode",
"ui.settings.engine.compute_mode_cpu", and
"ui.settings.engine.compute_mode_gpu"; also locate the memory error copy string
(search translations for “memory”, “out of memory”, or any key like
"ui.errors.memory") and change its instruction to reference the compute-mode
control wording (e.g. 指向 “计算设备”/“GPU”/“CPU” as appropriate) so users are not
directed to the removed “GPU layers” control.
In `@src/features/chat/chat.ts`:
- Around line 366-375: cancelTextGeneration reads the live activeProviderId
which can change mid-flight, so cancellation can target the wrong backend; fix
by recording the providerId when a generation is started (e.g., set
this._currentGenerationProviderId in the start/init method that launches
text/image jobs) and have cancelTextGeneration use that stored id (falling back
to this._aiBridge.getState().activeProviderId only if no stored id). Update uses
of this._generationController.isImageProvider(...) and the calls to
this._aiBridge.cancelImageGeneration()/cancelTextGeneration() to pass or consult
this captured provider id, and clear the stored id when the job completes or is
cancelled.
In `@src/features/chat/services/VoiceInputService.ts`:
- Around line 49-57: The isSupported() method in VoiceInputService is currently
optimistic when the bridge lacks hasCapability and returns true; change it to
fail closed by returning false when hasCapability is missing. Update
VoiceInputService.isSupported to invoke this._hostBridge.isTauri() as before,
then use the optional hasCapability on the capabilityBridge and default to false
(not true) when hasCapability is undefined so the service won't attempt
recognize_voice_once against bridges that don't advertise speechRecognition.
- Around line 121-135: The try block that calls the native bridge also calls the
consumer callback onResult(text) so exceptions from onResult are being treated
as native recognition errors; change this by isolating the consumer call: after
computing text = response.text.trim() and after the sessionId check but before
calling this._finishSession('ended'), invoke onResult(text) inside its own small
try/catch (e.g., try { onResult(text) } catch (err) {
this._tracer.warn(`[VoiceInputService] onResult handler threw: ${err}`); } ) so
consumer exceptions are logged and swallowed without converting into a
recognition error or calling this._onError/_finishSession with an error code;
leave the existing native-recognition catch block to only handle errors from the
bridge.
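The callback-isolation idea above can be reduced to a small helper. This is a hedged sketch: `deliverResult` and the `warn` parameter are illustrative, standing in for the `onResult` call site and `this._tracer.warn` in the real service.

```typescript
// Sketch: run the consumer's onResult callback in its own try/catch so an
// exception thrown by the consumer is logged and swallowed instead of being
// misreported as a native recognition failure. Names are illustrative.
function deliverResult(
  text: string,
  onResult: (text: string) => void,
  warn: (msg: string) => void,
): void {
  try {
    onResult(text);
  } catch (err) {
    // A consumer bug: log it, but do not convert it into a recognition
    // error or tear the session down with an error code.
    warn(`[VoiceInputService] onResult handler threw: ${String(err)}`);
  }
}
```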
In `@src/features/chat/ui/ChatInputContextMenu.ts`:
- Around line 23-49: The CHAT_INPUT_CONTEXT_MENU_ITEMS array uses Russian
fallback strings; update the fallback values for the items with actions 'cut',
'copy', 'paste', and 'selectAll' to English (e.g., 'Cut', 'Copy', 'Paste',
'Select all') so the defaults match the app's English locale and other
components.
In `@src/features/settings/ui/ModuleSettingsEngineRenderer.ts`:
- Around line 159-162: The setConfig handler currently calls
this._deps.notifySettingsChanged() unconditionally after invoking
this._deps.engineConfigService.setConfig(config); change setConfig to await the
persistence call (make the setter async or return the promise) and only call
this._deps.notifySettingsChanged() after the await resolves successfully; if the
persistence promise rejects, catch/log or propagate the error instead of firing
notifySettingsChanged so notifications only occur on successful saves (refer to
setConfig and this._deps.engineConfigService.setConfig).
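The await-then-notify ordering can be sketched as a free function; the `persist`/`notify` parameters are hypothetical stand-ins for `engineConfigService.setConfig` and `notifySettingsChanged`, assuming the service rejects on persistence failure.

```typescript
// Sketch: only notify listeners after the persistence promise resolves.
// A rejection propagates to the caller and the notification never fires.
async function saveConfig(
  persist: (config: unknown) => Promise<void>,
  notify: () => void,
  config: unknown,
): Promise<void> {
  await persist(config); // rejection short-circuits here
  notify();              // reached only on successful save
}
```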
---
Outside diff comments:
In `@src-tauri/src/domain/engine/engine_args.rs`:
- Around line 107-143: build_llamacpp_args currently inspects config.extra_args
only to suppress defaults but never appends the user's extra args to the
returned Vec, so user-provided flags are dropped; update build_llamacpp_args to
append entries from config.extra_args (after you finish adding defaults and
calling push_llamacpp_compute_args and all push_arg_if_missing calls) into args,
taking care to avoid adding duplicates of flags already inserted by
push_arg_if_missing (you can check existing args before pushing). Reference:
function build_llamacpp_args and the helper push_arg_if_missing to locate where
to append the extra args.
In `@src/features/chat/services/VoiceInputService.ts`:
- Around line 70-87: The bug is that stop() only invalidates _sessionId while
_nativeRecognitionActive remains true until the native invoke settles, blocking
subsequent start() calls; update stop(), start(), and _recognize() so
cancellation is immediate: set _nativeRecognitionActive = true in start() before
calling _recognize(sessionId,...), ensure stop() sets _nativeRecognitionActive =
false (and optionally signals an AbortController if you add one) so the native
call is considered cancelled, and make _recognize(sessionId, ...) check the
current _sessionId at key points and return early if mismatched, plus set
_nativeRecognitionActive = false in all completion paths (finally) to avoid
leaving the flag stuck; reference methods/fields: start, stop, _recognize,
_sessionId, and _nativeRecognitionActive.
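The session-invalidation pattern above can be sketched with an epoch counter. This is a simplified model, not the real service: `RecognitionSession` is a hypothetical class showing how `stop()` can take effect immediately while a stale native call is still in flight.

```typescript
// Sketch: each start() bumps a session id; stop() bumps it again and clears
// the busy flag immediately, so a late result from the superseded native
// call is recognized as stale and discarded.
class RecognitionSession {
  private _sessionId = 0;
  private _active = false;

  get active(): boolean {
    return this._active;
  }

  start(): number {
    this._sessionId += 1;
    this._active = true;
    return this._sessionId; // caller threads this id through the native call
  }

  stop(): void {
    this._sessionId += 1; // invalidate the in-flight session...
    this._active = false; // ...and unblock the next start() right away
  }

  /** Deliver a native completion; returns false when the session is stale. */
  complete(sessionId: number): boolean {
    const current = sessionId === this._sessionId;
    if (current) {
      this._active = false; // always clear the flag on the live session
    }
    return current;
  }
}
```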
---
Nitpick comments:
In `@src/features/chat/chat.ts`:
- Line 511: Replace the hardcoded false when calling
ChatHistoryController.regenerateLastTurn by passing the component's live sending
state; locate the call to this._historyController.regenerateLastTurn(false) and
change the argument to the appropriate runtime flag (for example
this._isSending, this.sending, or this.isSending()) so the method receives the
actual current sending state rather than a fixed false.
In `@src/styles/features/ai-module-settings.css`:
- Around line 1504-1518: The CSS uses !important in the rules for
.local-engine-section--context .local-engine-input and
.local-engine-section--system-prompt .local-engine-input--textarea; remove the
!important tokens and instead increase selector specificity or add a dedicated
modifier class (e.g., .local-engine-section--context
.local-engine-input.specific or .local-engine-section--system-prompt
.local-engine-input--textarea--override) so these rules win without !important;
update the selectors for background and border declarations accordingly and
ensure any competing base rules (e.g., .local-engine-input or
.local-engine-input--textarea) are not more specific than the new selectors.
🪄 Autofix (Beta)
✅ Autofix completed
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5ed98065-5504-4289-9a75-f3d4aa109656
📒 Files selected for processing (28)
- src-tauri/resources/locales/en.json
- src-tauri/resources/locales/ru.json
- src-tauri/resources/locales/zh.json
- src-tauri/src/domain/ai/ai_service.rs
- src-tauri/src/domain/engine/config.rs
- src-tauri/src/domain/engine/engine_args.rs
- src-tauri/src/domain/engine/manager.rs
- src-tauri/src/domain/engine/types.rs
- src/app/CoreUiBridgeHelpers.ts
- src/features/ai/services/AIChatTransport.ts
- src/features/chat/chat.ts
- src/features/chat/controllers/ChatHistoryController.ts
- src/features/chat/services/VoiceInputService.ts
- src/features/chat/ui/ChatInputContextMenu.ts
- src/features/settings/ui/ModuleSettingsEngineFieldCatalog.test.ts
- src/features/settings/ui/ModuleSettingsEngineFieldCatalog.ts
- src/features/settings/ui/ModuleSettingsEngineFieldController.ts
- src/features/settings/ui/ModuleSettingsEngineFieldRowRenderer.ts
- src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts
- src/features/settings/ui/ModuleSettingsEngineHtmlBuilder.ts
- src/features/settings/ui/ModuleSettingsEngineInfoPopover.ts
- src/features/settings/ui/ModuleSettingsEngineRenderFlow.ts
- src/features/settings/ui/ModuleSettingsEngineRenderer.ts
- src/infrastructure/tauri/TauriProvider.ts
- src/scripts/check-size.js
- src/shared/types/bindings.ts
- src/styles/features/ai-module-settings.css
- src/vite.config.ts
🚧 Files skipped from review as they are similar to previous changes (11)
- src/features/chat/controllers/ChatHistoryController.ts
- src/features/settings/ui/ModuleSettingsEngineFieldCatalog.test.ts
- src/infrastructure/tauri/TauriProvider.ts
- src-tauri/src/domain/engine/types.rs
- src-tauri/resources/locales/ru.json
- src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts
- src/shared/types/bindings.ts
- src/app/CoreUiBridgeHelpers.ts
- src/features/ai/services/AIChatTransport.ts
- src-tauri/src/domain/ai/ai_service.rs
- src-tauri/src/domain/engine/manager.rs
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it. An unexpected error occurred while generating fixes: 13 INTERNAL: Received RST_STREAM with code 2 (Internal server error)
Note: Autofix is a beta feature. Expect some limitations and changes as we gather feedback and continue to improve it. Fixes applied successfully: fixed 9 file(s) based on 7 unresolved review comments.
Actionable comments posted: 3
♻️ Duplicate comments (2)
src/features/chat/chat.ts (1)
366-391: ⚠️ Potential issue | 🟠 Major: Keep the captured provider until streaming cleanup finishes.
`currentGenerationProviderId` is cleared as soon as `isSending` flips to `false`, but `ChatSendController.cancelActiveSend()` still calls this callback while streaming state exists (src/features/chat/controllers/ChatSendController.ts:114-121). In that window, line 367 falls back to the live `activeProviderId`, so switching providers can still cancel the wrong backend. Clear the stored provider only after the active stream/image job has fully cleaned up.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/chat.ts` around lines 366 - 391, The code clears this._state.currentGenerationProviderId inside the setSending setter when value becomes false, but cancelActiveSend in ChatSendController may still rely on the captured provider while streaming cleanup runs; remove the immediate clearing from setSending (leave currentGenerationProviderId set when flipping to false) and instead add an explicit call to clear the stored provider from the cleanup path in ChatSendController (or from the stream/image-job completion code) after all streaming/image cleanup finishes (refer to currentGenerationProviderId, setSending, and ChatSendController.cancelActiveSend to locate where to change and where to call the new clear operation).
src/features/settings/ui/ModuleSettingsEngineRenderer.ts (1)
159-166: ⚠️ Potential issue | 🟠 Major: `notifySettingsChanged()` can still fire after a failed save.
This wrapper only works if `engineConfigService.setConfig()` rejects on persistence errors. The current `EngineConfigService.setConfig()` implementation catches Tauri invoke failures and resolves, so this block still calls `notifySettingsChanged()` on failed saves; downstream, `ModuleSettingsEngineFieldController.handleSave()` will also show a success indicator for that failed write. Please make the service rethrow/return an explicit success result and gate notification on that outcome.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsEngineRenderer.ts` around lines 159 - 166, The current setConfig wrapper calls notifySettingsChanged() regardless of whether persistence actually succeeded; update the flow so notifications only fire on confirmed success: modify EngineConfigService.setConfig to either rethrow errors from the Tauri invoke or return an explicit success boolean/result, then in ModuleSettingsEngineRenderer.setConfig (the async setConfig lambda that calls this._deps.engineConfigService.setConfig) check that returned success before calling this._deps.notifySettingsChanged(); ensure errors are propagated (or non-success returned) so ModuleSettingsEngineFieldController.handleSave doesn’t show a success indicator for failed writes.
🧹 Nitpick comments (3)
src/features/chat/services/VoiceInputService.ts (1)
50-52: Prefer typing capability support on `IBridge` over local casting.
The ad-hoc cast weakens type guarantees. Consider adding an optional `hasCapability` to `IBridge` and removing this per-call structural cast.
💡 Proposed direction

```diff
- const capabilityBridge = this._hostBridge as IBridge & {
-   hasCapability?: (capability: string) => boolean;
- };
  if (!this._hostBridge.isTauri()) {
    return false;
  }
- return capabilityBridge.hasCapability?.('speechRecognition') ?? false;
+ return this._hostBridge.hasCapability?.('speechRecognition') ?? false;
```

```typescript
// src/shared/types/IBridge.ts
export interface IBridge {
  // ...
  hasCapability?(capability: string): boolean;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/services/VoiceInputService.ts` around lines 50 - 52, The code currently does a local structural cast to add hasCapability on this._hostBridge (const capabilityBridge = this._hostBridge as IBridge & { hasCapability?: (capability: string) => boolean; }), which weakens types; instead add an optional hasCapability signature to the IBridge interface and remove the inline cast and the capabilityBridge variable usage—update the IBridge declaration (add hasCapability?(capability: string): boolean) and replace uses of capabilityBridge/_hostBridge in VoiceInputService (and any other callers) to rely on the typed _hostBridge.hasCapability optional method directly.
src/features/settings/ui/ModuleSettingsEngineRenderer.test.ts (1)
420-449: Strengthen this test to prove `handleSave()` really awaits persistence.
`setConfig` is a plain synchronous mock here, so this still passes if `handleSave()` regresses to fire-and-forget and calls `showSaveIndicator()` too early. Use a deferred promise and assert the indicator stays off until that promise resolves.
🧪 Suggested test shape

```diff
- const setConfig = vi.fn();
+ let resolveSave: (() => void) | undefined;
+ const setConfig = vi.fn(
+   () =>
+     new Promise<void>((resolve) => {
+       resolveSave = resolve;
+     }),
+ );
  const showSaveIndicator = vi.fn();
  const fieldController = new ModuleSettingsEngineFieldController({
    getSettings: () => ({}),
    setConfig,
    debouncedSave: vi.fn(),
@@
- await fieldController.handleSave(engineInput, {
+ const savePromise = fieldController.handleSave(engineInput, {
    key: 'extra_args',
    type: 'text',
    isEngineConfig: true,
    config: config as never,
  });

  expect(config.extra_args).toEqual(['--ctx', '4096']);
  expect(setConfig).toHaveBeenCalledWith(config);
+ expect(showSaveIndicator).not.toHaveBeenCalled();
+
+ resolveSave?.();
+ await savePromise;
  expect(showSaveIndicator).toHaveBeenCalled();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsEngineRenderer.test.ts` around lines 420 - 449, Update the test so it proves handleSave awaits persistence: replace the synchronous setConfig mock with a deferred Promise (e.g. a Promise with its resolve exposed) and have setConfig return that promise; call fieldController.handleSave(...) and assert showSaveIndicator has NOT been called immediately, then resolve the deferred promise and await completion, finally assert setConfig was called with config, config.extra_args updated, and showSaveIndicator called after the promise resolves. Target the ModuleSettingsEngineFieldController.handleSave flow and the existing mocks setConfig and showSaveIndicator to implement this behavior.
src/features/settings/ui/ModuleSettingsEngineRenderer.ts (1)
328-373: Preserve accessible form semantics for the compute-mode control.
This replaces a native select with plain buttons, so it needs explicit group semantics to keep the field understandable to assistive tech. Prefer native radios here, or add a labeled `radiogroup`/`radio` pattern instead of relying on a hidden input plus `aria-pressed`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsEngineRenderer.ts` around lines 328 - 373, The compute-mode control currently built in _createComputeModeControl uses plain buttons and a hidden input which breaks native form/radio semantics; change it to expose proper radio semantics by either rendering native <input type="radio"> elements or by adding ARIA roles and keyboard handling: give the container (root) role="radiogroup" and an accessible label, set each button to role="radio" with aria-checked reflecting selected state (use button.dataset['value'] compared to hiddenInput.value), ensure keyboard navigation (ArrowLeft/ArrowRight/Home/End) moves focus and updates hiddenInput.value, keep syncDisplay to update aria-checked/aria-pressed and classList, and dispatch the change event on hiddenInput when selection changes so existing consumers of engineInput still work. Use the existing symbols (_createComputeModeControl, hiddenInput, buttons, syncDisplay, customSelect) when applying the changes.
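The arrow-key handling that the radio pattern requires can be isolated as a pure function. This is a sketch of the selection logic only, assuming the WAI-ARIA radio-group conventions (wrapping arrows, Home/End jump to the ends); DOM wiring, focus movement, and the hidden-input dispatch are omitted.

```typescript
// Sketch: given the current value and a navigation key, compute the next
// radio value per the WAI-ARIA radio group pattern. Values wrap at the ends.
function nextRadioValue(
  values: readonly string[],
  current: string,
  key: 'ArrowLeft' | 'ArrowRight' | 'Home' | 'End',
): string {
  const index = Math.max(0, values.indexOf(current)); // unknown value -> first
  switch (key) {
    case 'Home':
      return values[0];
    case 'End':
      return values[values.length - 1];
    case 'ArrowRight':
      return values[(index + 1) % values.length];
    case 'ArrowLeft':
      return values[(index - 1 + values.length) % values.length];
  }
}
```

In a keydown handler, the returned value would drive both the roving focus and the hidden input's `change` event, so existing consumers keep working.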
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/features/chat/chat.ts`:
- Around line 500-526: In regenerateLastResponse(), avoid mutating history
before confirming the resend can start: first perform a dry-run/preflight check
(or call the same preparation logic sendChat uses) to ensure sending is
possible, and only then call _historyController.regenerateLastTurn(false);
alternatively, if you must call regenerateLastTurn() before sendChat(), capture
the previous assistant turn from _historyController (or use a peek method) and
restore it via _historyController when sendChat() fails or throws; modify
regenerateLastResponse() to call sendChat() inside a try/catch and restore the
prior turn on any error so the assistant reply is never lost.
In `@src/features/chat/services/VoiceInputService.ts`:
- Around line 135-138: The call to the consumer callback this._onError in
VoiceInputService can throw and skip the subsequent session cleanup
(this._finishSession), so wrap the this._onError invocation (after building
payload via this._toErrorPayload) in a try/catch (or try/finally) to catch any
exception from the consumer and ensure this._finishSession(...) always runs;
also log any exception with this._tracer.error so the original consumer error is
visible while guaranteeing the session transitions to 'startup_failed' or
'error' as before.
In `@src/features/chat/ui/ChatInputContextMenu.ts`:
- Around line 273-287: The copy/cut branch in ChatInputContextMenu loses focus
because it closes the menu without returning focus to the input; update the
block handling action === 'copy' || action === 'cut' to call input.focus()
before every this.close() (including the early-return when selectedText === '')
and also call input.focus() after performing the cut (after
this._dispatchInputChange(input)) but before closing so focus always returns to
the input; references: ChatInputContextMenu, input, this._deps.copyText,
this._dispatchInputChange, this.close.
---
Duplicate comments:
In `@src/features/chat/chat.ts`:
- Around line 366-391: The code clears this._state.currentGenerationProviderId
inside the setSending setter when value becomes false, but cancelActiveSend in
ChatSendController may still rely on the captured provider while streaming
cleanup runs; remove the immediate clearing from setSending (leave
currentGenerationProviderId set when flipping to false) and instead add an
explicit call to clear the stored provider from the cleanup path in
ChatSendController (or from the stream/image-job completion code) after all
streaming/image cleanup finishes (refer to currentGenerationProviderId,
setSending, and ChatSendController.cancelActiveSend to locate where to change
and where to call the new clear operation).
In `@src/features/settings/ui/ModuleSettingsEngineRenderer.ts`:
- Around line 159-166: The current setConfig wrapper calls
notifySettingsChanged() regardless of whether persistence actually succeeded;
update the flow so notifications only fire on confirmed success: modify
EngineConfigService.setConfig to either rethrow errors from the Tauri invoke or
return an explicit success boolean/result, then in
ModuleSettingsEngineRenderer.setConfig (the async setConfig lambda that calls
this._deps.engineConfigService.setConfig) check that returned success before
calling this._deps.notifySettingsChanged(); ensure errors are propagated (or
non-success returned) so ModuleSettingsEngineFieldController.handleSave doesn’t
show a success indicator for failed writes.
---
Nitpick comments:
In `@src/features/chat/services/VoiceInputService.ts`:
- Around line 50-52: The code currently does a local structural cast to add
hasCapability on this._hostBridge (const capabilityBridge = this._hostBridge as
IBridge & { hasCapability?: (capability: string) => boolean; }), which weakens
types; instead add an optional hasCapability signature to the IBridge interface
and remove the inline cast and the capabilityBridge variable usage—update the
IBridge declaration (add hasCapability?(capability: string): boolean) and
replace uses of capabilityBridge/_hostBridge in VoiceInputService (and any other
callers) to rely on the typed _hostBridge.hasCapability optional method
directly.
In `@src/features/settings/ui/ModuleSettingsEngineRenderer.test.ts`:
- Around line 420-449: Update the test so it proves handleSave awaits
persistence: replace the synchronous setConfig mock with a deferred Promise
(e.g. a Promise with its resolve exposed) and have setConfig return that
promise; call fieldController.handleSave(...) and assert showSaveIndicator has
NOT been called immediately, then resolve the deferred promise and await
completion, finally assert setConfig was called with config, config.extra_args
updated, and showSaveIndicator called after the promise resolves. Target the
ModuleSettingsEngineFieldController.handleSave flow and the existing mocks
setConfig and showSaveIndicator to implement this behavior.
In `@src/features/settings/ui/ModuleSettingsEngineRenderer.ts`:
- Around line 328-373: The compute-mode control currently built in
_createComputeModeControl uses plain buttons and a hidden input which breaks
native form/radio semantics; change it to expose proper radio semantics by
either rendering native <input type="radio"> elements or by adding ARIA roles
and keyboard handling: give the container (root) role="radiogroup" and an
accessible label, set each button to role="radio" with aria-checked reflecting
selected state (use button.dataset['value'] compared to hiddenInput.value),
ensure keyboard navigation (ArrowLeft/ArrowRight/Home/End) moves focus and
updates hiddenInput.value, keep syncDisplay to update aria-checked/aria-pressed
and classList, and dispatch the change event on hiddenInput when selection
changes so existing consumers of engineInput still work. Use the existing
symbols (_createComputeModeControl, hiddenInput, buttons, syncDisplay,
customSelect) when applying the changes.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8f7b4b5e-1a7e-49f8-9f58-1d5e8d695e79
📒 Files selected for processing (9)
- src-tauri/resources/locales/en.json
- src-tauri/resources/locales/zh.json
- src/features/chat/chat.ts
- src/features/chat/services/ChatControllerState.ts
- src/features/chat/services/VoiceInputService.ts
- src/features/chat/ui/ChatInputContextMenu.ts
- src/features/settings/ui/ModuleSettingsEngineFieldController.ts
- src/features/settings/ui/ModuleSettingsEngineRenderer.test.ts
- src/features/settings/ui/ModuleSettingsEngineRenderer.ts
✅ Files skipped from review due to trivial changes (1)
- src/features/chat/services/ChatControllerState.ts
Actionable comments posted: 19
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
src/app/CoreEntry.ts (1)
69-75: ⚠️ Potential issue | 🟠 Major: Guard teardown in init-failure catch path.
At line 73, `coreInstance.destroy()` is unguarded inside the `init().catch(...)` callback. If it throws, this path can fail noisily and skip reliable failure reporting.
Proposed fix

```diff
  coreInstance.init().catch((error: unknown) => {
    if (state.activeCoreInstance === coreInstance) {
      clearBootState();
    }
-   coreInstance.destroy();
    tracer.error(`[Core] Boot failed: ${String(error)}`);
+   try {
+     coreInstance.destroy();
+   } catch (destroyError: unknown) {
+     tracer.error(`[Core] Destroy after boot failure failed: ${String(destroyError)}`);
+   }
  });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/CoreEntry.ts` around lines 69 - 75, The catch handler on coreInstance.init() should guard the call to coreInstance.destroy() so a thrown error there doesn't bypass failure handling; wrap coreInstance.destroy() in its own try/catch (or Promise.resolve().catch) and log any destroy error via tracer.error, preserve calling clearBootState() when state.activeCoreInstance === coreInstance, and ensure the original boot error is always reported with tracer.error(`[Core] Boot failed: ...`) even if destroy fails.
src/features/console/ui/ConsoleUI.ts (1)
255-261: ⚠️ Potential issue | 🟠 Major: Scope the async tab refresh to the requested view and observe failures.
This callback rerenders whichever view is active when `fetchLogs(view)` settles, so a slower request from a previous tab can clear/rerender the wrong pane. Rejected fetches are also left unhandled.
Suggested fix

```diff
  private setLogView(view: string, btn: HTMLElement): void {
    this._viewState.activeViewId = view;
    this._activateTab('.console-tab', '.logs-pane', `logs-${view}`, btn);
    this.renderLogs(true);
-   void this.service.fetchLogs(view).then(() => {
-     this.renderLogs(true);
-   });
+   void this.service
+     .fetchLogs(view)
+     .then(() => {
+       if (this._viewState.activeViewId === view) {
+         this.renderLogs(true);
+       }
+     })
+     .catch((error: unknown) => {
+       if (this._viewState.activeViewId === view) {
+         this.renderLogs(true);
+       }
+       // log or surface the failure here instead of leaving it unhandled
+     });
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/console/ui/ConsoleUI.ts` around lines 255 - 261, The async callback in setLogView is currently re-rendering whatever is active when fetchLogs(view) settles and doesn't handle rejections; capture the requested view into a local variable (e.g. const requestedView = view) before calling this.service.fetchLogs(requestedView), then in the promise chain only call this.renderLogs(true) if this._viewState.activeViewId === requestedView, and add a .catch to the fetchLogs call to observe/log failures (use existing logger or a safe fallback). This change touches setLogView, service.fetchLogs, renderLogs, and _viewState.activeViewId.
src/features/chat/controllers/ChatGenerationController.ts (1)
280-306: ⚠️ Potential issue | 🟠 Major: Keep the estimator fallback when backend usage is absent.
`response.usage` is optional, so this now records 0 tokens for any provider that doesn’t return completion usage. That breaks context accounting and message stats for those paths. Please fall back to `estimateReplyTokens(replyText)` when backend usage is missing instead of hard-zeroing it.
Proposed fix

```diff
- private handleTextReply(
+ private async handleTextReply(
    response: IChatResponse,
    replyText: string,
    streamingHandle?: StreamingMessageHandle | null,
- ): void {
-   const tokens = this._resolveBackendCompletionTokens(response);
+ ): Promise<void> {
+   const tokens = await this._resolveCompletionTokens(response, replyText);
    if (tokens > 0) {
      this._options.addContextTokens(tokens);
    }
@@
    this._options.pushAssistantMessage(replyText, response.thought_signature);
  }

- private _resolveBackendCompletionTokens(response: IChatResponse): number {
+ private async _resolveCompletionTokens(
+   response: IChatResponse,
+   replyText: string,
+ ): Promise<number> {
    const completionTokens = response.usage?.completion_tokens;
-   if (typeof completionTokens !== 'number' || !Number.isFinite(completionTokens)) {
-     return 0;
+   if (typeof completionTokens === 'number' && Number.isFinite(completionTokens)) {
+     return Math.max(0, Math.trunc(completionTokens));
    }
-   return Math.max(0, Math.trunc(completionTokens));
+   return replyText.trim() === '' ? 0 : await this._options.estimateReplyTokens(replyText);
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/controllers/ChatGenerationController.ts` around lines 280 - 306, The token resolver currently returns 0 when response.usage is missing, breaking context accounting; update _resolveBackendCompletionTokens to fall back to estimating tokens from the reply text (use estimateReplyTokens(replyText)) when response.usage?.completion_tokens is absent or invalid — either change the signature to _resolveBackendCompletionTokens(response, replyText) or add an optional replyText param and call it from handleTextReply so handleTextReply passes replyText; ensure the returned value is still a non-negative integer (Math.trunc/Math.max as before) and used unchanged by the existing addContextTokens/streamingHandle append logic.
src-tauri/src/api/system/logs.rs (1)
92-96: ⚠️ Potential issue | 🟠 Major: Canonicalize `view_id` before clearing buffered logs.
`get_console_logs()` now normalizes engine aliases, but `clear_console_logs()` still calls `logs::clear_logs_for_view(&view_id)` with the raw value. A request like `engine:stable-diffusion` will truncate the `sdcpp` files while leaving the in-memory `engine:sdcpp` entries intact, so the tab repopulates on the next refresh.
Suggested fix

```diff
  pub fn clear_console_logs(view_id: String) -> Result<(), AppError> {
-     let target = resolve_console_log_target(&view_id);
-     logs::clear_logs_for_view(&view_id);
-     clear_console_log_target(&view_id, &target)?;
+     let view_id = canonical_console_view_id(&view_id);
+     let target = resolve_console_log_target(&view_id);
+     logs::clear_logs_for_view(&view_id);
+     clear_console_log_target(&view_id, &target)?;
      Ok(())
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/api/system/logs.rs` around lines 92 - 96, clear_console_logs currently passes the raw view_id into logs::clear_logs_for_view which leaves normalized aliases (e.g., engine:sdcpp) untouched; use the same canonicalized id used by get_console_logs instead. Call resolve_console_log_target(&view_id) (already present) and extract the canonical view id from the returned target, then pass that canonical id into logs::clear_logs_for_view and into clear_console_log_target so buffered logs are removed for the normalized key rather than the raw alias. Ensure you reference the resolved target from resolve_console_log_target and replace uses of the raw view_id in clear_console_logs accordingly.
src-tauri/src/domain/ai/streaming.rs (1)
391-410: ⚠️ Potential issue | 🟠 Major: Don’t finalize a stream successfully without any terminal signal.
This now returns `ok: true` when the connection drops after partial content but before `[DONE]`/`finish_reason`/stop. That turns truncated output into a persisted assistant reply instead of surfacing a transport failure.
💡 Suggested direction

```diff
- if !saw_done && !state.saw_terminal_chunk {
-     tracing::warn!(
-         request_id = %request_id,
-         message_id = %message_id,
-         chunks = state.chunks_emitted,
-         "AI stream ended without completion marker after emitting content"
-     );
- }
+ if !saw_done && !state.saw_terminal_chunk {
+     return Ok(ChatResponse {
+         id: message_id,
+         ok: false,
+         reply: None,
+         error: Some("AI stream ended before a completion marker was received".to_string()),
+         model: Some(req.model),
+         thought_signature: None,
+         usage: state.final_usage,
+     });
+ }
```

If some providers legitimately omit `[DONE]`, normalize that earlier in `handle_stream_json_line(...)` so `state.saw_terminal_chunk` is set explicitly instead of treating every dropped stream as success.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/streaming.rs` around lines 391 - 410, The current end-of-stream logic in streaming.rs treats a dropped connection without a terminal signal as success; update the end-of-stream handling so that if saw_done is false and state.saw_terminal_chunk is false you return a ChatResponse with ok: false (and an appropriate error) instead of allowing a persisted successful reply; additionally, ensure normalization of terminal signals inside handle_stream_json_line(...) by explicitly setting state.saw_terminal_chunk for providers that omit [DONE]/finish_reason/stop so the end-of-stream check can rely on that flag (references: variables saw_done, state.saw_terminal_chunk, state.full_content, state.chunks_emitted; function handle_stream_json_line and the ChatResponse construction).
♻️ Duplicate comments (2)
src/features/chat/chat.ts (2)
644-655: ⚠️ Potential issue | 🟠 Major: Don’t remove the previous assistant turn until the resend is known to start.
`regenerateLastTurn(...)` mutates history first, and `sendChat()` can still stop on activation failure or another send-time error. That leaves the conversation with the last assistant reply removed and no replacement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/chat.ts` around lines 644 - 655, regenerateLastTurn currently mutates history immediately which removes the assistant turn before sendChat may fail; instead, obtain the regenerated text without mutating state (or use a non-mutating preview API) and only apply the history mutation when the resend actually starts/succeeds: call a preview method on _historyController (or add one) to get text for _inputCoordinator.restore, show the toast on empty result, and only then perform the actual mutation (or call the existing regenerateLastTurn) immediately before invoking this.sendChat(); ensure any references to _state.isSending, _ui.showToast, _inputCoordinator.restore, and sendChat remain consistent and add rollback logic to reinsert the original assistant turn if sendChat fails after an in-place mutation.
472-474: ⚠️ Potential issue | 🟠 Major
Keep `currentGenerationProviderId` in sync on the restored-image path.
This path writes `isSending` directly instead of going through the normal send-state bookkeeping, so cancellation falls back to the live active provider rather than the provider that owns the restored image job.

💡 Suggested fix
```diff
 this._state.isSending = true;
+this._state.currentGenerationProviderId =
+  this._aiBridge.getState().activeProviderId;
 this._generationController.startImagePreviewPolling(imageHandle);
 this._scheduleRestoredImageGenerationCheck();
@@
 this._generationController.stopImagePreviewPolling();
 this._state.isSending = false;
+this._state.currentGenerationProviderId = null;
 await this._historyController.loadHistory();
```

Also applies to: 496-499
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/chat.ts` around lines 472 - 474, The restored-image path sets this._state.isSending directly and starts preview polling without updating the provider tracking, so cancellation targets the live provider instead of the one that owns the restored job; update the restored-image flow to also set this._state.currentGenerationProviderId to the provider id for the restored image before calling this._generationController.startImagePreviewPolling(imageHandle) and this._scheduleRestoredImageGenerationCheck(), and apply the same change to the other restored-image block around the second occurrence (the block that mirrors lines 496-499) so send-state and provider id remain in sync for cancellation.
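A sketch of the paired bookkeeping this fix asks for, so `isSending` and `currentGenerationProviderId` can never drift apart (the `SendState` shape and helper names are illustrative, not the actual chat state):

```typescript
// Keep the sending flag and the owning provider id in lockstep so
// cancellation always targets the provider that owns the job.
interface SendState {
  isSending: boolean;
  currentGenerationProviderId: string | null;
}

function beginSend(state: SendState, providerId: string | null): void {
  state.isSending = true;
  state.currentGenerationProviderId = providerId;
}

function endSend(state: SendState): void {
  state.isSending = false;
  state.currentGenerationProviderId = null;
}

// Cancellation resolves the owner first, falling back to the live provider.
function cancellationTarget(
  state: SendState,
  activeProviderId: string | null,
): string | null {
  return state.currentGenerationProviderId ?? activeProviderId;
}
```

Any path that flips `isSending` directly (like the restored-image one) bypasses this pairing, which is exactly the bug described above.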
🟡 Minor comments (4)
src/features/chat/services/ChatActivationCoordinator.ts (1)
24-24: ⚠️ Potential issue | 🟡 Minor
Normalize `promptOverride` the same way as textarea input.
Line 24 trims only `input?.value` but not `promptOverride`. A whitespace-padded override can change provider selection and auto-start behavior unexpectedly.

Proposed fix
```diff
-const prompt = promptOverride ?? input?.value.trim() ?? '';
+const prompt = promptOverride?.trim() ?? input?.value.trim() ?? '';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/services/ChatActivationCoordinator.ts` at line 24, The prompt construction in ChatActivationCoordinator currently only trims input?.value but not promptOverride, so whitespace-padded promptOverride can affect provider selection/auto-start; update the prompt assignment to normalize promptOverride the same way (e.g., trim and fallback to empty string) before using it in the expression that defines prompt (refer to promptOverride and the line building const prompt = promptOverride ?? input?.value.trim() ?? ''), ensuring promptOverride is .trim()-ed (and handled if undefined/empty) so both sources are normalized consistently.

src/features/ai/services/EngineStatusService.ts (1)
7-12: ⚠️ Potential issue | 🟡 Minor
Optional properties in union variants can cause unsafe property access.
Each union variant uses optional markers (`starting?`, `ready?`, etc.), but `_applyBackendState` accesses these properties directly after `'key' in state` checks (e.g., `state.ready.slots` at line 280). The `in` operator confirms key existence but not that the value is defined, so TypeScript's narrowing may not prevent accessing `undefined`. Use required properties for a proper discriminated union:
🛡️ Proposed fix for type safety
```diff
 type BackendEngineState =
   | 'idle'
-  | { starting?: { engine_id: string } }
-  | { swapping?: { from: string; to: string } }
-  | { ready?: { slots: Array<{ engine: BackendEngineStatus }> } }
-  | { error?: EngineErrorPayload };
+  | { starting: { engine_id: string } }
+  | { swapping: { from: string; to: string } }
+  | { ready: { slots: Array<{ engine: BackendEngineStatus }> } }
+  | { error: EngineErrorPayload };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/ai/services/EngineStatusService.ts` around lines 7 - 12, The BackendEngineState union uses optional properties (e.g., starting?, ready?, swapping?, error?) which allows a variant to have the property key present but value undefined, making later accesses in _applyBackendState (e.g., state.ready.slots) unsafe; change the union to a proper discriminated union by making those variant properties required (remove the ?), or replace the current shape with an explicit discriminant field (like kind/status: 'idle'|'starting'|'ready'|'swapping'|'error') and corresponding required payloads, then update _applyBackendState to switch on that discriminant (or the ensured required keys) so TypeScript can safely narrow and you no longer access possibly undefined properties.

src-tauri/src/domain/ai/provider_http.rs (1)
31-41: ⚠️ Potential issue | 🟡 Minor
Use saturating math for the final delay calculation.
`backoff_multiplier` is saturated, but Line 41 still multiplies and adds with plain `u64` arithmetic. High attempt counts can overflow here and panic the retry path instead of backing off.

Proposed fix
```diff
 pub(super) fn retry_delay(attempt: u32, status: StatusCode) -> std::time::Duration {
     let capped_attempt = attempt.max(1);
     let base_ms = if status == StatusCode::TOO_MANY_REQUESTS {
         700u64
     } else {
         350u64
     };
     let backoff_multiplier = 2u64.saturating_pow(capped_attempt.saturating_sub(1));
     let jitter_ms = rand::random_range(0..150u64);
+    let delay_ms = base_ms
+        .saturating_mul(backoff_multiplier)
+        .saturating_add(jitter_ms);
-    std::time::Duration::from_millis(base_ms * backoff_multiplier + jitter_ms)
+    std::time::Duration::from_millis(delay_ms)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/provider_http.rs` around lines 31 - 41, The retry_delay function can overflow when computing base_ms * backoff_multiplier + jitter_ms; change the final arithmetic to use saturating operations (e.g., compute let ms = base_ms.saturating_mul(backoff_multiplier).saturating_add(jitter_ms); then construct the Duration from ms) so multiplication/addition won't panic on large attempt values; keep existing computation of capped_attempt and backoff_multiplier as-is and apply saturating_mul/saturating_add before Duration::from_millis.

src-tauri/src/api/ai/mod.rs (1)
453-459: ⚠️ Potential issue | 🟡 Minor
Register cancellation before the first awaited setup step.
`fill_chat_request_api_key()` runs before the request is inserted into `ChatCancellationRegistry`, so an immediate cancel during key lookup/secure-storage access returns `false` and the request keeps going. Register as soon as `request_id` is known, then clear the entry on the early-error path too.

Suggested fix
```diff
 let mut request = request;
 let request_id = ensure_request_id(&mut request);
+let cancellation = cancellation_registry.register(&request_id);
 let model = request.model.clone();

-fill_chat_request_api_key(&mut request, &config_service).await?;
+if let Err(error) = fill_chat_request_api_key(&mut request, &config_service).await {
+    cancellation_registry.clear(&request_id);
+    return Err(error);
+}

-let cancellation = cancellation_registry.register(&request_id);
 let sink = create_stream_sink(request_id.clone(), chat_channel, thought_channel);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/api/ai/mod.rs` around lines 453 - 459, Register the request in the ChatCancellationRegistry immediately after ensure_request_id and before any await (i.e., before calling fill_chat_request_api_key), so that an immediate cancel during async key lookup is honored; move the cancellation_registry.register(&request_id) line up to just after let request_id = ensure_request_id(&mut request), then proceed with fill_chat_request_api_key and create_stream_sink; additionally, ensure any early-error/early-return paths that occur after registering will remove the entry by calling the registry's removal method (e.g., cancellation_registry.unregister(&request_id) or cancellation_registry.remove/clear(&request_id) depending on the available API) so entries aren't leaked.
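The ordering matters more than the specific registry API. A synchronous TypeScript sketch of the register-early, clear-on-early-error pattern (this `CancellationRegistry` is a hypothetical stand-in, not the Tauri one):

```typescript
// The entry must exist before any fallible setup step runs, and be
// cleared on early errors so the registry never leaks entries.
class CancellationRegistry {
  private readonly entries = new Map<string, { cancelled: boolean }>();

  register(id: string): { cancelled: boolean } {
    const token = { cancelled: false };
    this.entries.set(id, token);
    return token;
  }

  cancel(id: string): boolean {
    const token = this.entries.get(id);
    if (!token) return false; // unknown request: cancel is a no-op
    token.cancelled = true;
    return true;
  }

  clear(id: string): void {
    this.entries.delete(id);
  }

  has(id: string): boolean {
    return this.entries.has(id);
  }
}

function startRequest(
  registry: CancellationRegistry,
  requestId: string,
  setup: () => void, // e.g. API-key lookup; may throw
): void {
  registry.register(requestId); // before the first fallible step
  try {
    setup();
  } catch (error) {
    registry.clear(requestId); // don't leak the entry on early errors
    throw error;
  }
}
```

With this ordering, a cancel that races the key lookup finds its entry and returns `true` instead of silently failing.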
🧹 Nitpick comments (8)
src-tauri/src/domain/system/hardware_probe.rs (1)
89-94: Add a focused regression test for Vulkan brand-to-class mapping.
The new `match` on Line 89 is cleaner, but this path is now more important to keep stable. Please add a unit test that asserts `backend="vulkan"` + AMD/Intel names map to `AmdGpu`/`IntelGpu`, and non-branded names fall back as expected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/system/hardware_probe.rs` around lines 89 - 94, Add a unit test that covers the Vulkan mapping branch by exercising the code path that uses backend = "vulkan" and asserting the resulting AcceleratorClass: supply names that gpu_name_brand would classify as AMD and Intel and assert they map to AcceleratorClass::AmdGpu and AcceleratorClass::IntelGpu respectively, and supply a non-branded name (or an empty/unknown string) and assert it falls back to AcceleratorClass::GenericGpu when detected is true or AcceleratorClass::CpuOnly when detected is false; reference the gpu_name_brand function and the AcceleratorClass enum and test the code path that contains the "vulkan" match to ensure regression coverage.

src/features/ai/services/EngineStatusService.ts (1)
202-222: Consider scope refinement if performance becomes a concern.
The observer watches `document.body` with `subtree: true`, which could be intensive on complex DOMs. The current mitigation (attribute filtering + `_shouldSyncDomForMutations` + animation-frame coalescing) is reasonable for a desktop app. If performance issues arise, consider scoping the observer to a more specific container element.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/ai/services/EngineStatusService.ts` around lines 202 - 222, The MutationObserver in _startDomSyncObserver currently observes document.body with subtree: true which can be expensive; to mitigate, change the observation root from document.body to a narrower container (make the root element configurable or obtain a specific app container element) so only relevant subtree changes are observed, and keep the existing attributeFilter / _shouldSyncDomForMutations / _scheduleDomSync logic; update _startDomSyncObserver to accept or resolve a scoped root element and use that element in this._domObserver.observe(...) instead of document.body.

src/features/chat/controllers/ChatHistoryController.ts (1)
186-195: Hardcoded prompt-detection heuristic is fragile but acceptable.
The detection relies on specific strings ("Stable Diffusion" and "Return only the final prompt text"). If the prompt template changes, these exchanges may leak into visible history. Consider extracting these markers to a constant or adding a structured metadata flag in the future.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/controllers/ChatHistoryController.ts` around lines 186 - 195, The hardcoded heuristic in _isImagePromptPreparationRequest that checks IChatMessage['content'] for the literal strings "Stable Diffusion" and "Return only the final prompt text" is fragile; extract those marker strings into clearly named constants (e.g., IMAGE_PROMPT_MARKER and IMAGE_PROMPT_RETURN_MARKER) and use those constants in the _isImagePromptPreparationRequest function so future template changes are centralized, and add a TODO comment noting to prefer a structured metadata flag on the chat message (e.g., content.metadata?.isImagePromptPreparation) for a future migration.

src-tauri/src/domain/ai/image_settings.rs (1)
163-169: Minor redundancy in candidate generation.
When `suffix` is already lowercase with no underscores, all three candidates may be identical. This causes harmless redundant lookups but could be optimized by deduplicating. Low priority given the small candidate count.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/image_settings.rs` around lines 163 - 169, The candidate generator build_setting_candidates currently returns three possibly-duplicate strings; change it to produce a deduplicated list in order (e.g., collect into a Vec<String>, push each formatted candidate only if not already present) and update the function signature from fn build_setting_candidates(prefix: &str, suffix: &str) -> [String; 3] to -> Vec<String>; adjust any callers to accept a Vec<String> instead of a fixed-size array so redundant lookups are avoided while preserving candidate order.

src-tauri/src/domain/ai/image_comfyui.rs (1)
12-15: Duplicate `ImageRequestSettingsContext` struct.
This struct is also defined in `image_settings.rs`. Consider extracting it to a shared location to reduce duplication, though since both are `pub(super)` visibility, the impact is limited.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/image_comfyui.rs` around lines 12 - 15, The duplicate ImageRequestSettingsContext struct should be consolidated: move the definition of ImageRequestSettingsContext into a shared module (e.g., a new or existing image_settings_shared or common module), adjust its visibility so both image_comfyui.rs and image_settings.rs can access it (use pub(crate) or define and re-export it with pub use), then remove the duplicate struct from image_comfyui.rs and update imports/usages in functions and types that reference ImageRequestSettingsContext to import the single shared definition; ensure any module re-exports or visibility changes preserve the original access patterns (replace local pub(super) definitions with the shared symbol and update use statements accordingly).

src-tauri/src/domain/ai/image_response.rs (1)
21-24: Hardcoded PNG MIME type for OpenAI-compatible responses.
The OpenAI-compatible parsing assumes `b64_json` is always PNG. If the API supports other formats (e.g., via `response_format`), this could produce incorrect data URLs. Consider inferring the MIME from a sibling field or making PNG the documented default.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/image_response.rs` around lines 21 - 24, The current OpenAI-compatible parsing in the item.get("b64_json") map hardcodes "image/png" in the data URL; change this to infer the MIME type from a sibling field (e.g., check item.get("mime"), item.get("format"), or item.get("image_format") and use its value if present) and only fall back to "image/png" as the default when no format is provided; update the mapping that builds the data URL so it uses the detected MIME string instead of the fixed "image/png" and ensure any decoders/consumers expecting PNG still work with the fallback.

src/features/console/services/ConsoleLogService.ts (1)
117-118: Return a copy from `getLogsForView()`.
This exposes the live array stored in `_logsByView`, so any caller can mutate service state outside the dedupe/limit path. Returning a clone avoids that leak.

Proposed fix
```diff
 public getLogsForView(viewId: string): ILogEntry[] {
-  return this._logsByView.get(this._canonicalViewId(viewId)) ?? [];
+  return [...(this._logsByView.get(this._canonicalViewId(viewId)) ?? [])];
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/console/services/ConsoleLogService.ts` around lines 117 - 118, getLogsForView currently returns the live array stored in _logsByView (keyed by _canonicalViewId(viewId)), which allows external callers to mutate service state; change getLogsForView to return a shallow copy of the array (e.g., clone the array you retrieve from _logsByView or clone the default empty array) so callers receive a copy and cannot modify the internal _logsByView entries outside the service's dedupe/limit logic (refer to getLogsForView, _logsByView, _canonicalViewId, and ILogEntry).

src/features/settings/ui/ModuleSettingsEngineRenderFlow.ts (1)
207-220: Don’t silently drop unmapped text fields.
This new map makes rendering depend on `field.key` matching a hardcoded selector list. Any future text field returned by `getTextFields()` will now vanish quietly instead of failing fast. Please add an assertion/log or a fallback render path here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsEngineRenderFlow.ts` around lines 207 - 220, The code currently drops any text field from options.getTextFields() whose field.key is not in the fieldTargets map; update the loop in ModuleSettingsEngineRenderFlow so unmapped keys are handled instead of silently returning: when targetSelector is undefined, log a warning (including field.key and options.appId) via the module's logger or console.warn to make the missing mapping visible, and/or provide a fallback render target by querying a default container (e.g., a dedicated '#default-text-fields' within options.container) and proceed to render into that element; keep the existing behavior for mapped keys and ensure the check still verifies target instanceof HTMLElement before rendering.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/api/ai/mod.rs`:
- Around line 462-485: The cancellation path is emitting StreamEvent::Done with
the request_id instead of the assistant message identifier; update the code to
track the active assistant message_id (e.g. an Option<String> named
active_message_id) that gets set whenever a StreamChunk/StreamChunkPayload is
emitted in ai_service::process_chat_request (or in the stream handler), and in
the cancellation arm replace message_id: request_id.clone() with message_id:
active_message_id.clone().unwrap_or_else(|| request_id.clone()) so Done
correlates to the real streamed message id; ensure active_message_id is
accessible/mutably updated from where chunks are produced and used in the
cancellation block.
In `@src-tauri/src/api/system/logs.rs`:
- Around line 189-200: The fallback return in canonical_engine_id currently
returns the original trimmed engine_id which prevents normalization for
non-stable-diffusion names; change the fallback to return the normalized key
instead (i.e., return the computed `key` rather than
`engine_id.trim().to_string()`) so that names differing only by case/sep are
canonicalized; update the match arm in canonical_engine_id to use `key` as the
default.
In `@src-tauri/src/domain/ai/provider_payload.rs`:
- Around line 9-10: is_local_base_url currently uses brittle substring checks;
replace it with robust URL parsing: in is_local_base_url(base_url: &str) parse
the URL with url::Url::parse, get host_str(), and if host is present attempt to
parse it as std::net::IpAddr and return true for ip.is_loopback() or (for IPv4)
ip.is_private(); otherwise treat non-IP hostnames as local only if they equal
"localhost" or end with ".local" (or other explicit local hostnames you want to
support). Use the url crate and std::net::IpAddr in this function to avoid false
positives/negatives and keep the function signature is_local_base_url unchanged.
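For illustration, here is the same parse-first check in TypeScript using the WHATWG `URL` class; the Rust version would lean on the `url` crate and `std::net::IpAddr` as described above. The exact set of ranges and hostnames treated as local is an assumption:

```typescript
// Inspect the parsed hostname instead of substring-matching the raw URL,
// so hosts like "not-localhost.example.com" are not misclassified.
function isLocalBaseUrl(baseUrl: string): boolean {
  let host: string;
  try {
    host = new URL(baseUrl).hostname;
  } catch {
    return false; // unparseable input is not treated as local
  }
  if (host === 'localhost' || host.endsWith('.local')) return true;
  if (host === '::1' || host === '[::1]') return true; // IPv6 loopback
  const octets = host.split('.').map(Number);
  if (
    octets.length === 4 &&
    octets.every((o) => Number.isInteger(o) && o >= 0 && o <= 255)
  ) {
    const [a, b] = octets;
    if (a === 127) return true; // 127.0.0.0/8 loopback
    if (a === 10) return true; // 10.0.0.0/8 private
    if (a === 172 && b >= 16 && b <= 31) return true; // 172.16.0.0/12
    if (a === 192 && b === 168) return true; // 192.168.0.0/16
  }
  return false;
}
```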
In `@src-tauri/src/domain/ai/provider_response.rs`:
- Around line 18-25: The extractor in provider_response.rs (used by
generate_completion()) only reads choices[0].message.content, so non-stream
providers (e.g., Ollama/llama.cpp) whose responses are at top-level keys are
missed; update the reply_text resolution to check, in order: top-level
"message"->"content", top-level "content", top-level "response", and then fall
back to the existing choices[0].message.content path (and accept both string and
single-item array forms), using the existing extract_message_text helper to
normalize each candidate; adjust any pattern matching around reply_text to use
the first non-empty extracted value.
In `@src-tauri/src/domain/engine/engine_args.rs`:
- Around line 64-83: sdcpp_extra_args currently only filters out the unsupported
flag token and leaves the next token (its value) as a positional argument;
update sdcpp_extra_args to iterate over config.extra_args with an index or
iterator that can peek/skip so when a flag from SDCPP_UNSUPPORTED_FLAGS or
SDCPP_SERVER_UNSUPPORTED_FLAGS matches without an '=' suffix you also skip the
immediately following argument (the value), whereas flags matched via
strip_prefix(... '=' ) continue to be removed as before; ensure you reference
sdcpp_extra_args, EngineConfig.extra_args, SDCPP_UNSUPPORTED_FLAGS and
SDCPP_SERVER_UNSUPPORTED_FLAGS and preserve the cloned Vec<String> return
behavior.
- Around line 85-93: The two stubbed functions
resolve_sdcpp_preview_path(extra_args: &[String]) and
sdcpp_preview_enabled(extra_args: &[String]) currently always return None/false,
which disables SD.cpp preview end-to-end and breaks
EngineManager::active_image_preview_path()/get_image_generation_preview();
restore their original behavior by parsing extra_args to detect the SD.cpp
preview flags/arguments (e.g., a preview path flag like "--sdcpp-preview-path"
or a boolean enable flag used elsewhere), have resolve_sdcpp_preview_path return
Some(PathBuf) when the preview path argument is present and valid, and have
sdcpp_preview_enabled return true when either a dedicated enable flag or a valid
preview path is detected so the preview API/UI continues to function. Ensure you
reference and preserve the same argument names used by the rest of the code so
EngineManager::active_image_preview_path() can find a real path.
In `@src-tauri/src/domain/engine/manager.rs`:
- Around line 217-235: The code currently removes the slot on any try_wait()
error which can leave a live child process running; change the Err branch in the
try_wait() handling (the match on existing.process.try_wait()) so you do not
unconditionally remove slots.remove(&primary_cap) — instead either keep the slot
until an explicit stop succeeds or call the manager's kill_engine(...) /
kill_engine_handle(...) routine for that existing handle and only remove the
slot after kill_engine reports success (and log the kill attempt/result); apply
the same fix to the analogous block around the other occurrence (the block
covering the 544-565 area) so that try_wait() errors lead to an explicit
stop/kill attempt before removing the slot.
- Around line 476-495: stop_slot_after_error currently reads the slot's engine
id, emits an error, then calls stop_slot later which can race with a hot-swap;
change the flow to capture the engine id under slots.lock().await, call
self.emitter.emit_error(&engine_id, message), then re-acquire slots.lock().await
and verify the current slot's definition.id still equals the captured engine_id
before calling self.stop_slot(capability).await (if not equal, skip stopping).
Use the existing symbols: stop_slot_after_error, slots.lock().await,
engine.definition.id, self.emitter.emit_error, and self.stop_slot to implement
the compare-and-stop check.
In `@src-tauri/src/domain/modules/github_release_selection.rs`:
- Around line 491-495: The code confuses NVIDIA driver versions and CUDA toolkit
versions: fields cuda_driver_major/minor are populated from
nvml.sys_cuda_driver_version() but are also used by
supports_cuda_at_least(major, minor), causing mismatched checks between
supports_cuda_track() and the ComfyUI scoring path. Fix by splitting the data
into two distinct fields (e.g., driver_major/driver_minor and
cuda_toolkit_major/cuda_toolkit_minor), keep
detect_cuda_driver_version()/nvml.sys_cuda_driver_version() populating the
driver_* fields, and update probe_gpu_info() to populate cuda_toolkit_* (or
leave None if unknown). Then make supports_cuda_track() continue using driver_*
and change supports_cuda_at_least() to use cuda_toolkit_* so each scoring path
checks the correct version type (referencing the fields
cuda_driver_major/cuda_driver_minor, new cuda_toolkit_major/cuda_toolkit_minor,
probe_gpu_info(), detect_cuda_driver_version(), supports_cuda_track(), and
supports_cuda_at_least()).
In `@src/app/CoreBootstrapRunner.test.ts`:
- Around line 133-147: The order assertions currently use "??
Number.POSITIVE_INFINITY" which lets them pass if the compared mock was never
called; before each invocationCallOrder comparison, explicitly assert the
involved mocks were called (e.g., use toHaveBeenCalled or toHaveBeenCalledTimes
> 0) for bootstrap.bridge.init, bootstrap.eventHandler.init,
bootstrap.aiBridge.init, bootstrap.templateLoader.loadAndInject,
immediateUi.sidebarUI.init, immediateUi.navigationUI.init,
registerGlobalShortcuts, and bootstrap.windowUI.hideSplashScreen, then perform
the invocationCallOrder comparisons without falling back to POSITIVE_INFINITY so
failures surface when a step did not execute.
In `@src/app/CoreEntry.ts`:
- Around line 45-49: The call to activeCoreInstance.destroy() inside
destroyActiveCoreInstance can throw and prevent clearBootState() from running;
wrap the destroy call on the CoreEntry state
(getCoreEntryState().activeCoreInstance.destroy()) in a try/catch/finally so
that any exception is caught/logged and clearBootState() is executed in the
finally block to ensure boot state is always cleared; reference the
destroyActiveCoreInstance function, getCoreEntryState,
activeCoreInstance.destroy, and clearBootState when making the change.
In `@src/features/ai/services/AIBridge.ts`:
- Around line 184-195: The stopEngineSlot method captures providerId before
awaiting this._runtime.stopEngineSlot, which can lead to acting on a stale
provider; after the await, re-check this._manager.activeProviderId and only call
this._manager.stopProvider() and this._engineStatus.setEngineState(providerId,
'idle') if the activeProviderId still equals the captured providerId (and keep
the existing capability/providerPolicy checks), so you don't stop or mark idle a
provider that switched during the await.
In `@src/features/ai/services/AIBridgeMessageController.ts`:
- Around line 80-88: The prepareImagePrompt method must not trigger UI
side-effects: instead of calling _handleMissingProvider(...) or
_handleMissingApiKey(...) (which broadcast chat responses and show toasts),
change the early-exit checks in prepareImagePrompt to perform only state checks
(use this._deps.manager.activeProviderId, await
this._deps.manager.refreshActiveApiKey(), this._deps.manager.apiKey and
this._deps.manager.isActive()) and return a silent IBridgeResponse error object
directly (or call a new private helper that constructs a non-broadcasting error
response) so no toasts/chat broadcasts occur from prepareImagePrompt; keep the
UI-facing _handleMissingProvider/_handleMissingApiKey for callers that should
show errors.
In `@src/features/ai/services/AIChatTransport.ts`:
- Around line 145-147: The current post-response check only awaits finalization
when text stream listeners exist; change the condition in the method (inside
AIChatTransport, where response.ok and streamDone are referenced) to also
consider active thought-stream listeners (e.g., check both _streamListeners.size
and the thought-stream listener collection such as _thoughtStreamListeners.size
or equivalent) so that if either text or thought chunk subscribers are present
you call await this._waitForStreamFinalization(streamDone) to prevent premature
resolution and truncated reasoning streams.
In `@src/features/chat/chat.ts`:
- Around line 601-605: In sendChat(), when the early-return path triggers
because this._state.isSending is true and you call await
this._sendController.cancelActiveSend(), ensure you also reset the
image-generation trigger by clearing this._forceImageGeneration (the same flag
set by sendImageGenerationFromMenu()) before returning; locate the sendChat
method and add a statement to reset this._forceImageGeneration on that
cancel-only branch so subsequent sends don't accidentally use the
image-generation path.
In `@src/features/chat/ui/ChatImageController.ts`:
- Around line 69-87: The keydown and wheel handlers (_boundImageViewerKeydown
and _boundImageViewerWheel) are preventing native browser zoom (Ctrl + +/−/0 and
Ctrl+wheel) without providing an alternative; either remove the prevents for
those combos or implement and hook up a zoom feature before calling
event.preventDefault(). Update _boundImageViewerKeydown to not intercept ['+',
'-', '=', '0'] when event.ctrlKey (or only intercept when you have a local zoom
handler available), and update _boundImageViewerWheel to only call
event.preventDefault() if the controller implements and applies zoom logic
(otherwise allow the event to propagate to preserve native browser zoom). Ensure
references to closeImageViewer and _showAdjacentImage remain unchanged.
In `@src/features/console/ui/ConsoleFilterControlHelper.ts`:
- Around line 163-178: The bug is that setting button.dataset['confirmingAll'] =
'true' can coexist with the normal 'confirming' state and lead to an accidental
onClearAllLogs() call; in _handleClearAllButton, ensure the "all-clear" arm is
cleared before enabling the normal clear confirmation: when you set
confirmingAll on the button in _handleClearAllButton, explicitly clear any
'confirming' state (or call _resetClearConfirmation() to remove both flags) so
the regular confirming flow cannot be activated while confirmingAll is set, and
likewise ensure that entering the normal confirming state clears confirmingAll;
update _handleClearAllButton and any code paths that set 'confirming' to
reference and clear the opposite flag so only one confirm mode is active at a
time (use _resetClearConfirmation, button.dataset['confirmingAll'],
button.dataset['confirming'], and _deps.onClearAllLogs to locate relevant
logic).
In `@src/features/settings/ui/ModuleSettingsEngineFieldController.ts`:
- Around line 81-82: The setupEvents() code currently invokes handleSave(...)
with a void cast, which allows handleSave (which awaits setConfig()) to run
unobserved and overlap; change the event handlers to enqueue or await saves
instead of fire-and-forget: implement a simple serialization mechanism (e.g., an
in-controller promise chain or a boolean "isSaving" with a pending flag) used by
handleSave(), ensure setupEvents() calls handleSave without void and awaits or
appends to the chain, catch and log errors from setConfig() inside handleSave(),
and apply the same pattern to the other handlers referenced in this file (the
block that mirrors handleSave usage later) so saves run sequentially and
rejections are handled (references: setupEvents(), handleSave(), setConfig()).
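A minimal sketch of the in-controller promise chain suggested above (the `SaveQueue` name and API are illustrative, not the controller's actual shape):

```typescript
// Each save starts only after the previous one settles, and rejections
// are routed to an error handler so the chain never wedges.
class SaveQueue {
  private chain: Promise<void> = Promise.resolve();

  enqueue(
    save: () => Promise<void>,
    onError: (error: unknown) => void,
  ): Promise<void> {
    this.chain = this.chain.then(save).catch(onError);
    return this.chain;
  }
}
```

The event handlers would call `queue.enqueue(() => this.handleSave(...), log)` instead of `void this.handleSave(...)`, which both serializes overlapping saves and observes `setConfig()` rejections.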
In `@src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts`:
- Around line 570-608: tokenizeEngineExtraArgs currently strips outer quotes but
does not unescape escaped characters, so quoted tokens like
"C:\\Models\\foo.gguf" or "{\"key\":\"val\"}" get corrupted when round-tripped
by formatEngineExtraArgToken; update tokenizeEngineExtraArgs to treat backslash
as an escape when inside quotes (and when it escapes a quote, backslash, or
other char) so that sequences like \" and \\ are translated back to " and \
respectively, and only toggle/remove quote characters when they are not escaped;
keep formatEngineExtraArgToken unchanged but ensure the inverse escaping in
tokenizeEngineExtraArgs so tokens round-trip correctly.
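The escape handling asked for in the last item can be sketched as a self-contained tokenizer; the quoting convention assumed here (double quotes, backslash escapes for `\"` and `\\`) is an assumption about what `formatEngineExtraArgToken` emits, not a copy of the real implementation:

```typescript
// Quote- and escape-aware tokenizer: inside double quotes a backslash
// escapes the next character, so \" and \\ round-trip to " and \ and
// quoted Windows paths or JSON snippets survive intact.
function tokenizeExtraArgs(line: string): string[] {
  const tokens: string[] = [];
  let current = '';
  let inQuotes = false;
  let hasToken = false; // distinguishes "" (empty token) from no token

  for (let i = 0; i < line.length; i++) {
    const ch = line[i];
    if (inQuotes) {
      if (ch === '\\' && i + 1 < line.length) {
        current += line[++i]; // unescape \" and \\ (pass others through)
      } else if (ch === '"') {
        inQuotes = false; // closing quote, not part of the token
      } else {
        current += ch;
      }
    } else if (ch === '"') {
      inQuotes = true;
      hasToken = true;
    } else if (ch === ' ' || ch === '\t') {
      if (hasToken || current.length > 0) tokens.push(current);
      current = '';
      hasToken = false;
    } else {
      current += ch;
    }
  }
  if (hasToken || current.length > 0) tokens.push(current);
  return tokens;
}
```

Only unescaped quotes toggle quote mode, which is exactly the property the review asks for.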
---
Outside diff comments:
In `@src-tauri/src/api/system/logs.rs`:
- Around line 92-96: clear_console_logs currently passes the raw view_id into
logs::clear_logs_for_view which leaves normalized aliases (e.g., engine:sdcpp)
untouched; use the same canonicalized id used by get_console_logs instead. Call
resolve_console_log_target(&view_id) (already present) and extract the canonical
view id from the returned target, then pass that canonical id into
logs::clear_logs_for_view and into clear_console_log_target so buffered logs are
removed for the normalized key rather than the raw alias. Ensure you reference
the resolved target from resolve_console_log_target and replace uses of the raw
view_id in clear_console_logs accordingly.
In `@src-tauri/src/domain/ai/streaming.rs`:
- Around line 391-410: The current end-of-stream logic in streaming.rs treats a
dropped connection without a terminal signal as success; update the
end-of-stream handling so that if saw_done is false and state.saw_terminal_chunk
is false you return a ChatResponse with ok: false (and an appropriate error)
instead of allowing a persisted successful reply; additionally, ensure
normalization of terminal signals inside handle_stream_json_line(...) by
explicitly setting state.saw_terminal_chunk for providers that omit
[DONE]/finish_reason/stop so the end-of-stream check can rely on that flag
(references: variables saw_done, state.saw_terminal_chunk, state.full_content,
state.chunks_emitted; function handle_stream_json_line and the ChatResponse
construction).
In `@src/app/CoreEntry.ts`:
- Around line 69-75: The catch handler on coreInstance.init() should guard the
call to coreInstance.destroy() so a thrown error there doesn't bypass failure
handling; wrap coreInstance.destroy() in its own try/catch (or
Promise.resolve().catch) and log any destroy error via tracer.error, preserve
calling clearBootState() when state.activeCoreInstance === coreInstance, and
ensure the original boot error is always reported with tracer.error(`[Core] Boot
failed: ...`) even if destroy fails.
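The guarded failure path can be sketched as follows; `coreInstance`, `tracer`, and `clearBootState` are stand-ins for the real CoreEntry symbols, and the surrounding boot logic is omitted:

```typescript
interface Destroyable { destroy(): void; }

function handleBootFailure(
  coreInstance: Destroyable,
  bootError: unknown,
  tracer: { error: (...args: unknown[]) => void },
  clearBootState: () => void,
): void {
  try {
    coreInstance.destroy();
  } catch (destroyError) {
    // A throwing destroy must not swallow the boot failure below.
    tracer.error('[Core] Destroy after failed boot threw:', destroyError);
  }
  clearBootState();
  // The original boot error is always reported, even if destroy() threw.
  tracer.error(`[Core] Boot failed: ${String(bootError)}`);
}
```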
In `@src/features/chat/controllers/ChatGenerationController.ts`:
- Around line 280-306: The token resolver currently returns 0 when
response.usage is missing, breaking context accounting; update
_resolveBackendCompletionTokens to fall back to estimating tokens from the reply
text (use estimateReplyTokens(replyText)) when response.usage?.completion_tokens
is absent or invalid — either change the signature to
_resolveBackendCompletionTokens(response, replyText) or add an optional
replyText param and call it from handleTextReply so handleTextReply passes
replyText; ensure the returned value is still a non-negative integer
(Math.trunc/Math.max as before) and used unchanged by the existing
addContextTokens/streamingHandle append logic.
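The fallback described above can be sketched like this; the shapes and the chars/4 estimator are illustrative assumptions, not the controller's real types:

```typescript
interface ChatUsage { completion_tokens?: number; }
interface ChatResponseLike { usage?: ChatUsage; }

// Rough stand-in for the controller's real estimator.
function estimateReplyTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function resolveCompletionTokens(response: ChatResponseLike, replyText: string): number {
  const reported = response.usage?.completion_tokens;
  if (typeof reported === 'number' && Number.isFinite(reported) && reported >= 0) {
    return Math.trunc(reported);
  }
  // Missing or invalid usage: estimate from the reply text instead of returning 0.
  return Math.max(0, Math.trunc(estimateReplyTokens(replyText)));
}
```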
In `@src/features/console/ui/ConsoleUI.ts`:
- Around line 255-261: The async callback in setLogView is currently
re-rendering whatever is active when fetchLogs(view) settles and doesn't handle
rejections; capture the requested view into a local variable (e.g. const
requestedView = view) before calling this.service.fetchLogs(requestedView), then
in the promise chain only call this.renderLogs(true) if
this._viewState.activeViewId === requestedView, and add a .catch to the
fetchLogs call to observe/log failures (use existing logger or a safe fallback).
This change touches setLogView, service.fetchLogs, renderLogs, and
_viewState.activeViewId.
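The stale-response guard can be sketched as below; the free-function shape and parameter names are illustrative, standing in for the ConsoleUI members:

```typescript
interface ViewState { activeViewId: string; }

function setLogView(
  view: string,
  state: ViewState,
  fetchLogs: (view: string) => Promise<void>,
  renderLogs: () => void,
): void {
  state.activeViewId = view;
  const requestedView = view; // captured so late responses can be ignored
  fetchLogs(requestedView)
    .then(() => {
      // Only re-render if the user has not switched views meanwhile.
      if (state.activeViewId === requestedView) renderLogs();
    })
    .catch((error) => {
      console.error('[ConsoleUI] fetchLogs failed:', error);
    });
}
```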
---
Minor comments:
In `@src-tauri/src/api/ai/mod.rs`:
- Around line 453-459: Register the request in the ChatCancellationRegistry
immediately after ensure_request_id and before any await (i.e., before calling
fill_chat_request_api_key), so that an immediate cancel during async key lookup
is honored; move the cancellation_registry.register(&request_id) line up to just
after let request_id = ensure_request_id(&mut request), then proceed with
fill_chat_request_api_key and create_stream_sink; additionally, ensure any
early-error/early-return paths that occur after registering will remove the
entry by calling the registry's removal method (e.g.,
cancellation_registry.unregister(&request_id) or
cancellation_registry.remove/clear(&request_id) depending on the available API)
so entries aren't leaked.
In `@src-tauri/src/domain/ai/provider_http.rs`:
- Around line 31-41: The retry_delay function can overflow when computing
base_ms * backoff_multiplier + jitter_ms; change the final arithmetic to use
saturating operations (e.g., compute let ms =
base_ms.saturating_mul(backoff_multiplier).saturating_add(jitter_ms); then
construct the Duration from ms) so multiplication/addition won't panic on large
attempt values; keep existing computation of capped_attempt and
backoff_multiplier as-is and apply saturating_mul/saturating_add before
Duration::from_millis.
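The real fix is Rust-specific (`saturating_mul`/`saturating_add`); purely as an illustration, the same clamping idea in TypeScript, where the concern is an unbounded exponent rather than integer overflow, might look like:

```typescript
function retryDelayMs(attempt: number, baseMs = 250, maxMs = 30_000): number {
  const cappedAttempt = Math.min(attempt, 10); // bound the exponent first
  const backoff = baseMs * 2 ** cappedAttempt;
  const jitter = Math.floor(Math.random() * baseMs);
  // Final clamp plays the role of the saturating operations.
  return Math.min(backoff + jitter, maxMs);
}
```

Constants here are assumptions for the sketch, not the values used by `provider_http.rs`.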
In `@src/features/ai/services/EngineStatusService.ts`:
- Around line 7-12: The BackendEngineState union uses optional properties (e.g.,
starting?, ready?, swapping?, error?) which allows a variant to have the
property key present but value undefined, making later accesses in
_applyBackendState (e.g., state.ready.slots) unsafe; change the union to a
proper discriminated union by making those variant properties required (remove
the ?), or replace the current shape with an explicit discriminant field (like
kind/status: 'idle'|'starting'|'ready'|'swapping'|'error') and corresponding
required payloads, then update _applyBackendState to switch on that discriminant
(or the ensured required keys) so TypeScript can safely narrow and you no longer
access possibly undefined properties.
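The discriminated-union shape suggested above can be sketched like this; the payload fields are illustrative, not the real `BackendEngineState` contract:

```typescript
type BackendEngineState =
  | { kind: 'idle' }
  | { kind: 'starting'; modelId: string }
  | { kind: 'ready'; slots: number }
  | { kind: 'swapping'; fromModelId: string; toModelId: string }
  | { kind: 'error'; message: string };

function describeState(state: BackendEngineState): string {
  switch (state.kind) {
    case 'idle':
      return 'idle';
    case 'starting':
      return `starting ${state.modelId}`;
    case 'ready':
      // TypeScript narrows on `kind`, so state.slots is guaranteed to exist.
      return `ready with ${state.slots} slots`;
    case 'swapping':
      return `swapping ${state.fromModelId} -> ${state.toModelId}`;
    case 'error':
      return `error: ${state.message}`;
  }
}
```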
In `@src/features/chat/services/ChatActivationCoordinator.ts`:
- Line 24: The prompt construction in ChatActivationCoordinator currently only
trims input?.value but not promptOverride, so whitespace-padded promptOverride
can affect provider selection/auto-start; update the prompt assignment to
normalize promptOverride the same way (e.g., trim and fallback to empty string)
before using it in the expression that defines prompt (refer to promptOverride
and the line building const prompt = promptOverride ?? input?.value.trim() ??
''), ensuring promptOverride is .trim()-ed (and handled if undefined/empty) so
both sources are normalized consistently.
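The normalization can be sketched as a small helper; the function shape is a hypothetical extraction, and here an all-whitespace override falls through to the input value as the comment suggests:

```typescript
function resolvePrompt(promptOverride: string | undefined, inputValue: string | undefined): string {
  const override = promptOverride?.trim();
  // A missing or whitespace-only override defers to the input field.
  if (override) return override;
  return inputValue?.trim() ?? '';
}
```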
---
Duplicate comments:
In `@src/features/chat/chat.ts`:
- Around line 644-655: regenerateLastTurn currently mutates history immediately
which removes the assistant turn before sendChat may fail; instead, obtain the
regenerated text without mutating state (or use a non-mutating preview API) and
only apply the history mutation when the resend actually starts/succeeds: call a
preview method on _historyController (or add one) to get text for
_inputCoordinator.restore, show the toast on empty result, and only then perform
the actual mutation (or call the existing regenerateLastTurn) immediately before
invoking this.sendChat(); ensure any references to _state.isSending,
_ui.showToast, _inputCoordinator.restore, and sendChat remain consistent and add
rollback logic to reinsert the original assistant turn if sendChat fails after
an in-place mutation.
- Around line 472-474: The restored-image path sets this._state.isSending
directly and starts preview polling without updating the provider tracking, so
cancellation targets the live provider instead of the one that owns the restored
job; update the restored-image flow to also set
this._state.currentGenerationProviderId to the provider id for the restored
image before calling
this._generationController.startImagePreviewPolling(imageHandle) and
this._scheduleRestoredImageGenerationCheck(), and apply the same change to the
other restored-image block around the second occurrence (the block that mirrors
lines 496-499) so send-state and provider id remain in sync for cancellation.
---
Nitpick comments:
In `@src-tauri/src/domain/ai/image_comfyui.rs`:
- Around line 12-15: The duplicate ImageRequestSettingsContext struct should be
consolidated: move the definition of ImageRequestSettingsContext into a shared
module (e.g., a new or existing image_settings_shared or common module), adjust
its visibility so both image_comfyui.rs and image_settings.rs can access it (use
pub(crate) or define and re-export it with pub use), then remove the duplicate
struct from image_comfyui.rs and update imports/usages in functions and types
that reference ImageRequestSettingsContext to import the single shared
definition; ensure any module re-exports or visibility changes preserve the
original access patterns (replace local pub(super) definitions with the shared
symbol and update use statements accordingly).
In `@src-tauri/src/domain/ai/image_response.rs`:
- Around line 21-24: The current OpenAI-compatible parsing in the
item.get("b64_json") map hardcodes "image/png" in the data URL; change this to
infer the MIME type from a sibling field (e.g., check item.get("mime"),
item.get("format"), or item.get("image_format") and use its value if present)
and only fall back to "image/png" as the default when no format is provided;
update the mapping that builds the data URL so it uses the detected MIME string
instead of the fixed "image/png" and ensure any decoders/consumers expecting PNG
still work with the fallback.
In `@src-tauri/src/domain/ai/image_settings.rs`:
- Around line 163-169: The candidate generator build_setting_candidates
currently returns three possibly-duplicate strings; change it to produce a
deduplicated list in order (e.g., collect into a Vec<String>, push each
formatted candidate only if not already present) and update the function
signature from fn build_setting_candidates(prefix: &str, suffix: &str) ->
[String; 3] to -> Vec<String>; adjust any callers to accept a Vec<String>
instead of a fixed-size array so redundant lookups are avoided while preserving
candidate order.
In `@src-tauri/src/domain/system/hardware_probe.rs`:
- Around line 89-94: Add a unit test that covers the Vulkan mapping branch by
exercising the code path that uses backend = "vulkan" and asserting the
resulting AcceleratorClass: supply names that gpu_name_brand would classify as
AMD and Intel and assert they map to AcceleratorClass::AmdGpu and
AcceleratorClass::IntelGpu respectively, and supply a non-branded name (or an
empty/unknown string) and assert it falls back to AcceleratorClass::GenericGpu
when detected is true or AcceleratorClass::CpuOnly when detected is false;
reference the gpu_name_brand function and the AcceleratorClass enum and test the
code path that contains the "vulkan" match to ensure regression coverage.
In `@src/features/ai/services/EngineStatusService.ts`:
- Around line 202-222: The MutationObserver in _startDomSyncObserver currently
observes document.body with subtree: true which can be expensive; to mitigate,
change the observation root from document.body to a narrower container (make the
root element configurable or obtain a specific app container element) so only
relevant subtree changes are observed, and keep the existing attributeFilter /
_shouldSyncDomForMutations / _scheduleDomSync logic; update
_startDomSyncObserver to accept or resolve a scoped root element and use that
element in this._domObserver.observe(...) instead of document.body.
In `@src/features/chat/controllers/ChatHistoryController.ts`:
- Around line 186-195: The hardcoded heuristic in
_isImagePromptPreparationRequest that checks IChatMessage['content'] for the
literal strings "Stable Diffusion" and "Return only the final prompt text" is
fragile; extract those marker strings into clearly named constants (e.g.,
IMAGE_PROMPT_MARKER and IMAGE_PROMPT_RETURN_MARKER) and use those constants in
the _isImagePromptPreparationRequest function so future template changes are
centralized, and add a TODO comment noting to prefer a structured metadata flag
on the chat message (e.g., content.metadata?.isImagePromptPreparation) for a
future migration.
In `@src/features/console/services/ConsoleLogService.ts`:
- Around line 117-118: getLogsForView currently returns the live array stored in
_logsByView (keyed by _canonicalViewId(viewId)), which allows external callers
to mutate service state; change getLogsForView to return a shallow copy of the
array (e.g., clone the array you retrieve from _logsByView or clone the default
empty array) so callers receive a copy and cannot modify the internal
_logsByView entries outside the service's dedupe/limit logic (refer to
getLogsForView, _logsByView, _canonicalViewId, and ILogEntry).
In `@src/features/settings/ui/ModuleSettingsEngineRenderFlow.ts`:
- Around line 207-220: The code currently drops any text field from
options.getTextFields() whose field.key is not in the fieldTargets map; update
the loop in ModuleSettingsEngineRenderFlow so unmapped keys are handled instead
of silently returning: when targetSelector is undefined, log a warning
(including field.key and options.appId) via the module's logger or console.warn
to make the missing mapping visible, and/or provide a fallback render target by
querying a default container (e.g., a dedicated '#default-text-fields' within
options.container) and proceed to render into that element; keep the existing
behavior for mapped keys and ensure the check still verifies target instanceof
HTMLElement before rendering.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/features/console/services/ConsoleLogService.ts (1)
86-110: ⚠️ Potential issue | 🟠 Major
Delay local buffer resets until the backend clear succeeds.
Both clear paths wipe `_logsByView` and reset timestamps before the Tauri command completes. If the invoke fails, the service returns `false` with local state already erased, and the next fetch starts again from `since = 0`.
Proposed fix
```diff
 public async clearLogs(viewId = 'general'): Promise<boolean> {
   const normalizedViewId = this._canonicalViewId(viewId);
-  this._logsByView.set(normalizedViewId, []);
-  this._lastTimestampByView.set(normalizedViewId, 0);
   try {
     await this.bridge.invoke('clear_console_logs', { viewId: normalizedViewId });
+    this._logsByView.set(normalizedViewId, []);
+    this._lastTimestampByView.set(normalizedViewId, 0);
     return true;
   } catch (error) {
     this._tracer.error('[ConsoleLogService] Clear logs failed:', error);
     return false;
   }
 }

 public async clearAllLogs(): Promise<boolean> {
-  this._logsByView.clear();
-  this._lastTimestampByView.clear();
   try {
     await this.bridge.invoke('clear_logs');
+    this._logsByView.clear();
+    this._lastTimestampByView.clear();
     return true;
   } catch (error) {
     this._tracer.error('[ConsoleLogService] Clear all logs failed:', error);
     return false;
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/console/services/ConsoleLogService.ts` around lines 86 - 110, Move the local state resets to only occur after the backend clear succeeds: in clearLogs (use _canonicalViewId, normalizedViewId, bridge.invoke('clear_console_logs', ...)) and in clearAllLogs (bridge.invoke('clear_logs')), call this._logsByView.set(normalizedViewId, []) and this._lastTimestampByView.set(normalizedViewId, 0) (for clearLogs) or this._logsByView.clear() and this._lastTimestampByView.clear() (for clearAllLogs) inside the try block after await this.bridge.invoke(...); keep the current catch behavior and log errors with this._tracer.error('[ConsoleLogService] Clear ... failed:', error) so failed invokes do not prematurely erase local buffers or reset timestamps.
♻️ Duplicate comments (3)
src/app/CoreEntry.ts (1)
45-49: ⚠️ Potential issue | 🟠 Major
Make state cleanup exception-safe in `destroyActiveCoreInstance()`.
At Line 47, if `destroy()` throws, `clearBootState()` at Line 48 is skipped. That can leave stale boot flags and also short-circuit the HMR dispose cleanup path that runs afterward. The risk is real because destroy chains through `src/app/CoreLifecycleController.ts` (Lines 168-177) and `src/app/CoreComposition.ts` (Lines 68-72), which execute multiple teardown operations.
Proposed fix
```diff
 function destroyActiveCoreInstance(): void {
   const state = getCoreEntryState();
-  state.activeCoreInstance?.destroy();
-  clearBootState();
+  const instance = state.activeCoreInstance;
+  try {
+    instance?.destroy();
+  } finally {
+    clearBootState();
+  }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/CoreEntry.ts` around lines 45 - 49, The current destroyActiveCoreInstance() calls state.activeCoreInstance?.destroy() then clearBootState(), so if destroy() throws the cleanup is skipped; update destroyActiveCoreInstance() to call state.activeCoreInstance?.destroy() inside a try/finally and move clearBootState() into the finally block so clearBootState() always runs (and rethrow any error after finally to preserve behavior). Reference: destroyActiveCoreInstance(), getCoreEntryState(), clearBootState(), and state.activeCoreInstance?.destroy() — if destroy may become async in future, make the wrapper await the call inside try/finally.
src/features/chat/chat.ts (1)
645-655: ⚠️ Potential issue | 🟠 Major
Don’t rewind history before resend is known to start.
`regenerateLastTurn()` mutates persisted/local history first, but `sendChat()` can still bail out on activation or send-time failure. That leaves the previous assistant reply removed with no replacement. Please preflight the resend, or restore the removed turn when `sendChat()` does not actually start.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/chat.ts` around lines 645 - 655, regenerateLastTurn mutates persisted history before sendChat may actually start; save the current assistant turn first, then call regenerateLastTurn, and if sendChat fails to start or throws (i.e., does not begin sending), restore the saved turn and input. Concretely: capture the previous assistant text from _historyController (e.g., getLastAssistantTurn or similar), call this._historyController.regenerateLastTurn(...), call this._inputCoordinator.restore(text) and await this.sendChat(), and in the sendChat failure/early-return/exception path restore the saved turn via _historyController (and _inputCoordinator) so the previous reply is not lost. Ensure error/early-return branches reference regenerateLastTurn, sendChat, _historyController, _inputCoordinator, and _state.isSending.
src-tauri/src/api/system/logs.rs (1)
190-200: ⚠️ Potential issue | 🟠 Major
Return the normalized fallback from `canonical_engine_id()`.
The default arm still returns the original trimmed string, so aliases that differ only by case or separators can produce different engine view IDs. That keeps the duplicate-tab / duplicate-status problem alive.
Proposed fix
```diff
 fn canonical_engine_id(engine_id: &str) -> String {
     let key = engine_id
         .trim()
         .to_ascii_lowercase()
         .replace([' ', '_'], "-");
     match key.as_str() {
         "stable-diffusion"
         | "stable-diffusion.cpp"
         | "stable-diffusion-cpp"
         | "stable.diffusion.cpp" => "sdcpp".to_string(),
-        _ => engine_id.trim().to_string(),
+        _ => key,
     }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/api/system/logs.rs` around lines 190 - 200, canonical_engine_id currently returns the original trimmed engine_id in the default match arm, which defeats the normalization and allows aliases that differ only by case/sep to create duplicates; change the default branch to return the normalized `key` (the trimmed, lowercased, separator-normalized value) instead of `engine_id.trim().to_string()` so all fallbacks use the canonical form (refer to function canonical_engine_id and the local variable `key`).
🧹 Nitpick comments (4)
src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts (2)
409-417: Variable shadowing: parameter `groups` shadows outer variable.
The parameter `groups` on line 409 shadows the outer `groups` variable declared on line 302. While the code works correctly because `syncTokens()` reassigns the outer variable via `input.value`, this shadowing makes the code harder to follow and could lead to subtle bugs during maintenance.
♻️ Suggested rename to avoid shadowing
```diff
-const setGroups = (groups: string[], options: { emit?: boolean } = {}) => {
-  input.value = flattenGroups(groups);
+const setGroups = (newGroups: string[], options: { emit?: boolean } = {}) => {
+  input.value = flattenGroups(newGroups);
   syncTokens();
   if (options.emit === false) {
     return;
   }
   input.dispatchEvent(new Event('input', { bubbles: true }));
   input.dispatchEvent(new Event('change', { bubbles: true }));
 };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts` around lines 409 - 417, The parameter name groups in the setGroups function shadows the outer variable also named groups; rename the parameter (e.g., to newGroups) to avoid shadowing, update all references inside setGroups (flattenGroups(newGroups), assign input.value, call syncTokens() and handle options.emit) and ensure syncTokens() still reads the updated input.value to reassign the outer groups variable; keep the function signature setGroups(...) and other symbols (flattenGroups, syncTokens, input.dispatchEvent) unchanged.
650-656: Consider adding a comment explaining the `'null'` string check.
Line 653's condition `!(value === 'null' && defaultValue === '')` handles an edge case where the literal string `'null'` was stored. A brief inline comment explaining this would help future maintainers understand why this special case exists.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts` around lines 650 - 656, Add a concise inline comment next to the conditional that checks (value !== undefined && value !== null && !(value === 'null' && defaultValue === '')), explaining that the special-case prevents treating the literal string "null" (which may have been persistently stored) as a real value when the default is an empty string; reference the variables input, key, defaultValue and settings so maintainers can find the check quickly and understand why the 'null' string is excluded.
src/features/chat/ui/ChatImageController.ts (1)
58-59: Remove the no-op document wheel listener while the viewer is open.The wheel handler currently does nothing, so registering/removing it adds global listener churn without behavior.
♻️ Suggested cleanup
```diff
-private readonly _boundImageViewerWheel: (event: WheelEvent) => void;
@@
-this._boundImageViewerWheel = (event: WheelEvent) => {
-  if (!event.ctrlKey) return;
-};
@@
-document.addEventListener('wheel', this._boundImageViewerWheel, { passive: false });
@@
-document.removeEventListener('wheel', this._boundImageViewerWheel);
```
Also applies to: 84-86, 335-345
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/ui/ChatImageController.ts` around lines 58 - 59, Remove the no-op wheel listener registration/removal to avoid unnecessary global listener churn: delete the unused _boundImageViewerWheel class property and any addEventListener/removeEventListener calls that reference it (locations around where _imageViewerOverlay is set/cleared and the blocks at lines ~84-86 and ~335-345), and ensure opening/closing the image viewer only manages meaningful listeners or handlers (leave any real wheel/scroll behavior in methods that actually implement functionality such as the image viewer's zoom/pan handlers).
src/features/ai/services/AIBridge.ts (1)
9-9: Prefer importing the send-options type from the bridge contract.
Pulling `AIBridgeSendMessageOptions` from `AIBridgeMessageController` makes this service depend on a concrete controller for a shared public type. Moving that type to `../types/IAIBridge` (or a dedicated types module) keeps the dependency direction cleaner.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/ai/services/AIBridge.ts` at line 9, Replace the import of AIBridgeSendMessageOptions from AIBridgeMessageController with an import from the bridge contract/types module (e.g., IAIBridge) so this service no longer depends on the concrete controller; move or re-export the AIBridgeSendMessageOptions type into the IAIBridge/types module, update the import in AIBridge.ts to reference that shared type, and ensure any other modules that used the old import are updated to the new centralized type export.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/api/system/logs.rs`:
- Around line 145-149: Frontend debug/trace are being dropped by the global
subscriber (logger.rs) because the subscriber filters the "frontend" target at
info, so trace_frontend_log never reaches FrontendLayer; to fix, either adjust
the subscriber's filter to set the "frontend" target level to trace (or remove
the target-level info filter) so debug/trace pass through to FrontendLayer, or
change trace_frontend_log to also forward debug/trace messages into
logs::add_log (in addition to tracing) so the console store receives them;
update the subscriber configuration in logger.rs or the trace_frontend_log
function accordingly and ensure FrontendLayer sees trace/debug levels.
In `@src/features/ai/services/AIChatTransport.ts`:
- Around line 199-206: The current code always logs a successful stale-request
cancellation even though cancel_chat_generation returns a boolean; change the
logic around the _runWithTimeout call so you capture its boolean result (from
_context?.tauriProvider.invoke<boolean>('cancel_chat_generation', { requestId })
wrapped by _runWithTimeout with STALE_REQUEST_CANCEL_TIMEOUT_MS) and only call
_tracer.info('[AIChatTransport] Cancelled stale active request before restart')
when the result is true; if it returns false or throws/returns undefined, log a
distinct warning or debug message indicating cancellation failed or the request
likely continued. Ensure you still propagate/handle timeouts/errors from
_runWithTimeout as before.
In `@src/features/chat/chat.ts`:
- Around line 472-474: This code path sets this._state.isSending = true without
recording which provider is handling the restored image, which leaves
currentGenerationProviderId null and can cause stop actions to target the wrong
backend; fix by capturing and assigning the provider id before starting
polling—either call the same sending-state routine you use elsewhere (e.g., the
method that records currentGenerationProviderId) or explicitly set
this._state.currentGenerationProviderId = <providerId> (for example the provider
id you used to create the image job) immediately before calling
this._generationController.startImagePreviewPolling(imageHandle) and
this._scheduleRestoredImageGenerationCheck().
- Around line 490-498: Wrap the awaited calls to
this._aiBridge.getImageGenerationPreview() and
this._historyController.loadHistory() in a try/catch so any rejection won’t
break the restored-preview poll chain; in the catch block call
this._generationController.stopImagePreviewPolling(), set this._state.isSending
= false, and log the error (via existing logger or console) so the timer chain
started by _scheduleRestoredImageGenerationCheck() is not left in a broken state
and sending state is always cleared on failure.
In `@src/features/chat/controllers/ChatGenerationController.ts`:
- Around line 299-307: The _resolveBackendCompletionTokens function can throw if
this._options.estimateReplyTokens(replyText) rejects; change it so estimator
failures are caught and degraded to zero tokens instead of bubbling the error
(which interrupts handleTextReply). Specifically, inside
_resolveBackendCompletionTokens around the call to
this._options.estimateReplyTokens, wrap the call in a try/catch (or handle the
returned promise rejection) and return 0 if an exception occurs or the result is
not a finite number, ensuring completionTokens from response.usage is still
preferred.
In `@src/features/console/ui/ConsoleUI.ts`:
- Around line 279-283: The current ConsoleUI.clearAllLogs calls
this.service.clearAllLogs() but ignores its boolean failure result; update
ConsoleUI.clearAllLogs to await the return value (e.g., const success = await
this.service.clearAllLogs()), and only call this.renderLogs(true) and
this._clipboardHelper.showLogsCleared() when success is true; when success is
false, do not re-render or show the cleared toast and instead surface an error
(e.g., show a failure toast or call an existing error handler) so backend
failures are communicated to the user.
---
Outside diff comments:
In `@src/features/console/services/ConsoleLogService.ts`:
- Around line 86-110: Move the local state resets to only occur after the
backend clear succeeds: in clearLogs (use _canonicalViewId, normalizedViewId,
bridge.invoke('clear_console_logs', ...)) and in clearAllLogs
(bridge.invoke('clear_logs')), call this._logsByView.set(normalizedViewId, [])
and this._lastTimestampByView.set(normalizedViewId, 0) (for clearLogs) or
this._logsByView.clear() and this._lastTimestampByView.clear() (for
clearAllLogs) inside the try block after await this.bridge.invoke(...); keep the
current catch behavior and log errors with
this._tracer.error('[ConsoleLogService] Clear ... failed:', error) so failed
invokes do not prematurely erase local buffers or reset timestamps.
---
Duplicate comments:
In `@src-tauri/src/api/system/logs.rs`:
- Around line 190-200: canonical_engine_id currently returns the original
trimmed engine_id in the default match arm, which defeats the normalization and
allows aliases that differ only by case/sep to create duplicates; change the
default branch to return the normalized `key` (the trimmed, lowercased,
separator-normalized value) instead of `engine_id.trim().to_string()` so all
fallbacks use the canonical form (refer to function canonical_engine_id and the
local variable `key`).
In `@src/app/CoreEntry.ts`:
- Around line 45-49: The current destroyActiveCoreInstance() calls
state.activeCoreInstance?.destroy() then clearBootState(), so if destroy()
throws the cleanup is skipped; update destroyActiveCoreInstance() to call
state.activeCoreInstance?.destroy() inside a try/finally and move
clearBootState() into the finally block so clearBootState() always runs (and
rethrow any error after finally to preserve behavior). Reference:
destroyActiveCoreInstance(), getCoreEntryState(), clearBootState(), and
state.activeCoreInstance?.destroy() — if destroy may become async in future,
make the wrapper await the call inside try/finally.
In `@src/features/chat/chat.ts`:
- Around line 645-655: regenerateLastTurn mutates persisted history before
sendChat may actually start; save the current assistant turn first, then call
regenerateLastTurn, and if sendChat fails to start or throws (i.e., does not
begin sending), restore the saved turn and input. Concretely: capture the
previous assistant text from _historyController (e.g., getLastAssistantTurn or
similar), call this._historyController.regenerateLastTurn(...), call
this._inputCoordinator.restore(text) and await this.sendChat(), and in the
sendChat failure/early-return/exception path restore the saved turn via
_historyController (and _inputCoordinator) so the previous reply is not lost.
Ensure error/early-return branches reference regenerateLastTurn, sendChat,
_historyController, _inputCoordinator, and _state.isSending.
---
Nitpick comments:
In `@src/features/ai/services/AIBridge.ts`:
- Line 9: Replace the import of AIBridgeSendMessageOptions from
AIBridgeMessageController with an import from the bridge contract/types module
(e.g., IAIBridge) so this service no longer depends on the concrete controller;
move or re-export the AIBridgeSendMessageOptions type into the IAIBridge/types
module, update the import in AIBridge.ts to reference that shared type, and
ensure any other modules that used the old import are updated to the new
centralized type export.
In `@src/features/chat/ui/ChatImageController.ts`:
- Around line 58-59: Remove the no-op wheel listener registration/removal to
avoid unnecessary global listener churn: delete the unused
_boundImageViewerWheel class property and any
addEventListener/removeEventListener calls that reference it (locations around
where _imageViewerOverlay is set/cleared and the blocks at lines ~84-86 and
~335-345), and ensure opening/closing the image viewer only manages meaningful
listeners or handlers (leave any real wheel/scroll behavior in methods that
actually implement functionality such as the image viewer's zoom/pan handlers).
In `@src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts`:
- Around line 409-417: The parameter name groups in the setGroups function
shadows the outer variable also named groups; rename the parameter (e.g., to
newGroups) to avoid shadowing, update all references inside setGroups
(flattenGroups(newGroups), assign input.value, call syncTokens() and handle
options.emit) and ensure syncTokens() still reads the updated input.value to
reassign the outer groups variable; keep the function signature setGroups(...)
and other symbols (flattenGroups, syncTokens, input.dispatchEvent) unchanged.
- Around line 650-656: Add a concise inline comment next to the conditional that
checks (value !== undefined && value !== null && !(value === 'null' &&
defaultValue === '')), explaining that the special-case prevents treating the
literal string "null" (which may have been persistently stored) as a real value
when the default is an empty string; reference the variables input, key,
defaultValue and settings so maintainers can find the check quickly and
understand why the 'null' string is excluded.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: fa7d278a-a8f3-4d71-9023-8708e40d8c65
📒 Files selected for processing (17)
- src-tauri/src/api/ai/mod.rs
- src-tauri/src/api/system/logs.rs
- src-tauri/src/domain/ai/provider_http.rs
- src/app/CoreEntry.ts
- src/features/ai/services/AIBridge.ts
- src/features/ai/services/AIBridgeMessageController.ts
- src/features/ai/services/AIChatTransport.ts
- src/features/chat/chat.ts
- src/features/chat/controllers/ChatGenerationController.ts
- src/features/chat/services/ChatActivationCoordinator.ts
- src/features/chat/ui/ChatImageController.ts
- src/features/chat/ui/ChatUI.test.ts
- src/features/console/services/ConsoleLogService.ts
- src/features/console/ui/ConsoleFilterControlHelper.ts
- src/features/console/ui/ConsoleUI.ts
- src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts
- src/scripts/check-size.js
✅ Files skipped from review due to trivial changes (1)
- src-tauri/src/api/ai/mod.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src-tauri/src/domain/ai/provider_http.rs
- src/features/ai/services/AIBridgeMessageController.ts
You are seeing this message because GitHub Code Scanning has recently been set up for this repository, or this pull request contains the workflow file for the Code Scanning tool.
For more information about GitHub Code Scanning, check out the documentation.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/features/chat/ui/ChatImageController.ts (1)
222-301: ⚠️ Potential issue | 🟡 Minor — Re-sync viewer labels when the locale changes.
The overlay is cached after the first open, so the close button label, prev/next labels, and preview alt text are frozen in whatever language was active at that moment. Reapply those translations on each `openImageViewer()` or from the existing translation-refresh path so the preview UI stays localized after runtime language switches.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/ui/ChatImageController.ts` around lines 222 - 301, The cached viewer overlay created in _ensureImageViewer() keeps labels/alt text frozen in the original locale; update the close button title/aria-label, prev/next buttons' aria-labels, and the image alt each time the viewer is shown (e.g., call this from openImageViewer()) or hook into your translation refresh path to reapply translations to the stored elements (_imageViewerOverlay, _imageViewerImage, _imageViewerPrevButton, _imageViewerNextButton, and the close button retrieved from _imageViewerOverlay) so they are reset using this._deps.translate('ui.chat.close_image_preview'...), 'ui.chat.previous_image', 'ui.chat.next_image', and 'ui.chat.image_preview' whenever the locale changes or before opening the viewer.
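One way to satisfy this is a small refresh helper called on every open. The sketch below is illustrative: the DOM-free element shape is an assumption made so the sketch stays self-contained, while the translation keys come from the comment above.

```typescript
type Translate = (key: string) => string;

// Minimal stand-ins for the cached overlay elements; the real code
// would use HTMLElement references stored on the controller.
interface ViewerElements {
  closeButton: { title: string; ariaLabel: string };
  prevButton: { ariaLabel: string };
  nextButton: { ariaLabel: string };
  image: { alt: string };
}

// Called from openImageViewer() so labels track runtime locale switches
// instead of staying frozen in whatever language built the overlay.
function refreshViewerLabels(els: ViewerElements, translate: Translate): void {
  const close = translate('ui.chat.close_image_preview');
  els.closeButton.title = close;
  els.closeButton.ariaLabel = close;
  els.prevButton.ariaLabel = translate('ui.chat.previous_image');
  els.nextButton.ariaLabel = translate('ui.chat.next_image');
  els.image.alt = translate('ui.chat.image_preview');
}
```

Because the helper is idempotent, it can be safely invoked both from the open path and from the translation-refresh path without double-applying anything.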
♻️ Duplicate comments (1)
src-tauri/src/api/ai/mod.rs (1)
474-479: ⚠️ Potential issue | 🟠 Major — Done event in cancellation path uses `request_id` instead of the actual assistant `message_id`.
The cancellation branch emits `StreamEvent::Done` with `message_id: request_id.clone()`, but if any `ChatChunk` or `ThoughtChunk` events were already emitted with a real assistant `message_id`, this terminal `Done` won't correlate with those chunks. The frontend may leave the cancelled message in a pending state. Consider tracking the active `message_id` (e.g., via an `Arc<Mutex<Option<String>>>` shared with the sink) and using it here, falling back to `request_id` only if no chunks were emitted.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/api/ai/mod.rs` around lines 474 - 479, The cancellation branch currently emits StreamEvent::Done with message_id: request_id.clone(), which can mismatch earlier ChatChunk/ThoughtChunk assistant message IDs; change the implementation to track the active assistant message id (e.g., an Arc<Mutex<Option<String>>> named active_message_id shared with the sink where ChatChunk/ThoughtChunk/ThoughtChunk code updates it when emitting an assistant message id) and, in the _ = cancellation => arm, read the active_message_id and emit StreamEvent::Done with that value (falling back to request_id.clone() only if the active_message_id is None) when calling cancel_sink.emit; update places that emit chunks (where ChatChunk/ThoughtChunk are constructed) to set active_message_id appropriately.
🧹 Nitpick comments (3)
.github/workflows/codeql.yml (1)
36-53: Pin actions to immutable SHA hashes for supply-chain integrity and reproducibility.
`actions/checkout@v6` and `github/codeql-action@v4` use floating major version tags. Update to commit SHAs:
- `actions/checkout@v6` → `de0fac2e4500dabe0009e67214ff5f5447ce83dd`
- `github/codeql-action/*@v4` → `95e58e9a2cdfd71adc6e0353d5c52f41a045d225`
Apply this to both `github/codeql-action/init` and `github/codeql-action/analyze`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/codeql.yml around lines 36 - 53, Replace floating major tags with the provided immutable commit SHAs: change actions/checkout@v6 to actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd, and update both github/codeql-action/init@v4 and github/codeql-action/analyze@v4 to github/codeql-action/init@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 and github/codeql-action/analyze@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 respectively so the workflow pins are immutable and reproducible.
src/features/chat/ui/ChatImageController.ts (1)
467-472: Extract the shared tooltip/ARIA label helper.
This is the same `title`/`data-tooltip`/`aria-label` sync logic already present in `src/features/chat/ui/ChatTranslationRefresher.ts`. Keeping both copies will drift the next time tooltip behavior changes, so this is worth centralizing in one shared helper.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/ui/ChatImageController.ts` around lines 467 - 472, The title/data-tooltip/aria-label synchronization in ChatImageController._syncImageActionLabel is duplicated in ChatTranslationRefresher; extract a shared helper (e.g., setTooltipAndAriaLabel(element: HTMLElement, label: string)) into a common UI/util module, have it set title, dataset['tooltip'], and aria-label, then replace the body of _syncImageActionLabel and the corresponding method in ChatTranslationRefresher to call this new helper after obtaining the label via this._deps.translate (keep using the same translate key/fallback pattern).
.github/workflows/security-audit.yml (1)
54-55: Consider pinning the `cargo-audit` version for better determinism in CI runs.
The Rust toolchain is pinned to 1.94.1, but `cargo-audit` installs the latest available version each run. While `cargo-audit` 0.22.1 (latest) is fully compatible with Rust 1.94.1, pinning would prevent surprises from future upstream releases and ensure consistent behavior across runs.
Suggested improvement:
```diff
-      - name: Install cargo-audit
-        run: cargo install cargo-audit --locked --quiet
+      - name: Install cargo-audit
+        run: cargo install cargo-audit --version 0.22.1 --locked --quiet
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/security-audit.yml around lines 54 - 55, The CI step named "Install cargo-audit" currently installs the latest cargo-audit; pin it to a specific version (e.g., 0.22.1) to ensure deterministic runs by updating the install command used in that step (the cargo install invocation for cargo-audit) to include a --version argument with the chosen version while keeping --locked and --quiet.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/dependency-review.yml:
- Around line 16-18: The workflow grants only pull-requests: read but sets
comment-summary-in-pr: always for actions/dependency-review-action@v4 which
needs write permission to post PR comments; update the workflow permissions
block to set pull-requests: write (replace the existing pull-requests: read) so
the dependency review action can create/update PR comments.
In `@src-tauri/src/api/engine/mod.rs`:
- Around line 76-77: The delete_engine function currently calls
crate::domain::engine::detector::delete_installed_engine(&engine_id) directly;
change it to first canonicalize the provided engine_id (use your
registry/canonicalization helper e.g. canonicalize_engine_id or equivalent in
crate::domain::engine::registry) and then check the engine runtime state (use
the runtime/state API such as is_engine_running or whatever function tracks
active engines) and return an AppError if the engine is currently running; only
call crate::domain::engine::detector::delete_installed_engine with the canonical
ID when the runtime-check passes. Ensure errors returned use AppError and
include the canonical ID in messages.
In `@src-tauri/src/domain/system/hardware_probe.rs`:
- Around line 326-337: nvidia_probe_from_nvml() currently treats any detected
CUDA driver version as proof of an NVIDIA GPU; change it to first verify NVML
reports at least one physical device before returning a CUDA GpuInfo.
Specifically, in nvidia_probe_from_nvml() (and the two fallback call sites that
rely on it), call the NVML device-count API (or equivalent helper) and only map
detect_cuda_driver_version() into Some(GpuInfo) when the device count > 0 and a
driver version exists; otherwise return None so CPU-only systems with installed
drivers aren't misclassified.
In `@src/app/CoreEntry.ts`:
- Around line 115-127: The dispose callback can abort if
destroyActiveCoreInstance() throws, leaving stale handlers/flags; wrap the
destroyActiveCoreInstance() call in a try/catch (or try/finally) so that
regardless of errors you still remove state.bootHandler and
state.beforeUnloadHandler, null them, and reset state.coreBootBound and
state.coreBeforeUnloadBound; catch and log the error (but do not rethrow) so HMR
cleanup continues. Ensure you reference the same symbols:
import.meta.hot.dispose callback, destroyActiveCoreInstance(),
state.bootHandler, state.beforeUnloadHandler, state.coreBootBound and
state.coreBeforeUnloadBound.
- Around line 72-80: The rejection handler for coreInstance.init()
unconditionally calls coreInstance.destroy(), which can double-tear down if
destroyActiveCoreInstance() already cleared state.activeCoreInstance; change the
catch to only call coreInstance.destroy() when state.activeCoreInstance ===
coreInstance (same guard used for clearBootState()), so avoid invoking destroy()
out-of-band. In other words, inside the catch for coreInstance.init() check
state.activeCoreInstance === coreInstance before calling coreInstance.destroy(),
and keep the existing tracer.error handling for any destroy error (use the same
guard for both clearBootState() and destroy() to prevent a second teardown).
In `@src/features/console/services/ConsoleLogService.ts`:
- Around line 286-300: The default branch of _canonicalEngineId currently
returns engineId.trim() which bypasses the normalized key; change the default to
return the computed key variable so the function consistently returns the
lowercased, whitespace/underscore-normalized value (i.e., replace the default
return value with return key) while keeping the special-case mapping for
'stable-diffusion' variants in the _canonicalEngineId method.
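The normalization described above can be sketched like this; the regex and the alias list here are illustrative assumptions, not the project's full mapping, and the method is shown as a free function rather than a class member.

```typescript
// Canonicalize an engine id: lowercase, collapse whitespace/underscores
// into hyphens, then apply special-case aliases.
function canonicalEngineId(engineId: string): string {
  const key = engineId.trim().toLowerCase().replace(/[\s_]+/g, '-');
  switch (key) {
    // Illustrative aliases that collapse onto one canonical id.
    case 'sd':
    case 'sdcpp':
    case 'sd-cpp':
      return 'stable-diffusion';
    default:
      // Return the normalized key, not the raw engineId.trim(),
      // so callers always see a consistent casing/separator scheme.
      return key;
  }
}
```

The point of the fix is the default branch: returning `key` instead of `engineId.trim()` keeps the non-aliased path consistent with the aliased one.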
---
Outside diff comments:
In `@src/features/chat/ui/ChatImageController.ts`:
- Around line 222-301: The cached viewer overlay created in _ensureImageViewer()
keeps labels/alt text frozen in the original locale; update the close button
title/aria-label, prev/next buttons' aria-labels, and the image alt each time
the viewer is shown (e.g., call this from openImageViewer()) or hook into your
translation refresh path to reapply translations to the stored elements
(_imageViewerOverlay, _imageViewerImage, _imageViewerPrevButton,
_imageViewerNextButton, and the close button retrieved from _imageViewerOverlay)
so they are reset using this._deps.translate('ui.chat.close_image_preview'...),
'ui.chat.previous_image', 'ui.chat.next_image', and 'ui.chat.image_preview'
whenever the locale changes or before opening the viewer.
---
Duplicate comments:
In `@src-tauri/src/api/ai/mod.rs`:
- Around line 474-479: The cancellation branch currently emits StreamEvent::Done
with message_id: request_id.clone(), which can mismatch earlier
ChatChunk/ThoughtChunk assistant message IDs; change the implementation to track
the active assistant message id (e.g., an Arc<Mutex<Option<String>>> named
active_message_id shared with the sink where ChatChunk/ThoughtChunk/ThoughtChunk
code updates it when emitting an assistant message id) and, in the _ =
cancellation => arm, read the active_message_id and emit StreamEvent::Done with
that value (falling back to request_id.clone() only if the active_message_id is
None) when calling cancel_sink.emit; update places that emit chunks (where
ChatChunk/ThoughtChunk are constructed) to set active_message_id appropriately.
---
Nitpick comments:
In @.github/workflows/codeql.yml:
- Around line 36-53: Replace floating major tags with the provided immutable
commit SHAs: change actions/checkout@v6 to
actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd, and update both
github/codeql-action/init@v4 and github/codeql-action/analyze@v4 to
github/codeql-action/init@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 and
github/codeql-action/analyze@95e58e9a2cdfd71adc6e0353d5c52f41a045d225
respectively so the workflow pins are immutable and reproducible.
In @.github/workflows/security-audit.yml:
- Around line 54-55: The CI step named "Install cargo-audit" currently installs
the latest cargo-audit; pin it to a specific version (e.g., 0.22.1) to ensure
deterministic runs by updating the install command used in that step (the cargo
install invocation for cargo-audit) to include a --version argument with the
chosen version while keeping --locked and --quiet.
In `@src/features/chat/ui/ChatImageController.ts`:
- Around line 467-472: The title/data-tooltip/aria-label synchronization in
ChatImageController._syncImageActionLabel is duplicated in
ChatTranslationRefresher; extract a shared helper (e.g.,
setTooltipAndAriaLabel(element: HTMLElement, label: string)) into a common
UI/util module, have it set title, dataset['tooltip'], and aria-label, then
replace the body of _syncImageActionLabel and the corresponding method in
ChatTranslationRefresher to call this new helper after obtaining the label via
this._deps.translate (keep using the same translate key/fallback pattern).
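The proposed shared helper could look roughly like this. The element type is narrowed to a DOM-free shape so the sketch stays self-contained and testable; the real helper would take an `HTMLElement` as the review suggests.

```typescript
// Minimal structural stand-in for HTMLElement, assumed for illustration.
interface LabeledElement {
  title: string;
  dataset: Record<string, string | undefined>;
  setAttribute(name: string, value: string): void;
}

// Single source of truth for the three label channels that
// ChatImageController and ChatTranslationRefresher currently set separately.
function setTooltipAndAriaLabel(element: LabeledElement, label: string): void {
  element.title = label;                      // native hover tooltip
  element.dataset['tooltip'] = label;         // custom tooltip styling hook
  element.setAttribute('aria-label', label);  // screen-reader label
}
```

Both call sites would then reduce to one call after resolving the label via `this._deps.translate(...)`, so future tooltip changes happen in exactly one place.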
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 1b270415-1595-4776-b825-0dd22c34b7c4
⛔ Files ignored due to path filters (2)
- `src/assets/fonts/Cubic_11.zh-subset.woff2` is excluded by `!**/*.woff2`
- `src/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (59)
- .coderabbit.yaml
- .github/CODEOWNERS
- .github/ISSUE_TEMPLATE/bug_report.yml
- .github/ISSUE_TEMPLATE/config.yml
- .github/ISSUE_TEMPLATE/feature_request.yml
- .github/PULL_REQUEST_TEMPLATE.md
- .github/workflows/codeql.yml
- .github/workflows/dependency-review.yml
- .github/workflows/security-audit.yml
- .nvmrc
- .vscode/extensions.json
- .vscode/settings.json
- CONTRIBUTING.md
- README.md
- SECURITY.md
- SUPPORT.md
- docs/en/CURRENT_STATE.md
- docs/en/DEVELOPMENT_WORKFLOW.md
- docs/en/RELEASES.md
- src-tauri/src/api/ai/mod.rs
- src-tauri/src/api/engine/mod.rs
- src-tauri/src/api/system/logs.rs
- src-tauri/src/domain/ai/provider_http.rs
- src-tauri/src/domain/engine/detector.rs
- src-tauri/src/domain/modules/github_release_selection.rs
- src-tauri/src/domain/modules/github_releases.rs
- src-tauri/src/domain/system/hardware_probe.rs
- src-tauri/src/lib.rs
- src/app/CoreEntry.ts
- src/features/ai/services/AIBridge.ts
- src/features/ai/services/AIBridgeMessageController.ts
- src/features/ai/services/AIChatTransport.ts
- src/features/chat/chat.ts
- src/features/chat/controllers/ChatGenerationController.ts
- src/features/chat/controllers/ChatHistoryController.ts
- src/features/chat/services/ChatActivationCoordinator.ts
- src/features/chat/ui/ChatImageController.ts
- src/features/chat/ui/ChatUI.test.ts
- src/features/console/services/ConsoleLogService.ts
- src/features/console/ui/ConsoleFilterControlHelper.ts
- src/features/console/ui/ConsoleUI.ts
- src/features/settings/ui/ModuleSettingsEngineFieldSupport.ts
- src/package.json
- src/scripts/check-size.js
- src/scripts/subset-cubic11-zh.js
- src/shared/services/ModulePlatformService.test.ts
- src/shared/services/ModulePlatformService.ts
- src/shared/shell/AppUI.test.ts
- src/shared/shell/AppUI.ts
- src/shared/shell/ui/AppUiCardActionFlow.test.ts
- src/shared/shell/ui/AppUiCardActionFlow.ts
- src/shared/shell/ui/AppUiModuleFlow.test.ts
- src/shared/shell/ui/AppUiModuleFlow.ts
- src/shared/shell/ui/ModalManager.test.ts
- src/shared/shell/ui/ModalManager.ts
- src/shared/shell/ui/ModuleCardRenderer.test.ts
- src/shared/shell/ui/ModuleCardRenderer.ts
- src/shared/types/bindings.ts
- src/styles/base/design-tokens.css
✅ Files skipped from review due to trivial changes (13)
- .github/CODEOWNERS
- .nvmrc
- .github/ISSUE_TEMPLATE/bug_report.yml
- .vscode/extensions.json
- SUPPORT.md
- .github/PULL_REQUEST_TEMPLATE.md
- .github/ISSUE_TEMPLATE/feature_request.yml
- .vscode/settings.json
- SECURITY.md
- README.md
- .github/ISSUE_TEMPLATE/config.yml
- docs/en/CURRENT_STATE.md
- src/features/chat/chat.ts
🚧 Files skipped from review as they are similar to previous changes (8)
- src/features/chat/services/ChatActivationCoordinator.ts
- src-tauri/src/domain/ai/provider_http.rs
- src/features/console/ui/ConsoleFilterControlHelper.ts
- src/features/console/ui/ConsoleUI.ts
- src-tauri/src/lib.rs
- src/features/chat/controllers/ChatHistoryController.ts
- src-tauri/src/domain/modules/github_releases.rs
- src-tauri/src/domain/modules/github_release_selection.rs
@coderabbitai review
Pushed commit 5c56d58 with the requested runtime/backend hardening and review fixes. Please re-check.
Dependency Review
The following issues were found:
✅ Actions performed
Full review triggered.
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
src-tauri/src/domain/ai/streaming.rs (1)
391-410: ⚠️ Potential issue | 🟠 Major — Don't treat unterminated streams as successful by default.
Line 391 now converts any non-empty stream that ends without `[DONE]` or terminal metadata into a successful reply. That makes a mid-stream disconnect indistinguishable from a real completion, so truncated assistant output can be persisted as if it were complete. Please keep the error path by default and only relax it behind an explicit provider capability for backends that are known not to emit terminal markers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/streaming.rs` around lines 391 - 410, Currently only empty streams (state.full_content.trim().is_empty()) are treated as an error when no completion marker is seen; change this so any stream that ended without saw_done and state.saw_terminal_chunk is considered an unterminated error by default: replace the first conditional to return the ChatResponse error whenever !saw_done && !state.saw_terminal_chunk (remove the full_content emptiness check), and if you must support backends that legitimately omit terminal markers add and check an explicit capability flag on the request (e.g., req.provider_allows_missing_terminal or similar) so the relaxation only occurs when that capability is present; keep the existing tracing::warn block and preserve fields like message_id, request_id, state.chunks_emitted and state.final_usage in the error response.
src/app/init.ts (1)
46-52: ⚠️ Potential issue | 🟠 Major — Don't mark the core initialized after `destroy()` wins the race.
If `destroy()` runs while `init()` is awaiting `_runInit()`, Line 49 flips `_isInitialized` back to `true` after Lines 61-64 have already torn the instance down. That leaves the object in a destroyed-but-initialized state, and later `init()` calls will short-circuit incorrectly.
Suggested fix:
```diff
 this._initPromise = this._runInit();
 try {
   await this._initPromise;
-  this._isInitialized = true;
+  if (!this._isDestroyed) {
+    this._isInitialized = true;
+  }
 } finally {
   this._initPromise = null;
 }
```
Also applies to: 59-64
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app/init.ts` around lines 46 - 52, The code sets this._isInitialized = true unconditionally after awaiting this._initPromise, which can race with destroy(); capture the current promise into a local (e.g., const initPromise = this._initPromise), await that local, and only set this._isInitialized = true if this._initPromise === initPromise and the instance hasn't been torn down (check whatever teardown flag or state your class uses, e.g., !this._isDestroyed). Apply the same guard to the other init path referenced (the block around lines 59-64) so both places verify the promise identity and teardown state before flipping _isInitialized.
src-tauri/src/domain/integration_api.rs (2)
644-650: ⚠️ Potential issue | 🟠 Major — Keep image settings keyed by the UI provider, not the backend alias.
Line 649 falls back to `provider.clone()`, so UI-only ids like `openrouter-custom-image` get merged into the shared `gpt-image` settings bucket. Backend dispatch should use `provider`, but persisted settings should stay scoped to `ui_provider`.
Suggested fix:
```diff
 let image_request = ImageGenerationRequest {
     provider: provider.clone(),
     prompt: payload.prompt.clone(),
     original_prompt: Some(payload.prompt),
     model: model.clone(),
-    settings_key: payload.settings_key.or_else(|| Some(provider.clone())),
+    settings_key: payload.settings_key.or_else(|| Some(ui_provider.clone())),
     session_id,
     steps: payload.steps,
     cfg_scale: payload.cfg_scale,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/integration_api.rs` around lines 644 - 650, The settings_key fallback currently uses provider.clone(), which merges UI-only ids into backend buckets; change the ImageGenerationRequest construction so settings_key uses payload.settings_key.or_else(|| Some(payload.ui_provider.clone())) (or similarly prefer payload.ui_provider when provided) instead of falling back to provider.clone(), keeping backend dispatch using provider but persisting settings scoped to ui_provider.
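The intended fallback is essentially a one-liner. The sketch below is a hypothetical TypeScript port for illustration (the real code is Rust using `Option::or_else`); field names follow the review comment.

```typescript
// Hypothetical request shape; only the field relevant to the fix is modeled.
interface ImagePayload {
  settingsKey?: string;
}

// Backend dispatch keeps using the resolved backend provider, while
// persisted settings stay scoped to the UI-facing provider id, so
// UI-only ids like "openrouter-custom-image" don't merge into a
// shared backend bucket such as "gpt-image".
function resolveSettingsKey(payload: ImagePayload, uiProvider: string): string {
  return payload.settingsKey ?? uiProvider;
}
```

An explicit `settingsKey` from the payload still wins; only the fallback changes from the backend alias to the UI provider id.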
463-465: ⚠️ Potential issue | 🟠 Major — Validate `module_id` on control routes as well.
`/status` and `/stage` now sanitize `module_id`, but `/start|stop|restart` still forwards the raw path segment into `module_controller::control`. That leaves the most privileged module route outside the new validation guard.
Suggested fix:
```diff
 ("POST", ["v1", "modules", module_id, action]) => {
+    crate::domain::modules::downloader::validate_module_id(module_id)?;
     let action = parse_module_action(action)?;
     let response = module_controller::control(context.app, module_id, action).await?;
     Ok(json_response(
         200,
         json!({ "ok": response.success, "response": response }),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/integration_api.rs` around lines 463 - 465, The POST control branch currently forwards the raw path segment module_id into module_controller::control; update the ("POST", ["v1", "modules", module_id, action]) arm to first run the same module_id validation/sanitization used by the /status and /stage routes (reuse the existing sanitize/validate helper used there), then call parse_module_action(action)? and pass the sanitized module_id into module_controller::control(context.app, sanitized_module_id, action). Ensure you propagate/return the validation error the same way other routes do so malicious or invalid module ids are rejected before calling module_controller::control.
src-tauri/src/domain/modules/settings_ui_protocol.rs (1)
347-358: ⚠️ Potential issue | 🟠 Major — Reject empty `module_id` segments before validation.
`splitn(3, ':')` returns `Some("")` for labels like `module-settings::nonce`, so this still falls through to `validate_module_id("")`. That turns an owned-webview permission failure into a validation/400 response instead of the intended 403.
Suggested fix:
```diff
-    if prefix != Some(MODULE_SETTINGS_LABEL_PREFIX) || module_id.is_none() || nonce.is_none() {
+    if prefix != Some(MODULE_SETTINGS_LABEL_PREFIX)
+        || module_id.is_none()
+        || module_id == Some("")
+        || nonce.is_none()
+    {
         return Err(AppError::PermissionDenied(
             "Module settings route is only available to owned settings webviews".to_string(),
         ));
     }
-    let Some(module_id) = module_id else {
-        return Err(AppError::PermissionDenied(
-            "Module settings route is only available to owned settings webviews".to_string(),
-        ));
-    };
+    let module_id = module_id.expect("checked above");
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/modules/settings_ui_protocol.rs` around lines 347 - 358, The current guard lets through an empty module_id (e.g., Some("")) so validate_module_id("") runs and yields a 400 instead of the intended 403; update the initial permission check to reject empty module_id segments (check module_id.is_some() && module_id.as_ref().map(|s| !s.is_empty()).unwrap_or(false) or use module_id.as_deref().filter(|s| !s.is_empty()) ) so the function returns the PermissionDenied path, then only unwrap/pattern-match the non-empty module_id before calling crate::domain::modules::downloader::validate_module_id(module_id); ensure nonce is still checked as before and remove the redundant let Some(module_id) else block if you consolidate the checks.
src-tauri/src/domain/modules/controller/lifecycle.rs (1)
193-205: ⚠️ Potential issue | 🟠 Major — Don't report success after failing to publish `module.pid`.
Both paths leave the module running but undiscoverable on disk. After a launcher restart, that process can no longer be reconciled or stopped reliably. This should return an `AppError` (or tear the child down) instead of just logging and continuing.
Also applies to: 226-234
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/modules/controller/lifecycle.rs` around lines 193 - 205, The current PID publish logic writes to temp_pid_file then renames to module.pid but only logs failures; instead make the function bail out with an AppError (or tear down the spawned child) when either std::fs::write(&temp_pid_file, ...) or std::fs::rename(&temp_pid_file, &pid_file) fails: replace the logging-only branches for the write and rename errors with code that (1) attempts to cleanly stop the launched process (e.g., call child.kill() / child.wait() or the equivalent shutdown routine you have) and (2) returns an Err(AppError::... ) propagating the failure (include module_id, temp and target paths and underlying error in the AppError message). Apply the same change to the duplicate block around lines 226-234 so a failed PID publish does not leave the process running undiscoverable.
♻️ Duplicate comments (5)
src-tauri/src/domain/engine/engine_args.rs (2)
29-47: ⚠️ Potential issue | 🟠 Major — Drop paired SD.cpp option values too.
This still removes only the unsupported flag token. For two-token forms like `--preview vae` or `--preview-path <file>`, the next token is forwarded as a bare positional arg, which can break SD.cpp CLI parsing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/engine_args.rs` around lines 29 - 47, The sdcpp_extra_args function currently only filters out unsupported flag tokens but leaves their separate value tokens (two-token forms) in place; update sdcpp_extra_args to iterate through config.extra_args with a peekable iterator (or index loop) and when you encounter an arg that exactly equals any entry in SDCPP_UNSUPPORTED_FLAGS or SDCPP_SERVER_UNSUPPORTED_FLAGS, consume and skip the next token as well (in addition to skipping prefix/=`value` forms which you already handle), ensuring both the flag and its paired value are not included in the returned Vec<String>.
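The two-token filtering the prompt describes can be sketched as follows. This is a TypeScript port for illustration (the real code is Rust in `sdcpp_extra_args`), and the flag table below is an assumption standing in for the actual unsupported-flag lists.

```typescript
// Flags to strip, mapped to whether they consume a following value token.
// Illustrative entries only; the real tables live in engine_args.rs.
const UNSUPPORTED_FLAGS: Record<string, { takesValue: boolean }> = {
  '--preview': { takesValue: true },       // e.g. "--preview vae"
  '--preview-path': { takesValue: true },  // e.g. "--preview-path out.png"
  '--color': { takesValue: false },        // bare switch, no value
};

function filterUnsupportedArgs(args: string[]): string[] {
  const kept: string[] = [];
  for (let i = 0; i < args.length; i++) {
    const arg = args[i];
    // "--flag=value" carries its name before the '='.
    const flagName = arg.split('=', 1)[0];
    const rule = UNSUPPORTED_FLAGS[flagName];
    if (!rule) {
      kept.push(arg);
      continue;
    }
    // For the separate "--flag value" form, also consume the paired
    // value token; otherwise it would be forwarded as a bare positional.
    if (rule.takesValue && !arg.includes('=')) {
      i++;
    }
  }
  return kept;
}
```

The same shape translates to Rust with a peekable iterator: on a matching flag, `next()` is called once more when the flag takes a separate value.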
50-58: ⚠️ Potential issue | 🟠 Major — Preview is still hard-disabled here.
Both helpers always return `None`/`false`, so any code that relies on these checks can never resolve or surface an SD.cpp preview image anymore.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/engine_args.rs` around lines 50 - 58, Both helpers are stubbed out; implement real parsing of extra_args in resolve_sdcpp_preview_path and sdcpp_preview_enabled: in resolve_sdcpp_preview_path, scan the slice for either a combined argument like "--sdcpp-preview=/path/to/file" or a flag "--sdcpp-preview" followed by a separate path token, and return Some(PathBuf::from(path)) when found (otherwise None); in sdcpp_preview_enabled, return true if the preview flag is present or if resolve_sdcpp_preview_path(extra_args) yields Some, ensuring you reference the functions resolve_sdcpp_preview_path and sdcpp_preview_enabled to locate the code.
src-tauri/src/domain/engine/manager.rs (3)
509-527: ⚠️ Potential issue | 🔴 Critical — Guard `stop_slot_after_error()` against hot-swap races.
This still snapshots the engine id, emits the error, and then stops whatever currently occupies `capability`. If engine A crashes after engine B replaces it in the same slot, Line 521 can kill B. Take the lifecycle lock and remove+kill under the same critical section, or re-check that the current slot's `definition.id` still matches the captured id before stopping.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/manager.rs` around lines 509 - 527, The current stop_slot_after_error captures an engine_id from self.slots, emits the error, then calls stop_slot(capability) which may now affect a different engine if a hot-swap occurred; modify stop_slot_after_error to hold the lifecycle critical section or re-check the slot id before stopping: acquire the same lock used to guard slot lifecycle (self.slots.lock().await), lookup the slot for the given capability and confirm its definition.id matches the previously captured engine_id, only then call stop_slot(capability) (or remove+kill the slot under that lock); if the ids differ, skip stopping and just emit the error for the original id via self.emitter.emit_error to avoid killing a replaced engine.
601-621: ⚠️ Potential issue | 🟠 Major
`prune_dead_slots()` makes the same unsafe `try_wait()` assumption.
On a status-query error this path marks the slot dead, removes it, and emits "Local engine process exited.". That can orphan a still-running child and report a false exit. Handle this the same way as the stale-slot path: keep the slot until an explicit stop succeeds, or kill before removing/emitting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/manager.rs` around lines 601 - 621, prune_dead_slots() currently treats any error from try_wait() as a definitive exit by pushing the slot into dead, removing it from slots, and calling self.emitter.emit_error with "Local engine process exited."; instead, change the flow so that when try_wait() returns Err(...) you do not immediately remove the slot or emit the exit: either attempt an explicit stop/terminate of the child (and only remove+emit upon successful stop or confirmed exit) or explicitly kill the process before removing/emitting; update the branch handling Err(error) in prune_dead_slots() to keep the slot in slots and invoke the explicit stop/kill logic (and only then call slots.remove(capability) and self.emitter.emit_error(&engine_id, ...)) so a transient status-query error cannot orphan a running engine or produce a false exit event.
228-245: ⚠️ Potential issue | 🟠 Major
Don't free the slot on a `try_wait()` error.
`try_wait()` failing does not prove the child is gone. This branch removes `primary_cap` and continues, which can detach a still-running engine from the manager while it keeps holding the port/resources. Keep the slot until an explicit kill succeeds, or remove it only after `Self::kill_engine(...)` returns `Ok(())`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/manager.rs` around lines 228 - 245, The Err branch of existing.process.try_wait() must not unconditionally call slots.remove(&primary_cap) because a failed try_wait() doesn't prove the child exited; instead, on Err run the same explicit shutdown path used elsewhere (call Self::kill_engine(...) for that engine) and only remove the slot from slots when that call returns Ok(()); do not remove the slot on try_wait() error alone. Locate the match arm handling Err(error) for existing.process.try_wait(), remove the unconditional slots.remove(&primary_cap) there, invoke Self::kill_engine(...) (passing the same engine id / slot info) and call slots.remove(&primary_cap) only if kill_engine returns Ok(()).
🧹 Nitpick comments (2)
src-tauri/src/domain/ai/image_service.rs (1)
60-79: Drop the local workload guard before session persistence.
`_local_workload_guard` stays alive until the function returns, so local capacity remains reserved while Lines 95-109 do session mutation and disk I/O. That unnecessarily widens the critical section and can block unrelated local work. Scope the guard to just the provider call, or explicitly drop it right after `images` is computed.
♻️ Narrow the guard lifetime
```diff
-    let _local_workload_guard = if is_cloud_image_provider(&request.provider) {
-        None
-    } else {
-        Some(engine_manager.acquire_local_workload().await)
-    };
-
-    let images = if request.provider == "comfyui" {
-        stop_conflicting_local_engine(engine_manager, Capability::Image).await?;
-        process_comfyui_request(&request, image_generation_state, settings_service).await?
-    } else if is_cloud_image_provider(&request.provider) {
-        process_cloud_image_request(&request).await?
-    } else {
-        process_local_image_request(
-            &request,
-            engine_manager,
-            image_generation_state,
-            local_engine_access,
-        )
-        .await?
-    };
+    let images = {
+        let _local_workload_guard = if is_cloud_image_provider(&request.provider) {
+            None
+        } else {
+            Some(engine_manager.acquire_local_workload().await)
+        };
+
+        if request.provider == "comfyui" {
+            stop_conflicting_local_engine(engine_manager, Capability::Image).await?;
+            process_comfyui_request(&request, image_generation_state, settings_service).await?
+        } else if is_cloud_image_provider(&request.provider) {
+            process_cloud_image_request(&request).await?
+        } else {
+            process_local_image_request(
+                &request,
+                engine_manager,
+                image_generation_state,
+                local_engine_access,
+            )
+            .await?
+        }
+    };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/image_service.rs` around lines 60 - 79, The local workload guard (_local_workload_guard), created via engine_manager.acquire_local_workload(), is kept alive across session persistence and disk I/O; narrow its lifetime by scoping or dropping it immediately after computing images: ensure the guard only surrounds the provider-specific call (the condition that chooses process_comfyui_request/process_cloud_image_request/process_local_image_request) and then explicitly drop(_local_workload_guard) (or wrap the provider branch in its own block) before any session mutation or disk operations that follow; reference _local_workload_guard, is_cloud_image_provider, engine_manager.acquire_local_workload(), and the images variable when making the change.
src/features/ai/services/AIBridge.test.ts (1)
287-297: Also guard against `stop_engine` here.
This test only proves `stop_engine_slot` is untouched. If the local-provider path regresses to calling `stop_engine`, it would still pass while shutting down the active engine. Adding the second assertion keeps the intended "no shutdown on select" behavior covered.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/ai/services/AIBridge.test.ts` around lines 287 - 297, The test in AIBridge.test.ts currently asserts mockInvoke was not called with 'stop_engine_slot' when selecting a local provider; add a parallel assertion to also guard against calls to 'stop_engine' so the test fails if the code shuts down the active engine; specifically, update the test around aiBridge.startProvider('llamacpp') to include expect(mockInvoke).not.toHaveBeenCalledWith('stop_engine', expect.any(Object)) in addition to the existing 'stop_engine_slot' assertion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/api/system/logs.rs`:
- Around line 229-235: The open_console_log_target function currently uses the
raw view_id when resolving the filesystem path, causing mismatch with
get_console_logs() and clear_console_logs(); call
canonical_console_view_id(view_id) at the start of open_console_log_target() and
use that canonicalized id when constructing the log directory/path and any
subsequent lookups, so opening the folder targets the same canonical engine:id
directory that get_console_logs() and clear_console_logs() operate on.
In `@src-tauri/src/domain/ai/ai_dispatch.rs`:
- Around line 253-258: In the RequireRunning branch you rebuild messages_context
from request.messages.clone(), discarding the merged session history produced by
prepare_chat_dispatch; instead, use the merged/prepared messages produced by
prepare_chat_dispatch (the variable it returns or sets, e.g., the merged
messages/messages_context prepared earlier) when calling
local_messages_with_system_prompt so follow-up turns include the full merged
session history; update the RequireRunning path to pass that prepared merged
history into local_messages_with_system_prompt rather than
request.messages.clone().
In `@src-tauri/src/domain/ai/image_service.rs`:
- Line 109: The call sessions.force_save().await? is converting a post-save
failure into an overall generation failure; change it to a best-effort
persistence so the already-successful provider response and images are returned
even if saving fails. Replace the fallible short-circuit
(sessions.force_save().await?) with explicit error handling around
sessions.force_save().await that captures Err(e), logs/warns the failure
(including error and context such as the images/transcript id), and continues to
return the successful images result (or return a tuple/result that includes a
persistence warning flag) instead of propagating the error; keep references to
sessions.force_save(), images, and the enclosing generation function when
implementing this change.
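The fix described above can be sketched as a pure function. Everything here is an illustrative stand-in for the real session types: persist_best_effort, the warning string, and the Vec<String> image list are hypothetical, not the project's actual API.

```rust
// Best-effort persistence sketch: a failed save is surfaced as a warning
// alongside the already-successful images instead of failing the generation.
fn persist_best_effort<E: std::fmt::Display>(
    save_result: Result<(), E>,
    images: Vec<String>,
) -> (Vec<String>, Option<String>) {
    match save_result {
        Ok(()) => (images, None),
        Err(e) => {
            // In the real code this would go through tracing::warn! with
            // context (error, image/transcript ids) before continuing.
            (images, Some(format!("session save failed: {e}")))
        }
    }
}
```

The caller then returns the images unconditionally and only attaches the persistence warning when present.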
In `@src-tauri/src/domain/ai/session.rs`:
- Around line 124-132: The snapshot is currently taken before acquiring the
save_lock, allowing overlapping saves to flush out of order; move the HashMap
snapshot creation so it's done while holding the Arc<Mutex/Lock> referenced by
save_lock (i.e., inside the critical section that calls
Self::flush_snapshot_locked) so that spawn_blocking receives the
already-locked-and-captured state in order; update all similar usages (the other
spawn_blocking calls that call Self::flush_snapshot_locked with sessions) to
build the snapshot within the lock before calling tokio::task::spawn_blocking to
ensure persisted state follows lock acquisition order.
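A minimal std-only sketch of the ordering fix (the real code uses tokio's async lock and spawn_blocking; save_snapshot and the String session payloads here are hypothetical):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// The snapshot is cloned while save_lock is held, so two overlapping saves
// can never flush their snapshots out of lock-acquisition order.
fn save_snapshot(
    save_lock: &Mutex<()>,
    sessions: &Mutex<HashMap<String, String>>,
    flushed: &Mutex<Vec<HashMap<String, String>>>,
) {
    let _save_guard = save_lock.lock().unwrap(); // serialize the whole save
    let snapshot = sessions.lock().unwrap().clone(); // snapshot under save_lock
    flushed.lock().unwrap().push(snapshot); // stands in for the disk flush
}
```

Taking the snapshot before the lock would let a later save capture newer state yet flush it first; capturing inside the critical section removes that window.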
In `@src-tauri/src/domain/ai/streaming.rs`:
- Around line 558-564: The code only treats a chunk as terminal when
json.get("stop") is true, but Ollama uses a top-level "done" boolean; update the
terminal-detection in the chunk-handling logic (the block that currently checks
json.get("stop").and_then(...).unwrap_or(false)) to also consider
json.get("done").and_then(serde_json::Value::as_bool).unwrap_or(false) (i.e.,
set state.saw_terminal_chunk = true if either "stop" or "done" is true) so that
generate_stream and state.saw_terminal_chunk correctly recognize Ollama
completions; ensure the check still uses the same serde_json::Value utilities as
the existing code path.
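The two-marker check reduces to one predicate. is_terminal_chunk is a hypothetical helper that receives the already-extracted booleans; in the real code they come from json.get("stop") / json.get("done") via serde_json::Value::as_bool.

```rust
// A chunk is terminal when either the llama.cpp-style "stop" or the
// Ollama-style "done" top-level boolean is true; absent fields count as false.
fn is_terminal_chunk(stop: Option<bool>, done: Option<bool>) -> bool {
    stop.unwrap_or(false) || done.unwrap_or(false)
}
```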
In `@src-tauri/src/domain/modules/controller/lifecycle.rs`:
- Around line 429-446: The helper find_matching_script_processes currently
swallows spawn_blocking failures and returns an empty Vec which hides scan
errors; change its signature from async fn find_matching_script_processes(&self,
entry_path: &Path) -> Vec<usize> to return a Result<Vec<usize>, anyhow::Error>
(or your crate's error type), map the tokio::task::spawn_blocking().await Err
branch into an Err(anyhow::Error::from(error)) while logging context with
tracing::error including self.module_id and the underlying error, and update
callers (start/stop) to propagate/handle the Result (fail fast or return the
error) instead of assuming an empty list so scan-task failures no longer
masquerade as “no matches”; keep references to self.module_path, self.module_id
and process::find_script_module_processes and preserve the spawn_blocking call.
In `@src-tauri/src/domain/modules/downloader_transfer.rs`:
- Around line 193-203: The current match around load_partial_metadata treats
every Err as corrupt and deletes the metadata; change it to distinguish
parse/validation errors from IO/system errors by matching the specific error
variant returned by load_partial_metadata (e.g., Parse/Corrupt variant vs
Io/Read variant): if the error indicates malformed metadata (parse/validation),
log the warning and call remove_partial_metadata(task.dest_path).await? and set
loaded_resume_metadata = None; if the error is an IO/transient error, propagate
or return the error instead of deleting the file so valid partial downloads
aren't discarded. Update the match arms that reference load_partial_metadata,
loaded_resume_metadata, and remove_partial_metadata accordingly.
- Around line 150-151: The current conditional uses
crate::domain::license::storage::load_license()? which propagates any read/parse
error and aborts the download; change it so license read errors are non-fatal:
call load_license() and convert Err to None (e.g., via match, .ok(), or
map_err->None) so only a successful Ok(Some(license)) with a non-empty
license.key causes X-Axelate-License injection; if load_license() returns Err,
swallow or log the error and continue the download flow. Ensure the code path
referencing load_license() and the X-Axelate-License header injection only acts
on a valid license value and does not use the ? operator.
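A hedged sketch of the non-fatal read: license_header is a hypothetical helper, and the storage call is modeled as a plain Result rather than the real load_license signature.

```rust
// Read/parse errors and empty keys both collapse to "no header"; only a
// present, non-empty key yields the X-Axelate-License injection.
fn license_header(
    load_result: Result<Option<String>, String>,
) -> Option<(&'static str, String)> {
    load_result
        .ok() // swallow read/parse errors instead of aborting the download
        .flatten()
        .filter(|key| !key.is_empty())
        .map(|key| ("X-Axelate-License", key))
}
```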
In `@src-tauri/src/infrastructure/config/engine_settings.rs`:
- Around line 66-85: The current fallback in the save/publish logic (the
tokio::fs::remove_file(path) + tokio::fs::rename(&tmp, path) sequence inside the
error branch that logs "Atomic engine config rename failed") creates a crash
window where the destination file can be missing; replace this unsafe
remove-then-rename pattern with a true atomic replace operation: stop deleting
path and instead persist the temp file atomically (e.g., use tempfile's
NamedTempFile/NamedTempFile::persist or an equivalent atomic replace API) when
the initial tokio::fs::rename(&tmp, path) fails, propagate and wrap errors using
the existing AppError::Io path, and keep cleanup_engine_config_tmp(&tmp) for
rollback on persistent failures; update error messages that reference
rename_error and error to reflect the new persist/replace operation and ensure
no remove_file(path) is used.
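A std-only sketch of write-then-rename without the remove_file step. On Unix, fs::rename atomically replaces an existing destination, so the config file is never missing on disk; on Windows, a persist-style API as suggested in the prompt is the portable route. save_atomically is an illustrative name, not the project's function.

```rust
use std::fs;
use std::io;
use std::path::Path;

// Write to a sibling temp file, then rename over the target in one step.
fn save_atomically(path: &Path, contents: &str) -> io::Result<()> {
    let tmp = path.with_extension("tmp");
    fs::write(&tmp, contents)?;
    // No remove_file(path): rename replaces the target in a single operation.
    fs::rename(&tmp, path).map_err(|e| {
        let _ = fs::remove_file(&tmp); // roll back the temp file on failure
        e
    })
}
```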
In `@src-tauri/src/infrastructure/filesystem/local_file_service.rs`:
- Around line 42-59: The current logic unconditionally attempts to remove the
destination file after any fs::rename(&tmp, path) failure; instead, only attempt
to remove the destination when the initial rename failed for existence-related
reasons. Modify the failure branch around fs::rename(&tmp, path).await so that
if first_error.kind() is std::io::ErrorKind::AlreadyExists (or other
existence-specific kinds you deem appropriate) you proceed to try
fs::remove_file(path).await and then retry fs::rename(&tmp, path).await
(cleaning up tmp on subsequent errors), but if first_error is not an
existence-related error immediately return Err(AppError::Io(...)) including
first_error and do not remove path; keep using the same variables tmp, path,
first_error, remove_error, and second_error for messaging.
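The gate itself is one predicate; should_remove_destination is a hypothetical helper expressing which first-rename error kinds justify the remove-and-retry fallback, with everything else surfaced immediately.

```rust
use std::io::ErrorKind;

// Only an existence-related failure makes it safe to delete the destination
// and retry the rename; e.g. a permission error must not remove `path`.
fn should_remove_destination(first_error_kind: ErrorKind) -> bool {
    matches!(first_error_kind, ErrorKind::AlreadyExists)
}
```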
In `@src/app/CoreComposition.test.ts`:
- Around line 11-13: The global clearTimeout spy (clearTimeoutSpy) is not always
restored if assertions fail; wrap the test assertions that rely on the spy in a
try/finally and call clearTimeoutSpy.mockRestore() in the finally block (or move
the restore to a shared afterEach) so the mocked global is always restored even
on failures; look for the vi.spyOn(globalThis, 'clearTimeout') usage in
CoreComposition.test (and the related 46-51 section) and ensure mockRestore() is
executed unconditionally.
In `@src/app/CoreLifecycleController.ts`:
- Around line 134-145: The current runInit calls runCoreBootstrap even when the
core is already destroyed; change runInit so it checks
this._deps.state.isDestroyed() immediately and returns before invoking
runCoreBootstrap (move the isDestroyed guard to before the await
runCoreBootstrap(...) call), ensuring no startup side-effects from
runCoreBootstrap occur when the controller is already destroyed; you can keep
the existing post-bootstrap check as well for safety.
- Around line 177-188: The call to _selectedModuleChangedUnlisten in destroy()
can throw and abort the rest of teardown; wrap the invocation of
this._selectedModuleChangedUnlisten() in its own try/catch so any exception is
caught and does not prevent calling destroyCoreResources({
deferredChatInitTimer: this._deferredChatInitTimer, ...this._deps.disposables })
and clearing this._deferredChatInitTimer; on catch, log or swallow the error
(using the existing logger or console) and continue to the protected
cleanup/finally block so disposables are always destroyed.
In `@src/features/ai/services/AIChatTransport.ts`:
- Around line 169-174: sendSilent currently strips the caller's session_id by
destructuring it away before building requestWithId, which breaks conversation
continuity; change the construction in sendSilent (around _generateRequestId and
requestWithId) to preserve the original session_id (include session_id:
request.session_id on the IChatRequest you build) instead of removing it,
ensuring the new request contains request_id and the original session_id so the
backend continues the same conversation.
- Around line 123-129: The send() path waits on streamDone but only chatChannel
currently resolves completion; update thoughtChannel.onmessage (inside
AIChatTransport) to detect the terminal/done envelope for the same request
(reuse _isPayloadForRequest(payload, requestId)) — i.e., treat payload.kind ===
'done' (or whatever terminal envelope marker is used) the same as chatChannel's
done handling — and call the same completion/resolution logic that sets/resolves
streamDone (or invoke the shared finalizeStream/resolveStreamDone logic) while
guarding against double-resolution if both channels emit done.
In `@src/features/chat/controllers/ChatSendController.ts`:
- Around line 215-223: The leading-whitespace guard is ineffective because
ensureStreamingHandle() is called before attaching the onChunk listener, so
streamingHandle is never null; update the onChunk callback to check the actual
streamingHandle variable (not call ensureStreamingHandle()) and only ignore
whitespace-only chunks when streamingHandle is still null/just-created, or
alternatively remove the initial premature ensureStreamingHandle() and defer
creating/setting status until the first non-empty chunk; refer to
ensureStreamingHandle(), streamingHandle, this._options.aiBridge.onChunk
listener, handle.setStatus and ensureStreamingHandle().update to locate where to
change the check.
---
Outside diff comments:
In `@src-tauri/src/domain/ai/streaming.rs`:
- Around line 391-410: Currently only empty streams
(state.full_content.trim().is_empty()) are treated as an error when no
completion marker is seen; change this so any stream that ended without saw_done
and state.saw_terminal_chunk is considered an unterminated error by default:
replace the first conditional to return the ChatResponse error whenever
!saw_done && !state.saw_terminal_chunk (remove the full_content emptiness
check), and if you must support backends that legitimately omit terminal markers
add and check an explicit capability flag on the request (e.g.,
req.provider_allows_missing_terminal or similar) so the relaxation only occurs
when that capability is present; keep the existing tracing::warn block and
preserve fields like message_id, request_id, state.chunks_emitted and
state.final_usage in the error response.
In `@src-tauri/src/domain/integration_api.rs`:
- Around line 644-650: The settings_key fallback currently uses
provider.clone(), which merges UI-only ids into backend buckets; change the
ImageGenerationRequest construction so settings_key uses
payload.settings_key.or_else(|| Some(payload.ui_provider.clone())) (or similarly
prefer payload.ui_provider when provided) instead of falling back to
provider.clone(), keeping backend dispatch using provider but persisting
settings scoped to ui_provider.
- Around line 463-465: The POST control branch currently forwards the raw path
segment module_id into module_controller::control; update the ("POST", ["v1",
"modules", module_id, action]) arm to first run the same module_id
validation/sanitization used by the /status and /stage routes (reuse the
existing sanitize/validate helper used there), then call
parse_module_action(action)? and pass the sanitized module_id into
module_controller::control(context.app, sanitized_module_id, action). Ensure you
propagate/return the validation error the same way other routes do so malicious
or invalid module ids are rejected before calling module_controller::control.
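Both fixes above reduce to small helpers. The character rules in validate_module_id and the resolve_settings_key signature are assumptions for illustration, not the project's actual implementations:

```rust
// Reject path traversal and other hostile ids before dispatching control.
fn validate_module_id(module_id: &str) -> Result<&str, String> {
    let ok = !module_id.is_empty()
        && module_id
            .chars()
            .all(|c| c.is_ascii_alphanumeric() || c == '-' || c == '_');
    if ok {
        Ok(module_id)
    } else {
        Err(format!("invalid module id: {module_id:?}"))
    }
}

// Prefer an explicit settings_key, then the UI provider id, never the backend
// provider id, so custom UI providers keep their own settings bucket.
fn resolve_settings_key(explicit: Option<String>, ui_provider: &str) -> String {
    explicit.unwrap_or_else(|| ui_provider.to_string())
}
```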
In `@src-tauri/src/domain/modules/controller/lifecycle.rs`:
- Around line 193-205: The current PID publish logic writes to temp_pid_file
then renames to module.pid but only logs failures; instead make the function
bail out with an AppError (or tear down the spawned child) when either
std::fs::write(&temp_pid_file, ...) or std::fs::rename(&temp_pid_file,
&pid_file) fails: replace the logging-only branches for the write and rename
errors with code that (1) attempts to cleanly stop the launched process (e.g.,
call child.kill() / child.wait() or the equivalent shutdown routine you have)
and (2) returns an Err(AppError::... ) propagating the failure (include
module_id, temp and target paths and underlying error in the AppError message).
Apply the same change to the duplicate block around lines 226-234 so a failed
PID publish does not leave the process running undiscoverable.
In `@src-tauri/src/domain/modules/settings_ui_protocol.rs`:
- Around line 347-358: The current guard lets through an empty module_id (e.g.,
Some("")) so validate_module_id("") runs and yields a 400 instead of the
intended 403; update the initial permission check to reject empty module_id
segments (check module_id.is_some() && module_id.as_ref().map(|s|
!s.is_empty()).unwrap_or(false) or use module_id.as_deref().filter(|s|
!s.is_empty()) ) so the function returns the PermissionDenied path, then only
unwrap/pattern-match the non-empty module_id before calling
crate::domain::modules::downloader::validate_module_id(module_id); ensure nonce
is still checked as before and remove the redundant let Some(module_id) else
block if you consolidate the checks.
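The consolidated guard can be a one-liner; effective_module_id is a hypothetical name for the filter that makes both None and Some("") fall into the PermissionDenied path before validate_module_id ever runs:

```rust
// Treat an absent or empty module_id segment identically: no id at all.
fn effective_module_id(module_id: Option<&str>) -> Option<&str> {
    module_id.filter(|s| !s.is_empty())
}
```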
In `@src/app/init.ts`:
- Around line 46-52: The code sets this._isInitialized = true unconditionally
after awaiting this._initPromise, which can race with destroy(); capture the
current promise into a local (e.g., const initPromise = this._initPromise),
await that local, and only set this._isInitialized = true if this._initPromise
=== initPromise and the instance hasn't been torn down (check whatever teardown
flag or state your class uses, e.g., !this._isDestroyed). Apply the same guard
to the other init path referenced (the block around lines 59-64) so both places
verify the promise identity and teardown state before flipping _isInitialized.
---
Duplicate comments:
In `@src-tauri/src/domain/engine/engine_args.rs`:
- Around line 29-47: The sdcpp_extra_args function currently only filters out
unsupported flag tokens but leaves their separate value tokens (two-token forms)
in place; update sdcpp_extra_args to iterate through config.extra_args with a
peekable iterator (or index loop) and when you encounter an arg that exactly
equals any entry in SDCPP_UNSUPPORTED_FLAGS or SDCPP_SERVER_UNSUPPORTED_FLAGS,
consume and skip the next token as well (in addition to skipping the
"--flag=value" forms which you already handle), ensuring both the flag and its paired value are
not included in the returned Vec<String>.
- Around line 50-58: Both helpers are stubbed out; implement real parsing of
extra_args in resolve_sdcpp_preview_path and sdcpp_preview_enabled: in
resolve_sdcpp_preview_path, scan the slice for either a combined argument like
"--sdcpp-preview=/path/to/file" or a flag "--sdcpp-preview" followed by a
separate path token, and return Some(PathBuf::from(path)) when found (otherwise
None); in sdcpp_preview_enabled, return true if the preview flag is present or
if resolve_sdcpp_preview_path(extra_args) yields Some, ensuring you reference
the functions resolve_sdcpp_preview_path and sdcpp_preview_enabled to locate the
code.
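Both engine_args fixes are plain token scanning. The sketch below assumes a hypothetical deny-list (the real SDCPP_*_UNSUPPORTED_FLAGS constants live in engine_args.rs) and takes the --sdcpp-preview flag name from the review text:

```rust
use std::path::PathBuf;

const UNSUPPORTED_FLAGS: &[&str] = &["--mmproj", "--gpu-layers"];

// Drop unsupported flags in both "--flag=value" and two-token "--flag value" forms.
fn filter_extra_args(extra_args: &[String]) -> Vec<String> {
    let mut kept = Vec::new();
    let mut iter = extra_args.iter().peekable();
    while let Some(arg) = iter.next() {
        if UNSUPPORTED_FLAGS
            .iter()
            .any(|f| arg == f || arg.starts_with(&format!("{f}=")))
        {
            // Bare "--flag": also consume the paired value token, unless the
            // next token is itself another flag.
            if UNSUPPORTED_FLAGS.contains(&arg.as_str()) {
                if let Some(next) = iter.peek() {
                    if !next.starts_with("--") {
                        iter.next();
                    }
                }
            }
            continue;
        }
        kept.push(arg.clone());
    }
    kept
}

// Accept "--sdcpp-preview=/path" or "--sdcpp-preview /path".
fn resolve_sdcpp_preview_path(extra_args: &[String]) -> Option<PathBuf> {
    let mut iter = extra_args.iter().peekable();
    while let Some(arg) = iter.next() {
        if let Some(path) = arg.strip_prefix("--sdcpp-preview=") {
            if !path.is_empty() {
                return Some(PathBuf::from(path));
            }
        } else if arg == "--sdcpp-preview" {
            if let Some(next) = iter.peek() {
                if !next.starts_with("--") {
                    return Some(PathBuf::from(next.as_str()));
                }
            }
        }
    }
    None
}

fn sdcpp_preview_enabled(extra_args: &[String]) -> bool {
    extra_args.iter().any(|a| a == "--sdcpp-preview")
        || resolve_sdcpp_preview_path(extra_args).is_some()
}
```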
In `@src-tauri/src/domain/engine/manager.rs`:
- Around line 509-527: The current stop_slot_after_error captures an engine_id
from self.slots, emits the error, then calls stop_slot(capability) which may now
affect a different engine if a hot-swap occurred; modify stop_slot_after_error
to hold the lifecycle critical section or re-check the slot id before stopping:
acquire the same lock used to guard slot lifecycle (self.slots.lock().await),
lookup the slot for the given capability and confirm its definition.id matches
the previously captured engine_id, only then call stop_slot(capability) (or
remove+kill the slot under that lock); if the ids differ, skip stopping and just
emit the error for the original id via self.emitter.emit_error to avoid killing
a replaced engine.
- Around line 601-621: prune_dead_slots() currently treats any error from
try_wait() as a definitive exit by pushing the slot into dead, removing it from
slots, and calling self.emitter.emit_error with "Local engine process exited.";
instead, change the flow so that when try_wait() returns Err(...) you do not
immediately remove the slot or emit the exit: either attempt an explicit
stop/terminate of the child (and only remove+emit upon successful stop or
confirmed exit) or explicitly kill the process before removing/emitting; update
the branch handling Err(error) in prune_dead_slots() to keep the slot in slots
and invoke the explicit stop/kill logic (and only then call
slots.remove(capability) and self.emitter.emit_error(&engine_id, ...)) so a
transient status-query error cannot orphan a running engine or produce a false
exit event.
- Around line 228-245: The Err branch of existing.process.try_wait() must not
unconditionally call slots.remove(&primary_cap) because a failed try_wait()
doesn't prove the child exited; instead, on Err run the same explicit shutdown
path used elsewhere (call Self::kill_engine(...) for that engine) and only
remove the slot from slots when that call returns Ok(()); do not remove the slot
on try_wait() error alone. Locate the match arm handling Err(error) for
existing.process.try_wait(), remove the unconditional slots.remove(&primary_cap)
there, invoke Self::kill_engine(...) (passing the same engine id / slot info)
and call slots.remove(&primary_cap) only if kill_engine returns Ok(()).
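The compare-before-stop pattern for the hot-swap race can be sketched synchronously; the real manager holds an async lock and also kills the child process, and Slot here is a stand-in for the actual slot type.

```rust
use std::collections::HashMap;

struct Slot {
    engine_id: String,
}

// Only stop the slot if it still holds the engine id captured at error time;
// a replacement engine that hot-swapped into the slot is left untouched.
fn stop_slot_if_still(
    slots: &mut HashMap<&'static str, Slot>,
    capability: &'static str,
    captured_engine_id: &str,
) -> bool {
    match slots.get(capability) {
        Some(slot) if slot.engine_id == captured_engine_id => {
            slots.remove(capability); // remove + kill under the same lock
            true
        }
        _ => false,
    }
}
```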
---
Nitpick comments:
In `@src-tauri/src/domain/ai/image_service.rs`:
- Around line 60-79: The local workload guard (_local_workload_guard), created
via engine_manager.acquire_local_workload(), is kept alive across session
persistence and disk I/O; narrow its lifetime by scoping or dropping it
immediately after computing images: ensure the guard only surrounds the
provider-specific call (the condition that chooses
process_comfyui_request/process_cloud_image_request/process_local_image_request)
and then explicitly drop(_local_workload_guard) (or wrap the provider branch in
its own block) before any session mutation or disk operations that follow;
reference _local_workload_guard, is_cloud_image_provider,
engine_manager.acquire_local_workload(), and the images variable when making the
change.
In `@src/features/ai/services/AIBridge.test.ts`:
- Around line 287-297: The test in AIBridge.test.ts currently asserts mockInvoke
was not called with 'stop_engine_slot' when selecting a local provider; add a
parallel assertion to also guard against calls to 'stop_engine' so the test
fails if the code shuts down the active engine; specifically, update the test
around aiBridge.startProvider('llamacpp') to include
expect(mockInvoke).not.toHaveBeenCalledWith('stop_engine', expect.any(Object))
in addition to the existing 'stop_engine_slot' assertion.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a4a4f333-a315-412b-a55f-de573285c730
⛔ Files ignored due to path filters (2)
- `src/assets/fonts/Cubic_11.zh-subset.woff2` is excluded by `!**/*.woff2`
- `src/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (102)
- .github/workflows/dependency-review.yml
- src-tauri/resources/locales/en.json
- src-tauri/resources/locales/ru.json
- src-tauri/resources/locales/zh.json
- src-tauri/src/api/ai/mod.rs
- src-tauri/src/api/engine/mod.rs
- src-tauri/src/api/license/mod.rs
- src-tauri/src/api/modules/downloader.rs
- src-tauri/src/api/settings/mod.rs
- src-tauri/src/api/settings/window_settings.rs
- src-tauri/src/api/system/bootstrap.rs
- src-tauri/src/api/system/logs.rs
- src-tauri/src/app/tray.rs
- src-tauri/src/domain/ai/ai_dispatch.rs
- src-tauri/src/domain/ai/ai_service.rs
- src-tauri/src/domain/ai/custom_model_service.rs
- src-tauri/src/domain/ai/image_service.rs
- src-tauri/src/domain/ai/session.rs
- src-tauri/src/domain/ai/streaming.rs
- src-tauri/src/domain/engine/engine_args.rs
- src-tauri/src/domain/engine/engine_runtime.rs
- src-tauri/src/domain/engine/manager.rs
- src-tauri/src/domain/integration_api.rs
- src-tauri/src/domain/license/storage.rs
- src-tauri/src/domain/license/verifier.rs
- src-tauri/src/domain/modules/controller/lifecycle.rs
- src-tauri/src/domain/modules/controller/mod.rs
- src-tauri/src/domain/modules/downloader.rs
- src-tauri/src/domain/modules/downloader_install.rs
- src-tauri/src/domain/modules/downloader_progress.rs
- src-tauri/src/domain/modules/downloader_service.rs
- src-tauri/src/domain/modules/downloader_support.rs
- src-tauri/src/domain/modules/downloader_transfer.rs
- src-tauri/src/domain/modules/github_releases.rs
- src-tauri/src/domain/modules/settings_ui_protocol.rs
- src-tauri/src/domain/system/hardware_probe.rs
- src-tauri/src/infrastructure/config/engine_settings.rs
- src-tauri/src/infrastructure/config/settings.rs
- src-tauri/src/infrastructure/config/translations.rs
- src-tauri/src/infrastructure/crypto/secure_storage.rs
- src-tauri/src/infrastructure/engine/tauri_emitter.rs
- src-tauri/src/infrastructure/filesystem/local_file_service.rs
- src-tauri/src/infrastructure/logging/logger.rs
- src-tauri/src/infrastructure/monitoring/tauri_emitter.rs
- src-tauri/src/infrastructure/persistence/json_store.rs
- src-tauri/src/infrastructure/system/startup.rs
- src-tauri/src/lib.rs
- src/app/CoreComposition.test.ts
- src/app/CoreComposition.ts
- src/app/CoreEntry.ts
- src/app/CoreLifecycleController.test.ts
- src/app/CoreLifecycleController.ts
- src/app/init.ts
- src/features/ai/services/AIBridge.test.ts
- src/features/ai/services/AIBridge.ts
- src/features/ai/services/AIBridgeRuntime.test.ts
- src/features/ai/services/AIBridgeRuntime.ts
- src/features/ai/services/AIChatTransport.test.ts
- src/features/ai/services/AIChatTransport.ts
- src/features/chat/controllers/ChatHistoryController.test.ts
- src/features/chat/controllers/ChatHistoryController.ts
- src/features/chat/controllers/ChatSendController.test.ts
- src/features/chat/controllers/ChatSendController.ts
- src/features/console/services/ConsoleLogService.ts
- src/features/monitoring/services/MonitoringService.test.ts
- src/features/monitoring/services/MonitoringService.ts
- src/features/settings/services/SettingsService.test.ts
- src/features/settings/services/SettingsService.ts
- src/features/settings/ui/ModuleSettingsCustomUiController.test.ts
- src/features/settings/ui/ModuleSettingsCustomUiController.ts
- src/features/settings/ui/ModuleSettingsEngineFieldCatalog.ts
- src/features/settings/ui/ModuleSettingsEngineHtmlBuilder.ts
- src/features/settings/ui/ModuleSettingsEngineRenderFlow.ts
- src/features/settings/ui/ModuleSettingsEngineRenderer.ts
- src/features/settings/ui/SettingsUI.test.ts
- src/infrastructure/tauri/TauriProvider.test.ts
- src/infrastructure/tauri/TauriProvider.ts
- src/package.json
- src/shared/api/invoke.test.ts
- src/shared/api/invoke.ts
- src/shared/services/ModulePlatformService.test.ts
- src/shared/services/ModulePlatformService.ts
- src/shared/services/ModuleService.test.ts
- src/shared/services/ModuleService.ts
- src/shared/services/StateManager.test.ts
- src/shared/services/StateManager.ts
- src/shared/services/WindowService.test.ts
- src/shared/services/WindowService.ts
- src/shared/services/WindowServicePersistence.ts
- src/shared/services/state/UiStateStore.test.ts
- src/shared/services/state/UiStateStore.ts
- src/shared/shell/ui/AppUiCardActionFlow.test.ts
- src/shared/shell/ui/AppUiCardActionFlow.ts
- src/shared/shell/ui/AppUiModuleFlow.test.ts
- src/shared/shell/ui/AppUiModuleFlow.ts
- src/shared/shell/ui/AppUiModuleLifecycle.test.ts
- src/shared/shell/ui/AppUiModuleLifecycle.ts
- src/shared/shell/ui/DownloadSelectionDialog.ts
- src/shared/types/bindings.ts
- src/shared/types/coreTypes.ts
- src/styles/features/ai-module-settings.css
- src/styles/features/module-selection-modal.css
✅ Files skipped from review due to trivial changes (1)
- .github/workflows/dependency-review.yml
🚧 Files skipped from review as they are similar to previous changes (9)
- src-tauri/src/domain/engine/engine_runtime.rs
- src/app/CoreEntry.ts
- src-tauri/src/domain/ai/ai_service.rs
- src-tauri/src/domain/system/hardware_probe.rs
- src/features/ai/services/AIChatTransport.test.ts
- src-tauri/src/domain/modules/github_releases.rs
- src-tauri/src/infrastructure/logging/logger.rs
- src-tauri/resources/locales/ru.json
- src-tauri/src/api/ai/mod.rs
|
@coderabbitai addressed the review follow-ups in commits 369d881 and df2b92e, resolved the review threads, and pushed them to this PR.

Local verification passed:
- npm --prefix src run build
- npm --prefix src run check-size
- npm --prefix src run lint
- npm --prefix src run format:check
- npm --prefix src run test
- cargo fmt --manifest-path src-tauri\Cargo.toml --check
- cargo clippy --manifest-path src-tauri\Cargo.toml --all-targets -- -D warnings
- cargo test --manifest-path src-tauri\Cargo.toml
- git diff --check

Please re-check. |
|
✅ Actions performed
Full review triggered. |
|
@coderabbitai please re-check the latest pushed commit c91d01e as well. All review threads were resolved after df2b92e, and this last commit only removes workspace editor settings. |
|
✅ Actions performed
Full review triggered. |
|
@coderabbitai please run a full re-review of the entire PR, not only the latest commit. Re-check all changed files and previously addressed review areas end-to-end. |
|
✅ Actions performed
Full review triggered. |
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src-tauri/src/domain/integration_api.rs (1)
544-578: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Use the UI provider id when reading provider-scoped text options.
After `ui_provider` is mapped to the backend `provider`, these lookups use `"gpt"`/`"gpt-image"` instead of the selected UI id. That breaks parity with `resolve_model_id()`, which still checks the UI provider first, and it means custom-provider selections can lose their saved thinking-level and web-search settings. Pass `ui_provider` into these helpers first, then fall back to the backend id if needed.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/integration_api.rs` around lines 544 - 578, The selected_thinking_level and selected_web_search lookups are using the backend provider id (provider) instead of the UI provider id (ui_provider), causing provider-scoped options to be read incorrectly; change the calls to pass ui_provider first and let the helper fall back to the backend id when needed—for example replace selected_thinking_level(&context.ui_state_service, &provider).await? and selected_web_search(&context.ui_state_service, &provider).await? with calls that pass ui_provider first (e.g., selected_thinking_level(&context.ui_state_service, &ui_provider, &provider).await? and selected_web_search(&context.ui_state_service, &ui_provider, &provider).await?) or otherwise adjust the helpers so they prefer ui_provider before using provider; keep resolve_model_id usage unchanged.
src-tauri/src/domain/ai/session.rs (1)
147-158: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Don't clear `dirty` after the flush finishes.
`mark_dirty()` can run while `spawn_blocking(...flush_sessions_locked...)` is in flight. The unconditional `self.dirty.store(false, ...)` then drops that newer save request, so later chat mutations can stay memory-only until some future edit happens to mark dirty again. Clear the flag before starting the flush and restore it only on failure, or otherwise preserve concurrent dirty writes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/session.rs` around lines 147 - 158, The code currently clears self.dirty after the background flush, which can drop concurrent mark_dirty() events; change force_save so it clears self.dirty (self.dirty.store(false, Ordering::Relaxed)) before launching the blocking flush (tokio::task::spawn_blocking(... Self::flush_sessions_locked(&save_lock, &sessions) ...)), and if the flush fails (either the join Err or the inner Result Err) restore the dirty flag (set it back to true) so that concurrent writes aren’t lost; keep the calls to ensure_persistence_available(), reuse the same save_lock and sessions clones, and only clear-on-success semantics by re-setting dirty on any failure paths.
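The clear-before-flush protocol described above can be reduced to a small std-only sketch. The real code flushes via spawn_blocking; the synchronous closure here stands in for that, and `SessionStore`/`force_save` are illustrative names, not the project's actual types:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

struct SessionStore {
    dirty: AtomicBool,
}

impl SessionStore {
    // Clear `dirty` BEFORE the flush starts. Any mark_dirty() that lands
    // while the flush is running sets the flag again, so that newer save
    // request survives. Restore the flag only when the flush itself fails.
    fn force_save(&self, flush: impl Fn() -> Result<(), String>) -> Result<(), String> {
        if !self.dirty.swap(false, Ordering::SeqCst) {
            return Ok(()); // nothing pending
        }
        if let Err(error) = flush() {
            self.dirty.store(true, Ordering::SeqCst); // keep the save pending
            return Err(error);
        }
        Ok(())
    }

    fn mark_dirty(&self) {
        self.dirty.store(true, Ordering::SeqCst);
    }
}

fn main() {
    let store = SessionStore { dirty: AtomicBool::new(true) };
    // A failed flush leaves the store dirty, so a retry is still scheduled.
    assert!(store.force_save(|| Err("disk full".into())).is_err());
    assert!(store.dirty.load(Ordering::SeqCst));
    // A successful flush clears it.
    assert!(store.force_save(|| Ok(())).is_ok());
    assert!(!store.dirty.load(Ordering::SeqCst));
    store.mark_dirty();
    println!("ok");
}
```

The key move is `swap(false, ...)` rather than a late `store(false, ...)`: the flag transition and the "is a save needed" check become one atomic step.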
🧹 Nitpick comments (1)
src/features/ai/services/AIChatTransport.ts (1)
273-280: 💤 Low value
Consider a defensive check for `response.images` before accessing `.length`.
If `IImageGenerationResponse` can have `ok: true` with `images` undefined or empty, the direct access at line 276 could throw. Using optional chaining provides safer handling.
🛡️ Proposed defensive improvement
.then((response) => {
-  if (response.ok && response.images.length > 0) {
+  if (response.ok && response.images?.length > 0) {
     return { ok: true, images: response.images };
   }
   return { ok: false, error: response.error ?? 'Failed to generate image' };
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/ai/services/AIChatTransport.ts` around lines 273 - 280, The code in AIChatTransport that handles this._context.tauriProvider.invoke('generate_image', ...) directly accesses response.images.length and can throw if images is undefined; change the check to safely verify images is an array and has elements (e.g., Array.isArray(response.images) && response.images.length > 0 or use optional chaining response.images?.length > 0) and return the same ok/ error shape when images is missing or empty; update the then handler around the invoke call to use this defensive check.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/domain/ai/ai_dispatch.rs`:
- Around line 33-37: The current request path calls sessions.force_save().await?
after merging messages (when request.session_id is Some and request.messages not
empty), which turns a recoverable persistence failure into a request error and
prevents the in-memory fallback from working; change the behavior so force_save
is best-effort: call sessions.force_save().await but do not propagate the
error—handle it locally (e.g., log a warning or record a metric) and continue
processing so merge_request_messages and ChatSessionManager::new()'s in-memory
fallback can be used without failing the request; keep merge_request_messages,
request.session_id, request.messages and the save invocation location the same,
only change error handling for sessions.force_save().
In `@src-tauri/src/domain/ai/session.rs`:
- Around line 462-479: The current fallback removes the existing file before the
second rename, risking data loss; change the fallback in the rename error
handling (the block that references tmp_path, path, error and maps a possible
second_error to AppError::Io) to perform an atomic swap via a backup: first
rename the existing path to a backup name (treat NotFound as OK), then rename
tmp_path to path, and if that second rename fails attempt to restore the
original by renaming the backup back to path; finally remove the backup on
success. Update the error messages returned by AppError::Io to include the
original error and any subsequent failures (second_error or restore errors) so
failures are fully reported.
In `@src-tauri/src/domain/engine/engine_args.rs`:
- Around line 98-108: build_sdcpp_args currently ignores
EngineConfig.compute_mode so SD.cpp is always launched with the same flags;
update the arg construction to honor the compute mode by reading
config.compute_mode (or delegate to sdcpp_extra_args) and appending the
corresponding SD.cpp flags for CPU vs GPU (e.g., the SD.cpp option(s) your
runtime expects for forcing CPU or enabling GPU). Modify build_sdcpp_args (or
sdcpp_extra_args) to branch on config.compute_mode (e.g., ComputeMode::Cpu vs
ComputeMode::Gpu) and push the appropriate device-related arguments into the
args Vec<String> so CPU selection actually changes the command line. Ensure you
reference EngineConfig.compute_mode, build_sdcpp_args, and sdcpp_extra_args when
making the change.
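The requested branch can be sketched with a stand-in enum. Note the `--cpu-only`/`--gpu` flag strings below are placeholders, since the actual sd-server CLI option names must be taken from stable-diffusion.cpp's own help output:

```rust
#[derive(Clone, Copy, PartialEq)]
enum ComputeMode {
    Cpu,
    Gpu,
}

struct EngineConfig {
    compute_mode: ComputeMode,
}

// Append a device-selection flag based on the configured compute mode.
// The flag names are PLACEHOLDERS; substitute the real sd-server options.
fn sdcpp_device_args(config: &EngineConfig) -> Vec<String> {
    match config.compute_mode {
        ComputeMode::Cpu => vec!["--cpu-only".to_string()],
        ComputeMode::Gpu => vec!["--gpu".to_string()],
    }
}

fn main() {
    let cpu = EngineConfig { compute_mode: ComputeMode::Cpu };
    let gpu = EngineConfig { compute_mode: ComputeMode::Gpu };
    assert_eq!(sdcpp_device_args(&cpu), vec!["--cpu-only".to_string()]);
    assert_eq!(sdcpp_device_args(&gpu), vec!["--gpu".to_string()]);
    println!("ok");
}
```

Whatever the real flags turn out to be, the point is that build_sdcpp_args must branch on `config.compute_mode` somewhere so the setting reaches the command line.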
In `@src-tauri/src/domain/engine/manager.rs`:
- Around line 559-565: The manager currently drains/removes slot entries
(self.slots.lock().await.drain()) and drops the RunningEngine before calling
kill_engine(), which discards the engine handle if kill() or wait() fails;
modify the flow in stop(), stop_slot(), and the hot-swap paths to retain the
RunningEngine on failure: call Self::kill_engine(&mut running_engine) or
otherwise attempt kill()/wait() while holding the RunningEngine value, and on
any Err reinsert or keep the RunningEngine back into self.slots (or a
pending/failed map) instead of dropping it so the manager continues to own the
process and port; update kill_engine, stop(), stop_slot(), and the code paths
around lines referencing kill()/wait() so they operate on the retained
RunningEngine and only remove it from self.slots after successful shutdown.
In `@src-tauri/src/domain/modules/controller/lifecycle.rs`:
- Around line 194-201: When handling a failed child.kill().await in the
lifecycle code (the block using controller_registry.remove(&module_id) that
binds (_, mut child)), call child.try_wait() in the error path to reap the
process; i.e., after logging the kill error add let _ = child.try_wait(); to
ensure the child handle is reaped. Apply the same change to the other identical
error path (the later block around lines 215–229) where child.kill().await error
currently drops the child without reaping.
In `@src-tauri/src/domain/modules/downloader_transfer.rs`:
- Around line 133-139: The current exists() check + remove_dir_all can race and
return an error if the directory vanishes; replace the pre-check with a single
call to tokio::fs::remove_dir_all(&git_dir).await and map the error such that if
error.kind() == std::io::ErrorKind::NotFound you treat it as success (ignore
it), otherwise convert to AppError::Io with the same descriptive message;
reference the tokio::fs::remove_dir_all(&git_dir).await invocation and
AppError::Io handling around git_dir.
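The race-free deletion described above can be sketched synchronously; `tokio::fs::remove_dir_all` has the same error semantics as the std version used here, and the helper name is illustrative:

```rust
use std::io::ErrorKind;
use std::path::Path;

// Remove a directory tree, treating "already gone" as success.
// This avoids the exists()-then-remove race: another process may
// delete the directory between the check and the removal.
fn remove_dir_all_idempotent(dir: &Path) -> std::io::Result<()> {
    match std::fs::remove_dir_all(dir) {
        Ok(()) => Ok(()),
        // NotFound means the goal state (no directory) is already reached.
        Err(error) if error.kind() == ErrorKind::NotFound => Ok(()),
        Err(error) => Err(error),
    }
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("remove_dir_all_idempotent_demo");
    std::fs::create_dir_all(dir.join("nested"))?;
    remove_dir_all_idempotent(&dir)?; // removes the tree
    remove_dir_all_idempotent(&dir)?; // second call is a no-op, not an error
    println!("ok");
    Ok(())
}
```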
In `@src-tauri/src/infrastructure/filesystem/local_file_service.rs`:
- Around line 42-71: The fallback path currently does remove_file(path) before
renaming tmp into place which can leave path missing on crash; instead implement
a safe swap: when the first fs::rename(&tmp, path) fails with retryable_replace,
rename path to a temporary backup (e.g., let backup =
path.with_extension("..old") and fs::rename(path, &backup)), then rename &tmp to
path, and only after that remove the backup with fs::remove_file(&backup); on
any failure during the swap attempt, try to restore the backup by renaming
backup back to path and clean up tmp, returning an Io error if restore fails;
update the logic around fs::rename, fs::remove_file, tmp, path, and the new
backup variable to ensure no data-loss window.
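The backup-swap sequence from the prompt above can be sketched with std::fs alone; the helper name and the `.old` backup extension are illustrative, not the project's actual code:

```rust
use std::io::ErrorKind;
use std::path::Path;

// Replace `path` with `tmp` without a window where `path` is missing:
// 1) move the current file aside to `backup` (NotFound is fine),
// 2) move `tmp` into place,
// 3) on failure, try to restore `backup`; on success, delete it.
fn replace_file_with_backup(tmp: &Path, path: &Path) -> std::io::Result<()> {
    let backup = path.with_extension("old");
    match std::fs::rename(path, &backup) {
        Ok(()) => {}
        Err(e) if e.kind() == ErrorKind::NotFound => {} // nothing to back up
        Err(e) => return Err(e),
    }
    match std::fs::rename(tmp, path) {
        Ok(()) => {
            let _ = std::fs::remove_file(&backup); // cleanup is best-effort
            Ok(())
        }
        Err(e) => {
            let _ = std::fs::rename(&backup, path); // best-effort restore
            Err(e)
        }
    }
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("replace_backup_demo");
    std::fs::create_dir_all(&dir)?;
    let (tmp, dst) = (dir.join("data.tmp"), dir.join("data.json"));
    std::fs::write(&dst, b"old")?;
    std::fs::write(&tmp, b"new")?;
    replace_file_with_backup(&tmp, &dst)?;
    assert_eq!(std::fs::read(&dst)?, b"new");
    println!("ok");
    Ok(())
}
```

At every step either the old file or the new file exists under some name, so a crash never leaves the destination with no recoverable copy.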
---
Outside diff comments:
In `@src-tauri/src/domain/ai/session.rs`:
- Around line 147-158: The code currently clears self.dirty after the background
flush, which can drop concurrent mark_dirty() events; change force_save so it
clears self.dirty (self.dirty.store(false, Ordering::Relaxed)) before launching
the blocking flush (tokio::task::spawn_blocking(...
Self::flush_sessions_locked(&save_lock, &sessions) ...)), and if the flush fails
(either the join Err or the inner Result Err) restore the dirty flag (set it
back to true) so that concurrent writes aren’t lost; keep the calls to
ensure_persistence_available(), reuse the same save_lock and sessions clones,
and only clear-on-success semantics by re-setting dirty on any failure paths.
In `@src-tauri/src/domain/integration_api.rs`:
- Around line 544-578: The selected_thinking_level and selected_web_search
lookups are using the backend provider id (provider) instead of the UI provider
id (ui_provider), causing provider-scoped options to be read incorrectly; change
the calls to pass ui_provider first and let the helper fall back to the backend
id when needed—for example replace
selected_thinking_level(&context.ui_state_service, &provider).await? and
selected_web_search(&context.ui_state_service, &provider).await? with calls that
pass ui_provider first (e.g., selected_thinking_level(&context.ui_state_service,
&ui_provider, &provider).await? and
selected_web_search(&context.ui_state_service, &ui_provider, &provider).await?)
or otherwise adjust the helpers so they prefer ui_provider before using
provider; keep resolve_model_id usage unchanged.
---
Nitpick comments:
In `@src/features/ai/services/AIChatTransport.ts`:
- Around line 273-280: The code in AIChatTransport that handles
this._context.tauriProvider.invoke('generate_image', ...) directly accesses
response.images.length and can throw if images is undefined; change the check to
safely verify images is an array and has elements (e.g.,
Array.isArray(response.images) && response.images.length > 0 or use optional
chaining response.images?.length > 0) and return the same ok/ error shape when
images is missing or empty; update the then handler around the invoke call to
use this defensive check.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 874fd0cb-1078-45e3-bcd7-047d3aec49de
📒 Files selected for processing (18)
- src-tauri/src/domain/ai/ai_dispatch.rs
- src-tauri/src/domain/ai/image_service.rs
- src-tauri/src/domain/ai/session.rs
- src-tauri/src/domain/ai/streaming.rs
- src-tauri/src/domain/engine/engine_args.rs
- src-tauri/src/domain/engine/manager.rs
- src-tauri/src/domain/integration_api.rs
- src-tauri/src/domain/modules/controller/lifecycle.rs
- src-tauri/src/domain/modules/downloader_transfer.rs
- src-tauri/src/domain/modules/settings_ui_protocol.rs
- src-tauri/src/infrastructure/config/engine_settings.rs
- src-tauri/src/infrastructure/filesystem/local_file_service.rs
- src/app/CoreComposition.test.ts
- src/app/CoreLifecycleController.ts
- src/features/ai/services/AIBridge.test.ts
- src/features/ai/services/AIChatTransport.ts
- src/features/chat/controllers/ChatSendController.ts
- src/scripts/check-size.js
🚧 Files skipped from review as they are similar to previous changes (3)
- src/app/CoreComposition.test.ts
- src/features/ai/services/AIBridge.test.ts
- src/features/chat/controllers/ChatSendController.ts
|
@coderabbitai please review only this file batch 1/2 for PR #79. GitHub API currently reports 267 changed files, so I am splitting the review under your 150-file limit. Review these 150 files end-to-end and leave findings only for this batch: |
|
@coderabbitai please review only this file batch 2/2 for PR #79. This is the remaining batch under your 150-file limit. Review these 117 files end-to-end and leave findings only for this batch: |
|
Review rate limit: 0/1 reviews remaining, refill in 32 minutes and 38 seconds. |
|
Tip: For best results, initiate chat on the files or code changes.
Could you try one of the following?
Let me know how you'd like to proceed! |
|
Review rate limit: 0/1 reviews remaining, refill in 32 minutes and 15 seconds. |
|
Tip: For best results, initiate chat on the files or code changes.
Could you try one of the following?
Let me know how you'd like to proceed! |
Actionable comments posted: 4
♻️ Duplicate comments (6)
src-tauri/src/domain/modules/controller/lifecycle.rs (1)
194-200: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Reap the child when `kill().await` fails.
These branches log a failed kill and then drop the `Child` handle immediately. If the process already exited, `try_wait()` is still needed to reap it; otherwise the error path can leak a zombie on Unix.
Representative fix
  if let Err(kill_error) = child.kill().await {
      tracing::warn!(
          module_id = %module_id,
          "Failed to kill module after status polling failed: {kill_error}"
      );
+     let _ = child.try_wait();
  }
In tokio::process::Child, if `kill().await` returns an error because the process may already have exited, do we still need `try_wait()` or `wait()` to reap the child handle?
Also applies to: 215-229, 314-319, 425-435
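The reaping rule is not tokio-specific; a std::process sketch of the same kill-then-reap pattern (assuming a Unix `sleep` binary is available as the demo child):

```rust
use std::process::{Child, Command, ExitStatus};

// Even if kill() errors (e.g., the process already exited on its own),
// a wait-style call is still required: the kernel keeps the exit status
// until the parent reaps it, and an unreaped child stays a zombie for
// the parent's lifetime.
fn kill_and_reap(mut child: Child) -> std::io::Result<ExitStatus> {
    if let Err(kill_error) = child.kill() {
        eprintln!("kill failed (process may have already exited): {kill_error}");
    }
    child.wait() // reaps the process in every case
}

fn main() -> std::io::Result<()> {
    // `sleep 30` stands in for a long-running module process.
    let child = Command::new("sleep").arg("30").spawn()?;
    let status = kill_and_reap(child)?;
    assert!(!status.success()); // killed by signal, not a clean exit
    println!("reaped: {status}");
    Ok(())
}
```

In the async code the shape is the same: after a failed `kill().await`, a `try_wait()` (or `wait().await`) on the same handle collects the status before the `Child` is dropped.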
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/modules/controller/lifecycle.rs` around lines 194 - 200, When kill().await on the removed Child (from controller_registry.remove(&module_id)) fails, ensure you still attempt to reap the process instead of dropping the Child immediately: call child.try_wait().await and if it returns None call child.wait().await to collect the exit status (or at minimum try_wait and if Err/None fallback to wait) before dropping the Child; apply the same pattern for the other branches handling child.kill().await failures (the locations around the existing controller_registry.remove(&module_id) usages and the Child variable named child).
src-tauri/src/domain/engine/manager.rs (1)
313-325: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift
Stop/hot-swap can still forget a live engine on shutdown failure.
These paths remove `RunningEngine` from `slots` before shutdown is confirmed. If `kill_engine()` fails, the handle has already been dropped, so the manager can treat the slot as free while the old process may still own the port/resources. `stop_slot_after_error()` already uses the safer retain-and-reinsert pattern.
Also applies to: 557-593
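The retain-and-reinsert idea can be shown with a plain HashMap and an injected kill function; every name here is a stand-in for the manager's real types:

```rust
use std::collections::HashMap;

struct RunningEngine {
    port: u16,
}

// Shut down every slot, but keep ownership of any engine whose kill
// fails so the manager never treats a still-live process's slot as free.
fn stop_all(
    slots: &mut HashMap<String, RunningEngine>,
    kill: impl Fn(&RunningEngine) -> Result<(), String>,
) -> Vec<String> {
    let mut errors = Vec::new();
    let keys: Vec<String> = slots.keys().cloned().collect();
    for key in keys {
        // Remove the entry only long enough to attempt shutdown...
        if let Some(engine) = slots.remove(&key) {
            if let Err(error) = kill(&engine) {
                // ...and put it back on failure instead of dropping it.
                errors.push(format!("{key}: {error}"));
                slots.insert(key, engine);
            }
        }
    }
    errors
}

fn main() {
    let mut slots = HashMap::new();
    slots.insert("chat".to_string(), RunningEngine { port: 8080 });
    slots.insert("image".to_string(), RunningEngine { port: 8081 });
    // Simulate a kill that fails for the image engine only.
    let errors = stop_all(&mut slots, |e| {
        if e.port == 8081 { Err("still running".into()) } else { Ok(()) }
    });
    assert_eq!(errors.len(), 1);
    assert!(slots.contains_key("image")); // failed engine is still owned
    assert!(!slots.contains_key("chat"));
    println!("ok");
}
```

The invariant is that a slot disappears from the map only after its process is confirmed dead, so port ownership always matches what the map reports.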
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/manager.rs` around lines 313 - 325, The current hot-swap drains self.slots (removing RunningEngine handles) before calling Self::kill_engine, so if kill_engine fails the manager thinks the slot is free while the process may still hold resources; change the logic to use the same retain-and-reinsert pattern as stop_slot_after_error: do not drain/remove entries up-front—take a snapshot (e.g., clone or collect a list of (Capability, RunningEngine) without altering self.slots), attempt Self::kill_engine(old) for each snapshot entry, and only remove the slot from self.slots when kill succeeds; if kill_engine fails, reinsert or retain the original RunningEngine in self.slots so the manager accurately reflects the live process. Ensure you update both the hot-swap block around Self::kill_engine and the similar block at the second location (the one referenced at lines 557–593) to follow this pattern.
src-tauri/src/domain/ai/ai_dispatch.rs (1)
33-37: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
Session persistence errors still fail the chat request path.
`ChatSessionManager` now explicitly supports an in-memory fallback when disk persistence is unavailable, but both of these `force_save().await?` calls convert that degraded mode back into request failures. This can reject a chat before dispatch, or after a successful model reply, purely because history could not be written.
Best-effort pattern
- sessions.force_save().await?;
+ if let Err(error) = sessions.force_save().await {
+     tracing::warn!("Skipping chat-history save during request preparation: {error}");
+ }
  ...
- sessions.force_save().await?;
+ if let Err(error) = sessions.force_save().await {
+     tracing::warn!("Skipping chat-history save after successful response: {error}");
+ }
Also applies to: 95-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/ai_dispatch.rs` around lines 33 - 37, The session persistence calls currently use force_save().await? which turns persistence failures into request failures; instead treat persistence as best-effort: after merge_request_messages (in the branch checking request.session_id and request.messages) and the similar block around lines 95-102, call sessions.force_save().await but do not propagate errors — catch the Result, log/debug/warn the error (including the error details) and continue processing (i.e., return Ok path) so an in-memory fallback doesn’t turn into a chat request failure; update the uses in ChatSessionManager call sites (sessions.force_save()) accordingly and ensure logs include the session id/context and the error.
src-tauri/src/domain/engine/engine_args.rs (1)
98-108: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win
`compute_mode` still has no effect for SD.cpp.
`build_sdcpp_args()` never reads `config.compute_mode`, so Stable Diffusion launches still ignore the new CPU/GPU setting and drift from the shared engine-config contract.
What CLI flags does the current stable-diffusion.cpp `sd-server` use to force CPU-only execution or otherwise control GPU usage?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/engine/engine_args.rs` around lines 98 - 108, build_sdcpp_args currently ignores EngineConfig.compute_mode so SD.cpp is never told to use CPU/GPU; update build_sdcpp_args to read config.compute_mode (from EngineConfig) and append the appropriate sd-server CLI flag(s) (e.g., push the CPU-only flag when compute_mode == ComputeMode::Cpu, or the GPU flag/omit CPU flag for GPU modes) before returning args; use the same mapping logic as in sdcpp_extra_args or add a small match inside build_sdcpp_args to push the correct flag(s) so the launched sd-server respects the configured compute_mode.
src-tauri/src/infrastructure/filesystem/local_file_service.rs (1)
42-71: ⚠️ Potential issue | 🔴 Critical | 🏗️ Heavy lift
The replace fallback still breaks atomicity on failure.
After the first `rename()` fails, this branch deletes `path` before the second `rename()` succeeds. A crash or power loss in that gap leaves no destination file at all, so `write_atomic()` is no longer atomic on its error path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/infrastructure/filesystem/local_file_service.rs` around lines 42 - 71, The current fallback deletes `path` before the second `fs::rename(&tmp, path)`, which breaks atomicity; instead, implement a safe replace-by-rename sequence: when `fs::rename(&tmp, path)` returns a retryable error (see `first_error`/`retryable_replace`), rename the existing `path` to a backup name (e.g., `path.with_extension("..bak")`) using `fs::rename(path, &backup)` without removing `path`, then attempt `fs::rename(&tmp, path)`; if that second rename fails (the `second_error` case), rename the backup back to `path` to restore the original file, and always remove `tmp` and the backup as cleanup; update error messages referencing `first_error`, `remove_error`, and `second_error` accordingly and remove the code path that unconditionally deletes `path` before the second rename.
src-tauri/src/domain/ai/session.rs (1)
462-479: ⚠️ Potential issue | 🔴 Critical | 🏗️ Heavy lift
The chat-history fallback still has a data-loss window.
If the first rename fails, this branch removes the last good `history.json` before the replacement rename succeeds. A crash in that gap loses both the old history and the new snapshot.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/session.rs` around lines 462 - 479, The current fallback removes the original file before the second rename, creating a data-loss window; instead, when the initial rename of tmp_path to path fails, move the existing path to a backup (e.g., path.with_extension(".bak" or timestamp) using std::fs::rename) before attempting to rename tmp_path to path, then on success remove the backup, and on failure attempt to restore the backup by renaming it back to path and return an Io error; reference tmp_path, path, error, remove_error and second_error to locate the rename/remove logic and ensure all renames are mapped to recoverable steps so a crash never leaves both old and new history missing.
🧹 Nitpick comments (6)
src-tauri/src/domain/ai/image_service.rs (1)
97-97: 💤 Low value
Consider logging merge failures for observability consistency.
The `let _ =` pattern silently discards any error from `merge_request_messages`, while `force_save()` logs on failure. For diagnostic consistency, consider logging here as well.
💡 Optional: Add warning log for merge failure
- let _ = sessions.merge_request_messages(session_id, &[user_message]);
+ if let Err(error) = sessions.merge_request_messages(session_id, &[user_message]) {
+     tracing::warn!(session_id, "Failed to merge image request message: {error}");
+ }
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/ai/image_service.rs` at line 97, The call to sessions.merge_request_messages(session_id, &[user_message]) currently swallows errors with `let _ =`—replace this with an explicit check that logs failures for observability (e.g., use an if let Err(e) / match on the Result from merge_request_messages and call the same logger used by force_save to emit a warning or error that includes the session_id and error details), referencing the sessions.merge_request_messages call and the surrounding session_id and user_message variables so failures are not silently ignored.
src/features/chat/controllers/ChatSendController.ts (1)
317-330: 💤 Low value
Hardcoded prompt engineering text may cause localization and maintenance issues.
The image prompt rewrite instructions are embedded as a hardcoded English string. Consider extracting this to a constant or configuration to facilitate future updates and potential localization of the instruction set.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/controllers/ChatSendController.ts` around lines 317 - 330, The hardcoded English instruction block inside _buildImagePromptRewriteRequest should be extracted into a reusable constant or configuration (e.g., IMAGE_PROMPT_REWRITE_TEMPLATE or a localized resource) so it can be maintained and localized easily; replace the inline array with a call that injects the user prompt into that constant/template (preserving the exact rules, formatting and the `${prompt}` placeholder semantics), and update any callers of _buildImagePromptRewriteRequest only if the signature changes, ensuring tests and lints still pass.
src/features/chat/chat.ts (1)
560-570: 💤 Low value
Consider using `AbortController` or a WeakRef for the close listener cleanup.
The `setTimeout(..., 0)` trick to defer adding the mousedown listener works but creates a subtle timing dependency. If the menu is removed synchronously before the timeout fires, the listener won't be added. This is likely fine in practice, but a more robust pattern would be to track the listener cleanup in the component state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/chat.ts` around lines 560 - 570, The current deferred addEventListener via setTimeout causes a timing dependency; replace it by creating an AbortController (or similar disposable token) in the same scope as the close handler, call document.addEventListener('mousedown', close, { capture: true, signal: controller.signal }) immediately (remove the setTimeout), and when you remove the menu in the close handler invoke controller.abort() (or call controller.abort() from any synchronous path that removes the menu) so the listener is always cleaned up; reference the existing close function, menu and button variables, and replace the setTimeout(...) call with the AbortController-based registration and abort on removal.
src-tauri/src/domain/modules/github_release_selection.rs (2)
111-115: 💤 Low value
Redundant conditional - both branches return `None`.
The `if has_cuda_main` check doesn't affect the outcome since both the `if` body and the following line return `None`. This could be simplified to a single `return None;`.
♻️ Suggested simplification
- if has_cuda_main {
-     return None;
- }
-
- return None;
+ return None;
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/modules/github_release_selection.rs` around lines 111 - 115, The check against has_cuda_main is redundant because both the if branch and subsequent code return None; remove the conditional and replace the whole block with a single return None; to simplify logic—locate the block referencing has_cuda_main in github_release_selection (the branch that currently does `if has_cuda_main { return None; } return None;`) and collapse it to a single unconditional return None;.
499-508: 💤 Low value
Consider adding a comment explaining the < 100 heuristic.
The dual-interpretation logic (driver versions vs CUDA versions) is clever but non-obvious. Since NVIDIA driver majors are always >= 100 (525, 550, 580) and CUDA majors are small integers (12, 13), the threshold works correctly. A brief inline comment would help future maintainers understand this design choice.
📝 Suggested documentation
  fn supports_cuda_track(&self, track: CudaTrack) -> bool {
      match self.cuda_driver_major {
+         // Values < 100 are actual CUDA versions (e.g., 12, 13);
+         // values >= 100 are NVIDIA driver versions (e.g., 525, 580).
          Some(driver_major) if driver_major < 100 => match track {
              CudaTrack::Cuda12 => driver_major >= 12,
              CudaTrack::Cuda13 => driver_major >= 13,
          },
          Some(driver_major) => driver_major >= track.min_driver_major(),
+         // Unknown driver: conservatively assume only CUDA 12 is supported.
          None => track == CudaTrack::Cuda12,
      }
  }
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/modules/github_release_selection.rs` around lines 499 - 508, Add an inline comment in the supports_cuda_track method explaining the "< 100" heuristic: clarify that cuda_driver_major can represent either a small CUDA major (e.g., 12, 13) or an NVIDIA driver major encoded as large numbers (e.g., 525, 550, 580), so the code treats values < 100 as CUDA major numbers and values >= 100 as driver majors; reference the CudaTrack enum and cuda_driver_major field and note why the threshold 100 safely distinguishes the two interpretations.
src-tauri/src/domain/modules/github_releases.rs (1)
460-466: 💤 Low value
Minor inefficiency: double lowercase conversion.
`is_cpu_asset_name` passes the already-lowercase `lower` to `is_gpu_asset_name`, which converts to lowercase again. Consider extracting a shared helper or passing the lowercase string directly.
♻️ Suggested optimization
+fn is_gpu_asset_name_lower(lower: &str) -> bool {
+    lower.contains("cuda")
+        || lower.contains("cu12")
+        || lower.contains("cu13")
+        || lower.contains("vulkan")
+        || lower.contains("hip")
+        || lower.contains("rocm")
+        || lower.contains("sycl")
+        || lower.contains("openvino")
+        || lower.contains("nvidia")
+        || lower.contains("amd")
+}
+
 fn is_gpu_asset_name(name: &str) -> bool {
-    let lower = name.to_ascii_lowercase();
-    lower.contains("cuda")
-        || lower.contains("cu12")
-        // ...
+    is_gpu_asset_name_lower(&name.to_ascii_lowercase())
 }

 fn is_cpu_asset_name(name: &str) -> bool {
     let lower = name.to_ascii_lowercase();
     lower.contains("cpu")
         || lower.contains("avx")
         || lower.contains("noavx")
-        || !is_gpu_asset_name(&lower)
+        || !is_gpu_asset_name_lower(&lower)
 }
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/domain/modules/github_releases.rs` around lines 460 - 466, The function is_cpu_asset_name currently lowercases the input into lower and then calls is_gpu_asset_name(&lower), which causes is_gpu_asset_name to lowercase again; fix this by adding or modifying an auxiliary function that accepts a pre-lowercased &str (e.g., is_gpu_asset_name_lower or change is_gpu_asset_name signature to take a flag/assume lowercase) and update is_cpu_asset_name to pass the already-lowercased string to that lowercase-free GPU check (ensure callers of the GPU check are adjusted accordingly so no double to_ascii_lowercase occurs).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/domain/ai/streaming.rs`:
- Around line 584-609: The extraction chain for streaming text (the `content`
let-binding) misses the Ollama-style nested `message.content`, so add a fallback
branch that checks `json.get("message").and_then(|m|
m.get("content")).and_then(provider_response::extract_stream_text)` in the same
or_else chain after the existing `json.get("response")`/`json.get("token")`
fallbacks; use the same `provider_response::extract_stream_text` helper and
preserve the order of fallbacks, and add a regression test that supplies a
streaming payload with non-empty `message.content` to verify `reply.text` /
emitted `ChatChunk`s are populated.
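The fallback order described above can be sketched as a small TypeScript helper; the names here are illustrative and only mirror the Rust `provider_response::extract_stream_text` chain, not the actual implementation.

```typescript
type Json = Record<string, unknown>;

// Returns the value only when it is a non-empty string (mirrors the
// behavior attributed to extract_stream_text above).
function extractStreamText(value: unknown): string | undefined {
  return typeof value === "string" && value.length > 0 ? value : undefined;
}

// Fallback chain: OpenAI-style delta content, then `response`/`token`,
// and finally the Ollama-style nested `message.content`.
function extractChunkText(json: Json): string | undefined {
  const choices = json["choices"] as Json[] | undefined;
  const delta = choices?.[0]?.["delta"] as Json | undefined;
  const message = json["message"] as Json | undefined;
  return (
    extractStreamText(delta?.["content"]) ??
    extractStreamText(json["response"]) ??
    extractStreamText(json["token"]) ??
    extractStreamText(message?.["content"]) // the missing Ollama fallback
  );
}
```

The key point is that the new branch sits last, so it cannot shadow the existing provider formats.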
In `@src-tauri/src/domain/engine/manager.rs`:
- Around line 695-705: The canonicalization currently only maps the exact
"stable-diffusion" string; broaden it by normalizing input first (lowercase,
replace '.' and '_' with '-', optionally strip a ".cpp" suffix or treat ".cpp"
as part of normalization, and collapse repeated hyphens) and then match variants
(e.g., strings that equal or start_with "stable-diffusion" after normalization)
to return "sdcpp". Update canonical_engine_id to perform the normalization and
mapping, and keep canonical_engine_log_id delegating to canonical_engine_id so
all lookups/slot matching use the same normalized form.
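The normalization steps listed above can be sketched as a hypothetical helper (the real logic lives in the Rust `canonical_engine_id`; the `.cpp`-stripping rule is one of the optional choices the comment mentions):

```typescript
// Normalize a raw engine id, then map stable-diffusion variants to "sdcpp".
function canonicalEngineId(raw: string): string {
  let id = raw.toLowerCase();
  if (id.endsWith(".cpp")) id = id.slice(0, -".cpp".length); // optional suffix strip
  id = id.replace(/[._]/g, "-").replace(/-+/g, "-"); // map '.'/'_' to '-', collapse runs
  if (id === "stable-diffusion" || id.startsWith("stable-diffusion-")) {
    return "sdcpp";
  }
  return id;
}
```

With this shape, `"Stable_Diffusion.cpp"`, `"stable.diffusion"`, and `"stable-diffusion"` all resolve to the same slot key.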
In `@src-tauri/src/domain/integration_api.rs`:
- Around line 571-578: The lookup is using the backend provider id (variable
provider) which was already remapped from custom keys, causing UI state keys to
miss custom-provider entries; update the two calls to selected_thinking_level
and selected_web_search to pass the UI provider key instead of the backend
id—e.g., use the ui_provider value carried by provider (provider.ui_provider or
the accessor used elsewhere like resolve_model_id's ui_provider) when calling
selected_thinking_level(&context.ui_state_service, &ui_provider).await? and
selected_web_search(&context.ui_state_service, &ui_provider).await? so the
UI-specific keys are honored.
- Around line 533-560: The deserialization rejects payloads that omit prompt
because IntegrationTextRequest.prompt is a required String; make prompt optional
by changing IntegrationTextRequest.prompt to Option<String> (so parse_json_body
will accept messages-only requests), and update usages in this handler: replace
direct payload.prompt access with payload.prompt.as_deref().unwrap_or("") (or
equivalent) when checking trim() and when passing prompt onward; keep the
existing validation that requires either messages or a non-empty prompt and
ensure any downstream calls that expect a prompt handle Option accordingly.
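The Option-based handling can be sketched in TypeScript terms; the field and function names here are illustrative stand-ins for the Rust `IntegrationTextRequest`, not its actual definition.

```typescript
// prompt was required; making it optional lets messages-only payloads parse.
interface IntegrationTextRequest {
  prompt?: string;
  messages?: { role: string; content: string }[];
}

// Mirrors payload.prompt.as_deref().unwrap_or("") plus the validation rule:
// either messages or a non-empty prompt must be present.
function validateTextRequest(req: IntegrationTextRequest): string | null {
  const prompt = (req.prompt ?? "").trim();
  if ((req.messages?.length ?? 0) === 0 && prompt.length === 0) {
    return "either messages or a non-empty prompt is required";
  }
  return null; // valid
}
```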
---
Duplicate comments:
In `@src-tauri/src/domain/ai/ai_dispatch.rs`:
- Around line 33-37: The session persistence calls currently use
force_save().await? which turns persistence failures into request failures;
instead treat persistence as best-effort: after merge_request_messages (in the
branch checking request.session_id and request.messages) and the similar block
around lines 95-102, call sessions.force_save().await but do not propagate
errors — catch the Result, log/debug/warn the error (including the error
details) and continue processing (i.e., return Ok path) so an in-memory fallback
doesn’t turn into a chat request failure; update the uses in ChatSessionManager
call sites (sessions.force_save()) accordingly and ensure logs include the
session id/context and the error.
In `@src-tauri/src/domain/ai/session.rs`:
- Around line 462-479: The current fallback removes the original file before the
second rename, creating a data-loss window; instead, when the initial rename of
tmp_path to path fails, move the existing path to a backup (e.g.,
path.with_extension(".bak" or timestamp) using std::fs::rename) before
attempting to rename tmp_path to path, then on success remove the backup, and on
failure attempt to restore the backup by renaming it back to path and return an
Io error; reference tmp_path, path, error, remove_error and second_error to
locate the rename/remove logic and ensure all renames are mapped to recoverable
steps so a crash never leaves both old and new history missing.
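The backup-rename sequence described above can be sketched with Node's `fs` (a hypothetical TypeScript analogue of the Rust `std::fs::rename` flow, not the launcher's code): the destination is never deleted before the new content is in place, so a crash loses at most the backup cleanup.

```typescript
import * as fs from "node:fs";

function replaceFileAtomically(tmpPath: string, path: string): void {
  try {
    fs.renameSync(tmpPath, path); // fast path: plain atomic rename
    return;
  } catch {
    // fall through to the backup sequence
  }
  const backup = path + ".bak";
  fs.renameSync(path, backup); // park the old file instead of removing it
  try {
    fs.renameSync(tmpPath, path); // move the new content into place
  } catch (err) {
    fs.renameSync(backup, path); // restore the original on failure
    throw err;
  }
  fs.rmSync(backup, { force: true }); // success: drop the backup
}
```

At every step exactly one complete copy of the history exists under some name, which is the invariant the review comment asks for.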
In `@src-tauri/src/domain/engine/engine_args.rs`:
- Around line 98-108: build_sdcpp_args currently ignores
EngineConfig.compute_mode so SD.cpp is never told to use CPU/GPU; update
build_sdcpp_args to read config.compute_mode (from EngineConfig) and append the
appropriate sd-server CLI flag(s) (e.g., push the CPU-only flag when
compute_mode == ComputeMode::Cpu, or the GPU flag/omit CPU flag for GPU modes)
before returning args; use the same mapping logic as in sdcpp_extra_args or add
a small match inside build_sdcpp_args to push the correct flag(s) so the
launched sd-server respects the configured compute_mode.
In `@src-tauri/src/domain/engine/manager.rs`:
- Around line 313-325: The current hot-swap drains self.slots (removing
RunningEngine handles) before calling Self::kill_engine, so if kill_engine fails
the manager thinks the slot is free while the process may still hold resources;
change the logic to use the same retain-and-reinsert pattern as
stop_slot_after_error: do not drain/remove entries up-front—take a snapshot
(e.g., clone or collect a list of (Capability, RunningEngine) without altering
self.slots), attempt Self::kill_engine(old) for each snapshot entry, and only
remove the slot from self.slots when kill succeeds; if kill_engine fails,
reinsert or retain the original RunningEngine in self.slots so the manager
accurately reflects the live process. Ensure you update both the hot-swap block
around Self::kill_engine and the similar block at the second location (the one
referenced at lines 557–593) to follow this pattern.
In `@src-tauri/src/domain/modules/controller/lifecycle.rs`:
- Around line 194-200: When kill().await on the removed Child (from
controller_registry.remove(&module_id)) fails, ensure you still attempt to reap
the process instead of dropping the Child immediately: call
child.try_wait().await and if it returns None call child.wait().await to collect
the exit status (or at minimum try_wait and if Err/None fallback to wait) before
dropping the Child; apply the same pattern for the other branches handling
child.kill().await failures (the locations around the existing
controller_registry.remove(&module_id) usages and the Child variable named
child).
In `@src-tauri/src/infrastructure/filesystem/local_file_service.rs`:
- Around line 42-71: The current fallback deletes `path` before the second
`fs::rename(&tmp, path)`, which breaks atomicity; instead, implement a safe
replace-by-rename sequence: when `fs::rename(&tmp, path)` returns a retryable
error (see `first_error`/`retryable_replace`), rename the existing `path` to a
backup name (e.g., `path.with_extension("..bak")`) using `fs::rename(path,
&backup)` without removing `path`, then attempt `fs::rename(&tmp, path)`; if
that second rename fails (the `second_error` case), rename the backup back to
`path` to restore the original file, and always remove `tmp` and the backup as
cleanup; update error messages referencing `first_error`, `remove_error`, and
`second_error` accordingly and remove the code path that unconditionally deletes
`path` before the second rename.
---
Nitpick comments:
In `@src-tauri/src/domain/ai/image_service.rs`:
- Line 97: The call to sessions.merge_request_messages(session_id,
&[user_message]) currently swallows errors with `let _ =`—replace this with an
explicit check that logs failures for observability (e.g., use an if let Err(e)
/ match on the Result from merge_request_messages and call the same logger used
by force_save to emit a warning or error that includes the session_id and error
details), referencing the sessions.merge_request_messages call and the
surrounding session_id and user_message variables so failures are not silently
ignored.
In `@src-tauri/src/domain/modules/github_release_selection.rs`:
- Around line 111-115: The check against has_cuda_main is redundant because both
the if branch and subsequent code return None; remove the conditional and
replace the whole block with a single return None; to simplify logic—locate the
block referencing has_cuda_main in github_release_selection (the branch that
currently does `if has_cuda_main { return None; } return None;`) and collapse it
to a single unconditional return None;.
- Around line 499-508: Add an inline comment in the supports_cuda_track method
explaining the "< 100" heuristic: clarify that cuda_driver_major can represent
either a small CUDA major (e.g., 12, 13) or an NVIDIA driver major encoded as
large numbers (e.g., 525, 550, 580), so the code treats values < 100 as CUDA
major numbers and values >= 100 as driver majors; reference the CudaTrack enum
and cuda_driver_major field and note why the threshold 100 safely distinguishes
the two interpretations.
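The heuristic being documented can be stated as a one-line helper (an illustrative sketch, not the Rust `supports_cuda_track` code): CUDA toolkit majors stay far below 100, while NVIDIA driver majors start in the hundreds, so the threshold cleanly separates the two encodings.

```typescript
// cuda_driver_major may hold either a CUDA toolkit major (12, 13, ...)
// or an NVIDIA driver major (525, 550, 580, ...).
function interpretCudaDriverMajor(value: number): "cuda-major" | "driver-major" {
  return value < 100 ? "cuda-major" : "driver-major";
}
```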
In `@src-tauri/src/domain/modules/github_releases.rs`:
- Around line 460-466: The function is_cpu_asset_name currently lowercases the
input into lower and then calls is_gpu_asset_name(&lower), which causes
is_gpu_asset_name to lowercase again; fix this by adding or modifying an
auxiliary function that accepts a pre-lowercased &str (e.g.,
is_gpu_asset_name_lower or change is_gpu_asset_name signature to take a
flag/assume lowercase) and update is_cpu_asset_name to pass the
already-lowercased string to that lowercase-free GPU check (ensure callers of
the GPU check are adjusted accordingly so no double to_ascii_lowercase occurs).
In `@src/features/chat/chat.ts`:
- Around line 560-570: The current deferred addEventListener via setTimeout
causes a timing dependency; replace it by creating an AbortController (or
similar disposable token) in the same scope as the close handler, call
document.addEventListener('mousedown', close, { capture: true, signal:
controller.signal }) immediately (remove the setTimeout), and when you remove
the menu in the close handler invoke controller.abort() (or call
controller.abort() from any synchronous path that removes the menu) so the
listener is always cleaned up; reference the existing close function, menu and
button variables, and replace the setTimeout(...) call with the
AbortController-based registration and abort on removal.
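The AbortController pattern suggested above can be sketched with a plain `EventTarget` so it runs outside the DOM (in the real `chat.ts` the target would be `document`, the event `mousedown` with `capture: true`; the helper name here is hypothetical):

```typescript
function attachDismissListener(target: EventTarget, close: () => void): () => void {
  const controller = new AbortController();
  const handler = () => {
    close();
    controller.abort(); // closing the menu also removes the listener
  };
  // Registered immediately (no setTimeout race); aborting the signal removes
  // the listener without a matching removeEventListener call.
  target.addEventListener("mousedown", handler, { signal: controller.signal });
  return () => controller.abort(); // for synchronous menu-removal paths
}
```

Any code path that tears down the menu can call the returned disposer, so the listener can never outlive the menu.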
In `@src/features/chat/controllers/ChatSendController.ts`:
- Around line 317-330: The hardcoded English instruction block inside
_buildImagePromptRewriteRequest should be extracted into a reusable constant or
configuration (e.g., IMAGE_PROMPT_REWRITE_TEMPLATE or a localized resource) so
it can be maintained and localized easily; replace the inline array with a call
that injects the user prompt into that constant/template (preserving the exact
rules, formatting and the `${prompt}` placeholder semantics), and update any
callers of _buildImagePromptRewriteRequest only if the signature changes,
ensuring tests and lints still pass.
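The extraction can be sketched as follows; the rule text shown is hypothetical placeholder copy, since the actual instruction block lives inside `_buildImagePromptRewriteRequest`:

```typescript
// Template extracted to a single maintainable (and localizable) constant;
// the user prompt is injected where the `${prompt}` placeholder sits.
const IMAGE_PROMPT_REWRITE_TEMPLATE = (prompt: string): string[] => [
  "Rewrite the following request as a concise image-generation prompt.",
  "Keep the subject and style; do not add commentary.",
  `Request: ${prompt}`,
];

function buildImagePromptRewriteRequest(prompt: string): string {
  return IMAGE_PROMPT_REWRITE_TEMPLATE(prompt).join("\n");
}
```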
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8f40f3be-42d2-4e69-bdd8-b007dbef003b
📒 Files selected for processing (38)
- src-tauri/src/domain/ai/ai_dispatch.rs
- src-tauri/src/domain/ai/image_service.rs
- src-tauri/src/domain/ai/session.rs
- src-tauri/src/domain/ai/streaming.rs
- src-tauri/src/domain/engine/engine_args.rs
- src-tauri/src/domain/engine/manager.rs
- src-tauri/src/domain/integration_api.rs
- src-tauri/src/domain/modules/controller/lifecycle.rs
- src-tauri/src/domain/modules/downloader.rs
- src-tauri/src/domain/modules/downloader_transfer.rs
- src-tauri/src/domain/modules/github_release_selection.rs
- src-tauri/src/domain/modules/github_releases.rs
- src-tauri/src/domain/modules/settings_ui_protocol.rs
- src-tauri/src/infrastructure/config/engine_settings.rs
- src-tauri/src/infrastructure/filesystem/local_file_service.rs
- src/app/CoreAssembly.ts
- src/app/CoreChatFactory.ts
- src/app/CoreComposition.test.ts
- src/app/CoreComposition.ts
- src/app/CoreLifecycleController.test.ts
- src/app/CoreLifecycleController.ts
- src/features/ai/services/AIBridge.test.ts
- src/features/ai/services/AIChatTransport.ts
- src/features/chat/chat.test.ts
- src/features/chat/chat.ts
- src/features/chat/controllers/ChatSendController.ts
- src/features/chat/services/ChatControllerFactory.ts
- src/features/chat/ui/ChatImageGenerationMessage.ts
- src/features/chat/ui/ChatUI.test.ts
- src/features/chat/ui/ChatUI.ts
- src/features/chat/ui/ChatViewportController.ts
- src/features/downloads/ui/DownloadUI.test.ts
- src/features/downloads/ui/DownloadUI.ts
- src/scripts/check-size.js
- src/shared/services/ModuleService.ts
- src/shared/shell/GlobalTextContextMenu.test.ts
- src/shared/shell/GlobalTextContextMenu.ts
- src/shared/shell/ui/DownloadSelectionDialog.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- src/app/CoreComposition.test.ts
- src/app/CoreChatFactory.ts
- src-tauri/src/domain/modules/downloader.rs
- src-tauri/src/domain/modules/downloader_transfer.rs
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src-tauri/src/api/ai/mod.rs (1)
703-704: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win — Include active-job fallback when preview file is missing
Line 703 only checks `has_status`. If an `"sdcpp"` job is active but has no status snapshot and the preview file is temporarily absent, this still returns `None` instead of an in-progress placeholder.
💡 Suggested patch

```diff
-    } else if has_status {
+    } else if has_status || has_active_job {
         preview = Some(ImageGenerationPreview {
             data_url: String::new(),
             updated_at_ms: log_progress
                 .as_ref()
                 .map_or_else(current_time_ms_f64, |snapshot| snapshot.updated_at_ms),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/api/ai/mod.rs` around lines 703 - 704, The current branch that sets preview when has_status is true should also fallback to an in-progress placeholder when the relevant preview file is missing but there is an active "sdcpp" job; update the logic around has_status/preview in src-tauri/src/api/ai/mod.rs to detect an active job (e.g., check the active_job or job.type == "sdcpp" / job.is_active) and, if no status snapshot/preview file exists, set preview = Some(ImageGenerationPreview { ... }) with an appropriate in-progress state/placeholder values instead of leaving preview as None; ensure you use the existing ImageGenerationPreview fields and keep the has_status path intact while adding this active-job fallback.
♻️ Duplicate comments (2)
src/features/chat/chat.ts (1)
367-378: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win — `cancelTextGeneration` ignores the provider id from send flow
Line 367 defines `cancelTextGeneration` without the `providerId` parameter, so the id tracked in `ChatSendController` is dropped. If the provider switches during send, cancellation can still hit the wrong backend.
💡 Suggested patch

```diff
-      cancelTextGeneration: async () => {
-        const providerId =
-          this._state.currentGenerationProviderId ??
-          this._aiBridge.getState().activeProviderId;
+      cancelTextGeneration: async (providerIdFromSend) => {
+        const providerId =
+          providerIdFromSend ??
+          this._state.currentGenerationProviderId ??
+          this._aiBridge.getState().activeProviderId;
         if (this._generationController.isImageProvider(providerId)) {
           this._generationController.stopImagePreviewPolling();
           await this._aiBridge.cancelImageGeneration();
           return true;
         }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/chat.ts` around lines 367 - 378, The cancelTextGeneration function currently drops the provider id tracked by ChatSendController; update cancelTextGeneration (in the ChatSendController API) to accept an optional providerId parameter and use that providerId when checking _generationController.isImageProvider(...) and when calling _aiBridge.cancelImageGeneration() or _aiBridge.cancelTextGeneration(), falling back to this._state.currentGenerationProviderId ?? this._aiBridge.getState().activeProviderId only if the parameter is undefined so cancellations target the correct backend when provider switches during send.
src-tauri/src/api/ai/mod.rs (1)
511-513: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift — `Done` event still uses request id instead of streamed assistant message id
Line 512 uses `request_id` as `message_id` on cancel. If earlier chunks were emitted with a real assistant `message_id`, the terminal event won’t correlate and the UI can leave that message pending.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/api/ai/mod.rs` around lines 511 - 513, The Done event currently emits the request_id as message_id (cancel_sink.emit(StreamEvent::Done { message_id: request_id.clone(), ... })) which breaks correlation when earlier chunks used the assistant's streamed message id; update the cancel path to emit the actual assistant message id used for chunk events (e.g., use the variable holding the streamed assistant message id such as assistant_message_id or message_id that you set when emitting chunks) instead of request_id, ensuring cancel_sink.emit(StreamEvent::Done { message_id: assistant_message_id.clone(), usage: None, ... }) so the terminal event matches prior chunk events.
🧹 Nitpick comments (1)
src/features/chat/controllers/ChatSendController.test.ts (1)
211-238: ⚡ Quick win — Assert provider id in cancellation test to lock behavior
Please also assert `cancelTextGeneration` receives the active provider id. This guards against regressions where cancellation targets the wrong backend.
💡 Suggested assertion

```diff
 expect(options.cancelTextGeneration).toHaveBeenCalledOnce();
+expect(options.cancelTextGeneration).toHaveBeenCalledWith('gpt');
 expect(streamingHandle.cancel).toHaveBeenCalledOnce();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/chat/controllers/ChatSendController.test.ts` around lines 211 - 238, Add an assertion that options.cancelTextGeneration was called with the active provider id to lock cancellation behavior: after awaiting sendPromise, assert that options.cancelTextGeneration was invoked with the provider id exposed by the test harness (use the active provider id available on the test objects — e.g. controller.activeProviderId or options.providerId depending on what createController() exposes) by checking the first argument of the call (e.g. expect(options.cancelTextGeneration).toHaveBeenCalledWith(controller.activeProviderId)).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/features/chat/controllers/ChatSendController.ts`:
- Around line 130-132: In cancelActiveSend() ensure the external cancellation
call is guarded: set this._cancelRequested = true as before, then call
this._options.cancelTextGeneration(this._activeProviderId) inside a try/catch so
any rejection is caught and does not propagate to the UI; on error, swallow or
log the error (e.g., via existing logger) but do not rethrow so cancellation
remains best-effort and safe for the UI flow.
---
Outside diff comments:
In `@src-tauri/src/api/ai/mod.rs`:
- Around line 703-704: The current branch that sets preview when has_status is
true should also fallback to an in-progress placeholder when the relevant
preview file is missing but there is an active "sdcpp" job; update the logic
around has_status/preview in src-tauri/src/api/ai/mod.rs to detect an active job
(e.g., check the active_job or job.type == "sdcpp" / job.is_active) and, if no
status snapshot/preview file exists, set preview = Some(ImageGenerationPreview {
... }) with an appropriate in-progress state/placeholder values instead of
leaving preview as None; ensure you use the existing ImageGenerationPreview
fields and keep the has_status path intact while adding this active-job
fallback.
---
Duplicate comments:
In `@src-tauri/src/api/ai/mod.rs`:
- Around line 511-513: The Done event currently emits the request_id as
message_id (cancel_sink.emit(StreamEvent::Done { message_id: request_id.clone(),
... })) which breaks correlation when earlier chunks used the assistant's
streamed message id; update the cancel path to emit the actual assistant message
id used for chunk events (e.g., use the variable holding the streamed assistant
message id such as assistant_message_id or message_id that you set when emitting
chunks) instead of request_id, ensuring cancel_sink.emit(StreamEvent::Done {
message_id: assistant_message_id.clone(), usage: None, ... }) so the terminal
event matches prior chunk events.
In `@src/features/chat/chat.ts`:
- Around line 367-378: The cancelTextGeneration function currently drops the
provider id tracked by ChatSendController; update cancelTextGeneration (in the
ChatSendController API) to accept an optional providerId parameter and use that
providerId when checking _generationController.isImageProvider(...) and when
calling _aiBridge.cancelImageGeneration() or _aiBridge.cancelTextGeneration(),
falling back to this._state.currentGenerationProviderId ??
this._aiBridge.getState().activeProviderId only if the parameter is undefined so
cancellations target the correct backend when provider switches during send.
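The suggested fallback order reduces to a small nullish-coalescing chain; this is an illustrative sketch with standalone parameters, not the actual `chat.ts` wiring.

```typescript
// Prefer the id passed by the send flow, then the id tracked for the
// current generation, then the bridge's active provider.
function resolveCancelProviderId(
  fromSend: string | undefined,
  currentGenerationProviderId: string | null,
  activeProviderId: string,
): string {
  return fromSend ?? currentGenerationProviderId ?? activeProviderId;
}
```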
---
Nitpick comments:
In `@src/features/chat/controllers/ChatSendController.test.ts`:
- Around line 211-238: Add an assertion that options.cancelTextGeneration was
called with the active provider id to lock cancellation behavior: after awaiting
sendPromise, assert that options.cancelTextGeneration was invoked with the
provider id exposed by the test harness (use the active provider id available on
the test objects — e.g. controller.activeProviderId or options.providerId
depending on what createController() exposes) by checking the first argument of
the call (e.g.
expect(options.cancelTextGeneration).toHaveBeenCalledWith(controller.activeProviderId)).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3352c4e9-8711-48e9-adbf-357affb90715
📒 Files selected for processing (18)
- src-tauri/src/api/ai/mod.rs
- src/features/chat/chat.test.ts
- src/features/chat/chat.ts
- src/features/chat/controllers/ChatHistoryController.test.ts
- src/features/chat/controllers/ChatHistoryController.ts
- src/features/chat/controllers/ChatSendController.test.ts
- src/features/chat/controllers/ChatSendController.ts
- src/features/chat/services/ChatControllerFactory.ts
- src/features/chat/services/ChatFileHandler.test.ts
- src/features/chat/services/ChatFileHandler.ts
- src/features/chat/services/ChatSendFlow.ts
- src/features/chat/ui/ChatAttachmentRenderer.ts
- src/features/chat/ui/ChatImageGenerationMessage.ts
- src/features/chat/ui/ChatImagePayload.ts
- src/features/chat/ui/ChatMessageRenderer.ts
- src/features/chat/ui/ChatStreamingMessage.ts
- src/features/chat/ui/ChatUI.test.ts
- src/features/chat/ui/ChatUiRetryStatusListener.ts
💤 Files with no reviewable changes (2)
- src/features/chat/services/ChatFileHandler.test.ts
- src/features/chat/services/ChatFileHandler.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/features/chat/controllers/ChatHistoryController.test.ts
- src/features/chat/services/ChatSendFlow.ts
ae765b4 to 4e5eb2c
Summary
This PR is a broad launcher refactor that prepares Axelate for a more reliable
nightlybaseline. It touches the core boot flow, backend contracts, chat runtime, module/download lifecycle, integrations, docs, CI, and dependency/security hygiene. The goal is not a visual-only pass; it is a stabilization pass across the launcher foundation.What Changed
Core Runtime and App Lifecycle
Backend API and Typed Contracts
Chat, AI, and Provider Flow
Providers, API Keys, and Custom Models
Engines, Modules, and Downloads
`422` pagination cases.
Integration API and Developer Experience
`Content-Length` rejection, body length validation, route ownership, and stable JSON error responses.
UI, I18n, and Frontend Consistency
`en`, `ru`, and `zh` locale resources and fixed hardcoded/missing launcher settings translations.
Security, CI, and Repository Hygiene
- `nightly` absorbed Dependabot PRs.
- `glib` alert treated as upstream Tauri/wry Linux GTK stack risk: current Axelate uses the latest available Tauri/wry versions, and the Windows target does not include that dependency.
- `.prettierignore`, root workflow scripts, Node/GitHub Actions hygiene, and Rust coverage/test scripts.
Validation
- `npm --prefix src run typecheck` passed.
- `cargo check --manifest-path src-tauri/Cargo.toml --all-targets` passed.
Review Notes
`nightly`, then this branch was synchronized with that updated base.