fix/optimizations: Expo SDK 55 upgrade, Tailscale discovery, ESPHome static IP #331

AnkushMalaker wants to merge 26 commits into
Conversation
- Updated `useAudioStreamer` to accept an options parameter for token refresh handling.
- Implemented automatic re-login on token expiration, updating the WebSocket connection with a new token.
- Enhanced error handling in WebSocket authentication to provide specific failure messages.
- Modified audio retrieval endpoints to support both Opus and WAV formats, improving efficiency and flexibility.
- Updated Docker configurations to replace Qdrant with Neo4j, adjusting environment variables and service dependencies accordingly.
- Cleaned up unused code and improved logging for better traceability.
Symptom: iPhone app worked for a day after install, then failed to launch after backgrounding. No iOS .ips crash reports were produced. Strong signal it's a JS-side startup hang (no native crash) caused by expo-updates loading a bad/partial downloaded bundle at boot — reinstall clears the cache and restores it for another day.

- Disable expo-updates via updates.enabled=false in app.json (no code calls Updates.*, so OTA was not in active use)
- Add src/utils/logger.ts: file logger at FileSystem.documentDirectory/chronicle-logs/chronicle-log.txt, 1 MB rotation, session header with updates state, ErrorUtils global handler, unhandledrejection listener (minimal sketch below)
- Add src/components/ErrorBoundary.tsx: root JS error boundary with share/retry UI
- _layout.tsx: initialize logger + wrap in ErrorBoundary
- ConnectionLogContext: mirror every event to file logger
- diagnostics.tsx: Share Log File / Clear File buttons
- eas.json: add ascAppId for non-interactive TestFlight submits
- .easignore (root + app): exclude node_modules, android/app/build, ios/Pods, build artifacts (project tarball 2.1 GB -> 154 MB)
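A minimal sketch of the rotation logic described above, assuming the classic expo-file-system API that the later SDK 55 fix settles on; the real src/utils/logger.ts is not reproduced here:

```ts
// Hedged sketch of the logger's 1 MB rotation (assumed shape, not the PR's code).
import * as FileSystem from 'expo-file-system/legacy';

const LOG_DIR = `${FileSystem.documentDirectory}chronicle-logs/`;
const LOG_PATH = `${LOG_DIR}chronicle-log.txt`;
const MAX_BYTES = 1024 * 1024; // 1 MB rotation threshold

export async function appendLog(line: string): Promise<void> {
  await FileSystem.makeDirectoryAsync(LOG_DIR, { intermediates: true });
  const info = await FileSystem.getInfoAsync(LOG_PATH);
  if (info.exists && info.size > MAX_BYTES) {
    // Rotate: keep one previous generation, dropping anything older.
    await FileSystem.deleteAsync(`${LOG_PATH}.1`, { idempotent: true });
    await FileSystem.moveAsync({ from: LOG_PATH, to: `${LOG_PATH}.1` });
  }
  // The classic API has no append mode, so read-modify-write the current file.
  const current = await FileSystem.getInfoAsync(LOG_PATH);
  const prior = current.exists ? await FileSystem.readAsStringAsync(LOG_PATH) : '';
  await FileSystem.writeAsStringAsync(LOG_PATH, `${prior}${line}\n`);
}
```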
- expo@~55.0.15
- @siteed/expo-audio-studio@^2.18.6 (2.18.1 pinned old expo-modules-core@~2.4.0, incompatible with SDK 55)
- @types/react@~19.2.0, typescript@~5.9.2
- Regenerated ios/ via 'expo prebuild --clean'
- Removed newArchEnabled from app.json (default in SDK 55)
- Removed android.usesCleartextTraffic (duplicated by expo-build-properties plugin)
- eas.json: pin testflight ios.image to macos-sequoia-15.6-xcode-26.2 to satisfy Apple's Apr 28 2026 Xcode 26 upload requirement
EAS Build runs strict 'npm ci' which rejected the lock file produced by earlier 'npm install --legacy-peer-deps' commands (react-dom@19.2.5 pulled transitively by expo-router's radix-ui stack requires react@^19.2.5, but root had react@19.2.0).
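A plausible recovery sequence for this kind of lock-file drift (assumed steps; the PR's actual commands aren't recorded here):

```bash
# Regenerate the lock file without --legacy-peer-deps so strict `npm ci` accepts it.
rm -rf node_modules package-lock.json
npm install   # re-resolves react to satisfy react-dom's peer range (react@^19.2.5)
npm ci        # rehearse the strict install locally before handing it to EAS Build
```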
expo-router in SDK 55 imports @expo/metro-runtime transitively from node_modules/expo-router/node_modules/. With disableHierarchicalLookup=true Metro can only resolve from the paths we explicitly listed, causing 'Unable to resolve module @expo/metro-runtime' during EAGER_BUNDLE. This was the SDK 53-era monorepo recommendation; SDK 55's expo-router deps require hierarchical lookup.
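A sketch of the corresponding metro.config.js change; the PR's exact config isn't shown here, so treat the shape as assumed:

```js
// metro.config.js — assumed layout, not the PR's verbatim file.
const { getDefaultConfig } = require('expo/metro-config');

const config = getDefaultConfig(__dirname);

// The SDK 53-era monorepo recommendation set this to true with an explicit
// nodeModulesPaths list. SDK 55's expo-router resolves @expo/metro-runtime
// from a nested node_modules, so hierarchical lookup must stay enabled.
config.resolver.disableHierarchicalLookup = false;

module.exports = config;
```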
1) index.tsx: onDeviceConnect/onDeviceDisconnect referenced 'orchestrator' and 'autoReconnect', which are declared later in the function (TDZ). Accessing 'orchestrator.handleStartAudioListeningAndStreaming' in the deps array caused 'Cannot read property ... of undefined' at render. Introduce orchestratorRef/autoReconnectRef and assign them after the hooks are declared (see the sketch below). Pre-existing latent bug that became reproducible on SDK 55.

2) logger.ts: Expo SDK 55 removed the default 'expo-file-system' runtime in favour of 'expo-file-system/legacy' for the classic API (or the new File/Directory classes). Calls to getInfoAsync etc. were throwing on every write, so the file logger silently produced nothing. Switch to 'expo-file-system/legacy' and add expo-file-system as a direct dep so TS can resolve its types (it was only transitively installed under expo/node_modules/).
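A minimal sketch of the ref pattern from fix (1), with hypothetical names standing in for the app's actual hooks:

```tsx
// Hypothetical sketch of fix (1): the callback reads refs at call time, so its
// deps array never touches bindings still in the temporal dead zone.
import { useCallback, useRef } from 'react';

type Orchestrator = { handleStartAudioListeningAndStreaming: () => void };

function useDeviceHandlers() {
  const orchestratorRef = useRef<Orchestrator | null>(null);
  const autoReconnectRef = useRef(false);

  // Declared before `orchestrator`/`autoReconnect` exist; safe because it only
  // dereferences the refs when a device actually connects.
  const onDeviceConnect = useCallback(() => {
    if (autoReconnectRef.current) {
      orchestratorRef.current?.handleStartAudioListeningAndStreaming();
    }
  }, []);

  // Later in the component, after the real hooks are declared:
  //   orchestratorRef.current = orchestrator;
  //   autoReconnectRef.current = autoReconnect;
  return { onDeviceConnect, orchestratorRef, autoReconnectRef };
}
```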
📝 Walkthrough

This comprehensive update introduces Expo ignore files, upgrades app dependencies (Expo

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant App as Expo App
participant Streamer as useAudioStreamer
participant WebSocket as WebSocket
participant Backend as Backend API
participant Redis as Redis
App->>Streamer: useAudioStreamer({onTokenRefreshed})
activate Streamer
Streamer->>Streamer: startStreaming()
Streamer->>WebSocket: connect(url with token)
activate WebSocket
loop Normal Operation
WebSocket->>Backend: streaming audio
Backend->>WebSocket: process audio
end
Backend->>WebSocket: {type: "token_expired"}
WebSocket->>Streamer: onmessage detected token_expired
Streamer->>Streamer: authFailedRef.current = true
deactivate WebSocket
WebSocket->>Streamer: onclose
Streamer->>Streamer: attemptReLogin()
Streamer->>Backend: POST /login (saved credentials)
Backend->>Redis: validate & issue new token
Redis-->>Streamer: new token
Streamer->>Streamer: updateURL(newToken)
Streamer->>Streamer: onTokenRefreshed(newToken)
App->>App: settings.handleAuthStatusChange()
Streamer->>WebSocket: reconnect(url with new token)
activate WebSocket
WebSocket->>Backend: resume streaming
deactivate WebSocket
deactivate Streamer
```
```mermaid
sequenceDiagram
participant Client as Client/Browser
participant Installer as install.sh
participant Discovery as minidisc Discovery
participant Edge as edge-agent
participant Tailnet as Tailscale Network
participant Backend as Chronicle Backend
participant Dashboard as WebUI Dashboard
Client->>Installer: ./install.sh <service-name> --branch
activate Installer
Installer->>Installer: validate Tailscale connected
Installer->>Installer: resolve CHRONICLE_HOME
Installer->>Installer: clone/update Chronicle repo
Installer->>Discovery: uv run discover_service(CHRONICLE_BACKEND)
activate Discovery
Discovery->>Tailnet: query minidisc for CHRONICLE_BACKEND
Tailnet-->>Discovery: {endpoint, port}
Discovery-->>Installer: backend_url
deactivate Discovery
Installer->>Installer: run service config (init.py/setup.sh)
Installer->>Installer: docker compose up --build (with edge profile)
activate Edge
Edge->>Edge: read ADVERTISE env (service:port)
Edge->>Tailnet: minidisc.advertise_service(port, name, labels)
Tailnet->>Tailnet: register service in Tailnet
deactivate Edge
Client->>Dashboard: open Network page (admin only)
activate Dashboard
Dashboard->>Backend: GET /api/system/network
Backend->>Tailnet: list_all_services()
Tailnet->>Backend: [all chronicle-* services]
Backend->>Backend: compute_hybrid_scores + health checks
Backend-->>Dashboard: {tailscale_available, advertising, discovered_services, connected_devices}
Dashboard->>Dashboard: render Network status + service cards
deactivate Dashboard
deactivate Installer
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
| Metric | Count |
|---|---|
| ✅ Passed | 109 |
| ❌ Failed | 3 |
| 📊 Total | 112 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-no-api - HTML reports
- robot-test-results-xml-no-api - XML output
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (8)
backends/advanced/src/advanced_omi_backend/openai_factory.py (1)
20-51: ⚠️ Potential issue | 🟠 Major

Cached `AsyncOpenAI` clients will fail when accessed across different event loops.

`openai.AsyncOpenAI` lazily binds its internal `httpx.AsyncClient` to the event loop active at first use. The codebase runs async code in multiple event loop contexts:

- Main server: FastAPI/uvicorn with a single long-lived event loop
- Cron scheduler (`cron.py`): uses `asyncio.run(run_scheduler())`, which creates a separate, short-lived event loop
- Worker processes: may create their own event loops

Since `_client_cache` is module-level and keyed only by `(api_key, base_url, is_async)`, an `AsyncOpenAI` client cached after first use in the cron loop (e.g., by `surface_error_suggestions()` → `async_generate()` → `create_openai_client(..., is_async=True)`) will be reused and fail if called from the main server loop with `RuntimeError: Event loop is closed` or similar errors.

Fix: Key the async cache by both config and the running event loop identity, e.g., `cache_key = (api_key, base_url, True, id(asyncio.get_running_loop()))`, or skip caching for async clients if cross-loop access is unavoidable.

Secondary issues in the same function:

- The check-then-set pattern is not atomic under concurrent first-use; threads may construct two clients. Add a `threading.Lock` if concurrent cold-start matters.
- The cache is unbounded; rotating keys (e.g., per-user or per-request) will grow without bound and pin socket resources.
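A sketch of the loop-keyed cache this suggests; the module structure is assumed rather than taken from the actual openai_factory.py:

```python
# Assumed structure, not the real openai_factory.py.
import asyncio
import threading

_client_cache: dict = {}
_cache_lock = threading.Lock()

def _async_cache_key(api_key: str, base_url: str) -> tuple:
    # Keying by the running loop's id avoids handing out an AsyncOpenAI client
    # whose internal httpx.AsyncClient is bound to a closed loop. Must be
    # called from inside a running event loop.
    return (api_key, base_url, True, id(asyncio.get_running_loop()))

def _get_or_create(key: tuple, factory):
    # The lock makes the check-then-set atomic under concurrent cold starts.
    with _cache_lock:
        if key not in _client_cache:
            _client_cache[key] = factory()
        return _client_cache[key]
```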
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/openai_factory.py` around lines 20 - 51, The caching mechanism for async OpenAI clients in create_openai_client uses only (api_key, base_url, is_async) as the key, which causes reuse of async clients across different event loops, leading to runtime errors. To fix this, update the cache key for async clients to include the current event loop identity by adding id(asyncio.get_running_loop()) to the key tuple. For example, change the cache key to (api_key, base_url, True, id(asyncio.get_running_loop())) when is_async is True. Additionally, consider adding a threading.Lock around the cache access in create_openai_client to avoid race conditions during concurrent client creation, and think about bounding the cache size to avoid unbounded growth from varied keys.

config/README.md (1)
8-8: ⚠️ Potential issue | 🟡 Minor

Stale reference to "vector store" in model registry bullet.

Line 61 removed `vector_store` from the documented `model_type` values, but line 8 still describes the model registry as including "vector store". Keep the two in sync.

📝 Proposed fix

```diff
-- Contains model registry (LLM, STT, TTS, embeddings, vector store)
+- Contains model registry (LLM, STT, TTS, embeddings)
```
Verify each finding against the current code and only fix it if needed. In `@config/README.md` at line 8, The README's model registry description still lists "vector store" while the documented model_type values (model_type) no longer include vector_store; update the top-level bullet that currently reads "Contains model registry (LLM, STT, TTS, embeddings, vector store)" to match the model_type list (e.g., remove "vector store" or replace it with the current term used in model_type) so both the model registry description and the model_type values are consistent.

backends/advanced/src/advanced_omi_backend/auth.py (1)
25-29: ⚠️ Potential issue | 🔴 Critical

Duplicate `JWT_LIFETIME_SECONDS` assignment silently discards the env value.

Line 26 reads `JWT_LIFETIME_SECONDS` from the environment, but line 29 immediately reassigns it to a hardcoded `86400`, so the env var has no effect. Operators setting a shorter (or longer) JWT lifetime will get the default regardless, and this value drives both `cookie_max_age` and the `JWTStrategy` lifetime.

🐛 Proposed fix

```diff
 load_dotenv()
 JWT_LIFETIME_SECONDS = int(os.getenv("JWT_LIFETIME_SECONDS", "86400"))
-
-# JWT configuration
-JWT_LIFETIME_SECONDS = 86400  # 24 hours
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/auth.py` around lines 25 - 29, Remove the duplicate hardcoded assignment so the environment value is honored: keep the existing env-based initialization of JWT_LIFETIME_SECONDS (JWT_LIFETIME_SECONDS = int(os.getenv("JWT_LIFETIME_SECONDS", "86400"))) and delete the later line that reassigns JWT_LIFETIME_SECONDS = 86400; ensure any usages such as cookie_max_age and the JWTStrategy lifetime reference the single env-derived JWT_LIFETIME_SECONDS constant.

backends/advanced/webui/src/pages/Conversations.tsx (1)
1686-1688: ⚠️ Potential issue | 🔴 Critical

Replace the global `JSX` namespace reference for React 19 compatibility.

React 19 removes the global `JSX` namespace, so `JSX.Element[]` will fail TypeScript compilation after upgrade. Import `ReactElement` from React and use that type instead.

🛠️ Proposed local fix

```diff
-import { useState, useEffect, useRef, useCallback, useMemo } from 'react'
+import { useState, useEffect, useRef, useCallback, useMemo, type ReactElement } from 'react'

-  const renderedSegments: JSX.Element[] = []
+  const renderedSegments: ReactElement[] = []
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/webui/src/pages/Conversations.tsx` around lines 1686 - 1688, The file uses the global JSX namespace type for renderedSegments which will break under React 19; import ReactElement from 'react' and change the type of renderedSegments from JSX.Element[] to ReactElement[] (locate the declaration for renderedSegments in the Conversations component/Conversations.tsx and update its type and add the React import if missing).

backends/advanced/webui/src/pages/ConversationDetail.tsx (1)
350-359: ⚠️ Potential issue | 🟡 Minor

Use the selected audio extension for downloads.

When `AUDIO_FORMAT === 'opus'`, this downloads OGG/Opus bytes with a `.wav` filename, which can confuse players and users.

🐛 Proposed fix

```diff
       const blob = await resp.blob()
       const url = URL.createObjectURL(blob)
       const a = document.createElement('a')
+      const extension = AUDIO_FORMAT === 'opus' ? 'ogg' : 'wav'
       a.href = url
-      a.download = `${conversation?.title || id}.wav`
+      a.download = `${conversation?.title || id}.${extension}`
       a.click()
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/webui/src/pages/ConversationDetail.tsx` around lines 350 - 359, The download filename always uses ".wav" even when AUDIO_FORMAT is 'opus' (which returns OGG/Opus bytes); change the code in ConversationDetail.tsx where the anchor download is set so it chooses the proper extension based on AUDIO_FORMAT (e.g., ext = AUDIO_FORMAT === 'opus' ? '.ogg' : '.wav' or use a small mapping for other formats), then set a.download = `${conversation?.title || id}${ext}` before a.click() so the downloaded file extension matches the actual audio bytes.

backends/advanced/src/advanced_omi_backend/routers/modules/conversation_routes.py (1)
312-383: ⚠️ Potential issue | 🟡 Minor

Constrain the audio format query parameter to prevent silent fallthrough and resolve builtin shadowing.

The `format` parameter currently accepts any string value. Invalid formats (e.g., `format=mp3`) silently fall through to WAV decoding with no validation. Additionally, `format` shadows the Python builtin, triggering Ruff A002. Use a `Literal` type with an aliased parameter name instead.

Proposed fix

```diff
-from typing import Optional
+from typing import Literal, Optional
 ...
-    format: str = Query(default="opus", description="Audio format: opus or wav"),
+    audio_format: Literal["opus", "wav"] = Query(
+        default="opus",
+        alias="format",
+        description="Audio format: opus or wav",
+    ),
 ...
-    if format == "opus":
+    if audio_format == "opus":
         try:
             opus_data = await get_opus_for_time_range(
                 conversation_id=conversation_id, start_time=start, end_time=end
             )
         except ValueError as e:
-            raise HTTPException(status_code=404, detail=str(e))
+            raise HTTPException(status_code=404, detail=str(e)) from e
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/routers/modules/conversation_routes.py` around lines 312 - 383, The query parameter named format in the conversation audio route should be constrained and renamed to avoid builtin shadowing: change the signature to use a Literal type (Literal["opus","wav"]) and rename the parameter to e.g. audio_format with Query(..., alias="format", description=...) so incoming requests still use ?format= but the local name doesn't shadow builtins; then update all local uses (e.g., the conditional that checks if format == "opus" and the media_type/filename construction) to reference audio_format and validate only those two values so invalid formats return a 400 instead of silently falling through to WAV decoding.

app/app.json (1)
37-56: ⚠️ Potential issue | 🟠 Major

Android permissions are duplicated — every entry appears twice.

Lines 38–46 add the nine permissions, but lines 47–55 contain the exact same nine entries again. Looks like a copy/paste or bad merge during the SDK 55 refactor. Expo/Android will dedupe at build time, but this is confusing in source and will trip any tooling that reads/iterates this array.

🔧 Proposed fix — remove the duplicate block

```diff
       "permissions": [
         "android.permission.BLUETOOTH",
         "android.permission.BLUETOOTH_ADMIN",
         "android.permission.BLUETOOTH_CONNECT",
         "android.permission.ACCESS_NETWORK_STATE",
         "android.permission.FOREGROUND_SERVICE",
         "android.permission.FOREGROUND_SERVICE_DATA_SYNC",
         "android.permission.POST_NOTIFICATIONS",
         "android.permission.RECORD_AUDIO",
-        "android.permission.CAMERA",
-        "android.permission.BLUETOOTH",
-        "android.permission.BLUETOOTH_ADMIN",
-        "android.permission.BLUETOOTH_CONNECT",
-        "android.permission.ACCESS_NETWORK_STATE",
-        "android.permission.FOREGROUND_SERVICE",
-        "android.permission.FOREGROUND_SERVICE_DATA_SYNC",
-        "android.permission.POST_NOTIFICATIONS",
-        "android.permission.RECORD_AUDIO",
         "android.permission.CAMERA"
       ]
```
Verify each finding against the current code and only fix it if needed. In `@app/app.json` around lines 37 - 56, The permissions array in app.json contains duplicate entries (each Android permission like "android.permission.BLUETOOTH", "android.permission.BLUETOOTH_ADMIN", "android.permission.BLUETOOTH_CONNECT", "android.permission.ACCESS_NETWORK_STATE", "android.permission.FOREGROUND_SERVICE", "android.permission.FOREGROUND_SERVICE_DATA_SYNC", "android.permission.POST_NOTIFICATIONS", "android.permission.RECORD_AUDIO", "android.permission.CAMERA" appears twice); remove the duplicated block so the "permissions" array contains each permission exactly once, leaving a single list of the nine unique permission strings.

app/src/hooks/useAudioStreamer.ts (1)
292-308: ⚠️ Potential issue | 🟠 Major

Stop the existing socket before marking the new session active.

`stopStreaming()` sets `manuallyStoppedRef.current = true` and stops the foreground service. Because it runs after the new session is initialized, a replacement stream can lose reconnect/auth recovery and background protection.

🐛 Proposed fix

```diff
-    currentUrlRef.current = trimmed;
-    manuallyStoppedRef.current = false;
-    authFailedRef.current = false;
+    if (websocketRef.current) {
+      await stopStreaming(); // close any existing session before initializing the next one
+    }
+
+    currentUrlRef.current = trimmed;
+    manuallyStoppedRef.current = false;
+    authFailedRef.current = false;
@@
-    console.log(`[AudioStreamer] Initializing WebSocket: ${trimmed}`);
-    if (websocketRef.current) await stopStreaming(); // close any existing
+    console.log(`[AudioStreamer] Initializing WebSocket: ${trimmed}`);
```
Verify each finding against the current code and only fix it if needed. In `@app/src/hooks/useAudioStreamer.ts` around lines 292 - 308, The code marks a new session active before closing the old socket, but stopStreaming() sets manuallyStoppedRef.current = true and stops the foreground service which breaks reconnect/auth recovery; fix by calling await stopStreaming() (the existing function) before setting currentUrlRef.current and before calling startForegroundServiceNotification, then only set manuallyStoppedRef.current = false and authFailedRef.current = false after stopStreaming() completes so the new session isn't poisoned by the prior stop.
🟡 Minor comments (18)
backends/advanced/init.py-1233-1238 (1)
1233-1238: ⚠️ Potential issue | 🟡 Minor

VITE_* values are only written on the successful Caddyfile-generation branch — intentional?

If the Caddyfile template is missing, or Caddyfile exists as a directory, or an exception occurs, `HTTPS_ENABLED` is forced back to `"false"`, but the prior `.env` may still hold stale `VITE_BACKEND_URL`/`VITE_HMR_PORT`/`VITE_ALLOWED_HOSTS` from a previous successful HTTPS run (since `generate_env_file` now preserves previous values). That can leave the webui-dev container pointed at an HTTPS assumption while the stack is actually HTTP. Consider explicitly clearing or resetting these keys in the HTTPS-disabled branches.
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/init.py` around lines 1233 - 1238, The VITE_* entries are only set when Caddyfile generation succeeded, leaving stale values when HTTPS_ENABLED is later forced to "false"; update the branches that disable HTTPS (the paths that set self.config["HTTPS_ENABLED"]="false" when the Caddyfile is missing, a directory, or an exception) to explicitly clear or reset self.config["VITE_BACKEND_URL"], self.config["VITE_HMR_PORT"], and self.config["VITE_ALLOWED_HOSTS"] (e.g., set to "" or to HTTP-appropriate defaults) and then call generate_env_file so the .env is updated; locate uses of generate_env_file and the config assignments to modify those failure/disabled-HTTPS branches accordingly.

backends/advanced/src/advanced_omi_backend/services/memory/vault_manager.py-35-39 (1)
35-39: ⚠️ Potential issue | 🟡 Minor

Harden `_safe_path` against degenerate inputs.

`Path(...).name` correctly strips traversal components (`Path("../../etc").name == "etc"`), but some inputs sanitize to empty strings:

- `Path(".").name == ""`
- `Path("..").name == ""`
- `Path("").name == ""`
- `Path("/").name == ""`

An empty `safe_uid` or `safe_cid` collapses the path — e.g., `base_dir / "" / "foo.md"` becomes `base_dir/foo.md`, which can cross user boundaries or overwrite sibling docs. User IDs are normally ObjectIds so this is unlikely in practice, but this is the security boundary for per-user isolation, so an explicit check is worth it.

🛡️ Proposed hardening

```diff
 def _safe_path(self, user_id: str, conv_id: str) -> Path:
     """Get the .md path with path-traversal guard."""
     safe_uid = Path(user_id).name
     safe_cid = Path(conv_id).name
+    if not safe_uid or not safe_cid:
+        raise ValueError(
+            f"Invalid vault path components: user_id={user_id!r}, conv_id={conv_id!r}"
+        )
     return self._base_dir / safe_uid / f"{safe_cid}.md"
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/memory/vault_manager.py` around lines 35 - 39, The _safe_path method can produce empty safe_uid or safe_cid for degenerate inputs (".", "..", "", "/") which collapses paths and breaks per-user isolation; update advanced_omi_backend.services.memory.vault_manager.VaultManager._safe_path to validate safe_uid and safe_cid after computing Path(...).name and reject or sanitize empty results (e.g., raise a ValueError with a clear message) so you never return self._base_dir / "" / ...; keep the existing Path(...).name logic for traversal stripping but add the explicit emptiness check and a descriptive error to prevent accidental cross-user file access.

backends/advanced/src/advanced_omi_backend/services/memory/neo4j_utils.py-90-93 (1)
90-93: ⚠️ Potential issue | 🟡 Minor

Don't drop valid short section bodies.

`len(stripped) > 5` removes legitimate short content such as `OK`, `Done`, or a short name/list item. The placeholder set already covers empty/none cases.

Proposed fix

```diff
 def _section_has_content(text: str) -> bool:
     """Return False for placeholder text like '- None' or empty."""
     stripped = text.strip().lower()
-    return stripped not in _EMPTY_PATTERNS and len(stripped) > 5
+    return stripped not in _EMPTY_PATTERNS
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/memory/neo4j_utils.py` around lines 90 - 93, The helper _section_has_content currently rejects any short but valid section text by requiring len(stripped) > 5; remove this arbitrary length check and instead rely solely on comparing stripped to the placeholder set _EMPTY_PATTERNS (and still ensure non-empty if desired). Update _section_has_content to return True when stripped is not in _EMPTY_PATTERNS (and optionally when stripped != ""), referencing the function name _section_has_content and the placeholder constant _EMPTY_PATTERNS so callers keep valid short values like "OK" or "Done".

edge/agent.py-55-64 (1)
55-64: ⚠️ Potential issue | 🟡 Minor

Validate advertised service names and port ranges.

Entries like `:8767`, `chronicle-asr:0`, or `chronicle-asr:99999` currently pass parsing and fail later during advertisement.

Proposed fix

```diff
     name, port_str = entry.rsplit(":", 1)
     try:
-        _add(name.strip(), int(port_str.strip()))
+        name = name.strip()
+        port = int(port_str.strip())
+        if not name or not (1 <= port <= 65535):
+            raise ValueError
+        _add(name, port)
     except ValueError:
         logger.warning("Skipping invalid ADVERTISE entry (bad port): %s", entry)
```
Verify each finding against the current code and only fix it if needed. In `@edge/agent.py` around lines 55 - 64, Validate each parsed ADVERTISE entry before calling _add: ensure the service name (variable name from entry.rsplit) is non-empty after strip and does not start with ":" (i.e., has characters), and ensure the parsed port (port_str) is an integer within the valid range 1..65535; if validation fails log the same warning patterns (use logger.warning with descriptive messages like "Skipping invalid ADVERTISE entry (empty name): %s" or "Skipping invalid ADVERTISE entry (bad port): %s") and continue without calling _add. Use the existing variables entry, name, port_str and the _add function to locate where to insert these checks in edge/agent.py.

discovery.py-89-102 (1)
89-102: ⚠️ Potential issue | 🟡 Minor

Remove the unused `timeout` parameter.

`discover_service(..., timeout=...)` accepts a timeout argument but never passes it to the underlying `minidisc.find_service()` call. The minidisc-python library does not support timeout configuration in its API, so this parameter creates a misleading contract with callers who expect their timeout to be honored.
Verify each finding against the current code and only fix it if needed. In `@discovery.py` around lines 89 - 102, The discover_service function currently declares a timeout parameter that is unused and misleading; remove the timeout parameter from the function signature and any default value, delete or update any references to timeout in the docstring, and ensure the implementation continues to call minidisc.find_service(name, labels or {}) unchanged; update any callers if present to stop passing timeout (or adjust call sites) and run tests to verify no breakage.

backends/advanced/src/advanced_omi_backend/workers/finetuning_jobs.py-438-452 (1)
438-452: ⚠️ Potential issue | 🟡 Minor

Preserve the "recent memories" window.

This now calls `get_all_memories()` without applying `MEMORY_LOOKBACK_SECONDS`, so jargon extraction can drift to stale memories depending on provider ordering. Keep the provider-agnostic interface, but pass/apply a since filter or sort/filter before sending text to the LLM.
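One hedged way to apply the window client-side when the provider interface lacks a since parameter; the memory field names here are assumed:

```python
# Sketch only — "timestamp" as an epoch-seconds field is an assumption, not
# confirmed against the actual provider payloads.
import time

def filter_recent_memories(memories, lookback_seconds, max_recent=None):
    cutoff = time.time() - lookback_seconds
    recent = [m for m in memories if m.get("timestamp", 0.0) >= cutoff]
    recent.sort(key=lambda m: m.get("timestamp", 0.0), reverse=True)
    return recent[:max_recent] if max_recent is not None else recent
```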
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/workers/finetuning_jobs.py` around lines 438 - 452, _extract_jargon_for_user currently calls memory_service.get_all_memories without applying the recent-window filter, allowing stale memories to be used; update the call to respect MEMORY_LOOKBACK_SECONDS by either passing a since/timestamp parameter to memory_service.get_all_memories (if the API supports it) or, after fetching, filter/sort the returned memories by their timestamp and drop any older than now - MEMORY_LOOKBACK_SECONDS (while still capping with MAX_RECENT_MEMORIES) before assembling text for the LLM.

app/app/diagnostics.tsx-98-100 (1)
98-100: ⚠️ Potential issue | 🟡 Minor

Normalize the iOS file URL before sharing.

`getLogPath()` returns a `file://` URI from Expo's `FileSystem.documentDirectory`. Line 99 prefixes it again with `file://`, creating a malformed `file://file:///...` URL that causes iOS sharing to fail. Normalize the URL before passing it to `Share.share()`.

🐛 Proposed fix

```diff
     }
     if (Platform.OS === 'ios') {
-      await Share.share({ url: `file://${getLogPath()}`, message: contents.slice(-4000) });
+      const logPath = getLogPath();
+      const logUrl = logPath.startsWith('file://') ? logPath : `file://${logPath}`;
+      await Share.share({ url: logUrl, message: contents.slice(-4000) });
     } else {
       await Share.share({ message: contents.slice(-4000) });
     }
```
Verify each finding against the current code and only fix it if needed. In `@app/app/diagnostics.tsx` around lines 98 - 100, The iOS sharing code is prepending "file://" to a path that may already include the scheme, producing "file://file://..." and breaking Share.share; update the branch that runs when Platform.OS === 'ios' (the Share.share call) to normalize the URL from getLogPath() first—if getLogPath() already starts with "file://" strip any duplicate scheme and then ensure a single "file://" prefix before passing it into Share.share (keep using getLogPath() and Share.share to locate and share the file).

tests/integration/integration_test.robot-69-82 (1)
69-82: ⚠️ Potential issue | 🟡 Minor

Loosen content-type assertions to allow valid parameters.

`Content-Type` may include parameters like `audio/ogg; codecs=opus` per HTTP standards (RFC 5334). Exact string equality makes this test brittle even when the endpoint returns valid audio.

🧪 Proposed fix

```diff
     # Verify default audio format is opus/ogg
     ${audio_response}=    GET On Session    api    /api/audio/get_audio/${conversation_id}    expected_status=200
-    Should Be Equal As Strings    ${audio_response.headers}[content-type]    audio/ogg
+    ${audio_content_type}=    Convert To Lower Case    ${audio_response.headers}[content-type]
+    Should Start With    ${audio_content_type}    audio/ogg
     ${original_audio_size}=    Get Length    ${audio_response.content}
     Should Be True    ${original_audio_size} > 1000    Original audio file too small: ${original_audio_size} bytes
     Log    Original audio (opus) accessible: ${original_audio_size} bytes    INFO

     # Verify explicit wav format returns audio/wav
     ${wav_params}=    Create Dictionary    format=wav
     ${wav_response}=    GET On Session    api    /api/audio/get_audio/${conversation_id}    params=${wav_params}    expected_status=200
-    Should Be Equal As Strings    ${wav_response.headers}[content-type]    audio/wav
+    ${wav_content_type}=    Convert To Lower Case    ${wav_response.headers}[content-type]
+    Should Start With    ${wav_content_type}    audio/wav
     ${wav_audio_size}=    Get Length    ${wav_response.content}
```
Verify each finding against the current code and only fix it if needed. In `@tests/integration/integration_test.robot` around lines 69 - 82, The test currently asserts exact equality of the Content-Type header which is brittle; update the checks on ${audio_response.headers}[content-type] and ${wav_response.headers}[content-type] (used with GET On Session responses) to allow parameters by replacing the two Should Be Equal As Strings assertions with a looser check such as Should Start With ${audio_response.headers}[content-type] audio/ogg and Should Start With ${wav_response.headers}[content-type] audio/wav or use Should Match Regexp with a pattern like ^audio/ogg(\s*;.*)?$ and ^audio/wav(\s*;.*)?$ so content-type parameters (e.g. ; codecs=opus) are accepted.

backends/advanced/src/advanced_omi_backend/services/memory/base.py-17-17 (1)
17-17: ⚠️ Potential issue | 🟡 Minor

Sort `__all__` to satisfy Ruff.

Ruff reports RUF022 on this line.

🧹 Proposed fix

```diff
-__all__ = ["MemoryEntry", "MemoryServiceBase", "LLMProviderBase"]
+__all__ = ["LLMProviderBase", "MemoryEntry", "MemoryServiceBase"]
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/memory/base.py` at line 17, The __all__ list is not alphabetically sorted, triggering Ruff RUF022; update the __all__ definition in base.py to list the exported names in sorted order: "LLMProviderBase", "MemoryEntry", "MemoryServiceBase" (replace the current __all__ = ["MemoryEntry", "MemoryServiceBase", "LLMProviderBase"] with the sorted variant) so the symbol export order satisfies the linter.

edge/stop.sh-9-13 (1)
9-13: ⚠️ Potential issue | 🟡 Minor

Reject extra arguments instead of ignoring them.

The usage accepts exactly one service name, but `-lt 1` silently ignores trailing arguments.

🛠️ Proposed fix

```diff
-if [[ $# -lt 1 ]]; then
+if [[ $# -ne 1 ]]; then
   echo "Usage: $0 <service-name>"
   echo "Example: $0 speaker-recognition"
   exit 1
 fi
```
Verify each finding against the current code and only fix it if needed. In `@edge/stop.sh` around lines 9 - 13, The script currently checks if [[ $# -lt 1 ]] which allows extra arguments; change the argument validation to require exactly one argument (use [[ $# -ne 1 ]] or equivalent) so trailing arguments are rejected, print the existing usage/Example message when the check fails, and exit with status 1; update the conditional that contains the echo "Usage: $0 <service-name>" and echo "Example: $0 speaker-recognition" to use this exact-one-argument check so extra args are not ignored.

extras/havpe-relay/init.py-243-249 (1)
243-249: ⚠️ Potential issue | 🟡 Minor

Allow clearing an existing ESPHome IP.

With `existing` used as the prompt default, pressing Enter keeps the old IP even though the prompt says blank means auto-detect.

🐛 Proposed fix

```diff
         existing = self.read_existing_env_value("ESPHOME_DEVICE_IP")
+        if existing:
+            self.console.print(f"[blue][INFO][/blue] Current ESPHome device IP: {existing}")
+            if Confirm.ask("Keep existing ESPHome device IP?", default=True):
+                self.config["ESPHOME_DEVICE_IP"] = existing
+                return
+
         esphome_ip = self.prompt_value(
             "ESPHome device IP (leave blank to auto-detect from TCP client)",
-            existing or "",
+            "",
         )
-        if esphome_ip:
-            self.config["ESPHOME_DEVICE_IP"] = esphome_ip
+        if esphome_ip.strip():
+            self.config["ESPHOME_DEVICE_IP"] = esphome_ip.strip()
```
Verify each finding against the current code and only fix it if needed. In `@extras/havpe-relay/init.py` around lines 243 - 249, The prompt currently uses the existing ESPHome IP as the default so pressing Enter keeps the old value; change the behavior so the prompt default is an empty string (so Enter produces an empty value) and then handle an empty response by removing/clearing the stored value. Concretely: in the block that calls self.read_existing_env_value("ESPHOME_DEVICE_IP") and then self.prompt_value(...), pass "" (not existing) as the default, and after getting esphome_ip, if esphome_ip is truthy set self.config["ESPHOME_DEVICE_IP"] = esphome_ip, otherwise ensure self.config does not contain "ESPHOME_DEVICE_IP" (delete/clear it) so blank means auto-detect.

edge/README.md-15-18 (1)
15-18: ⚠️ Potential issue | 🟡 Minor

Point install commands at a branch that contains this script.

Line 16 fetches `main`, but this PR targets `dev`; users following the new docs before promotion to `main` can download a missing or stale `edge/install.sh`. Line 38 also references a feature branch that looks stale.

📝 Suggested documentation adjustment

```
curl -sSL https://raw.githubusercontent.com/SimpleOpenSoftware/chronicle/main/edge/install.sh \
  | bash -s -- <service-name>
```

```diff
-curl -sSL https://raw.githubusercontent.com/SimpleOpenSoftware/chronicle/main/edge/install.sh \
+curl -sSL https://raw.githubusercontent.com/SimpleOpenSoftware/chronicle/dev/edge/install.sh \
   | bash -s -- <service-name>

 # Deploy from a specific branch
-curl -sSL https://raw.githubusercontent.com/SimpleOpenSoftware/chronicle/feat/tailscale-discovery/edge/install.sh \
-  | bash -s -- havpe-relay --branch feat/tailscale-discovery
+curl -sSL https://raw.githubusercontent.com/SimpleOpenSoftware/chronicle/<branch-name>/edge/install.sh \
+  | bash -s -- havpe-relay --branch <branch-name>
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@edge/README.md` around lines 15 - 18, The install curl in the README currently hardcodes the "main" branch for edge/install.sh (the curl snippet starting with "curl -sSL https://raw.githubusercontent.com/.../main/edge/install.sh"), and there's another hardcoded feature branch referenced around lines 37-39; update both to point to the branch that this PR/doc targets (use "dev" or a branch substitution variable) so users fetching the script get the correct version before promotion to main. Locate the curl snippet for edge/install.sh and the feature branch reference and replace "main" (and the stale feature branch name) with the correct branch name or a parameterized branch placeholder so the docs always reference the intended branch.

backends/advanced/webui/src/pages/Memories.tsx-111-125 (1)
111-125: ⚠️ Potential issue | 🟡 Minor

Dead ternary in person-grouping branch.

`person.name ? doc : doc` always evaluates to `doc`, so the ternary is a no-op and the trailing comment is misleading. Just push `doc`.

🔧 Proposed fix

```diff
     } else {
       // Group by person
       for (const doc of filteredDocs) {
         if (doc.people.length === 0) {
           const key = 'No People'
           if (!groups[key]) groups[key] = []
           groups[key].push(doc)
         } else {
           for (const person of doc.people) {
             if (!groups[person.name]) groups[person.name] = []
-            groups[person.name].push(person.name ? doc : doc) // same doc in multiple groups
+            groups[person.name].push(doc) // same doc may appear in multiple person groups
           }
         }
       }
     }
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/webui/src/pages/Memories.tsx` around lines 111 - 125, In the person-grouping branch inside the loop over filteredDocs, remove the redundant ternary and simply push the document into the group: replace groups[person.name].push(person.name ? doc : doc) with a direct push of doc; ensure you update the logic around variables groups, filteredDocs, doc, and doc.people so each person.name group receives the same doc (no conditional) and remove the misleading comment about multiple groups.

services.py-653-671 (1)
653-671: ⚠️ Potential issue | 🟡 Minor

Add advertised-services cleanup on agent startup failure and resolve the `uv` executable path before launching.

If `Popen` fails, the advertised-services.json written earlier remains on disk, potentially exposing stale service information. Additionally, passing `"uv"` as a partial executable path to subprocess relies on PATH lookup; explicitly resolve it using `shutil.which()` first.

Proposed fix

```diff
+    import shutil
+
     _write_advertised_services(pairs)
     advertise = ",".join(f"{name}:{port}" for name, port, _label in pairs)
     env = dict(os.environ)
     env["ADVERTISE"] = advertise
+
+    uv_bin = shutil.which("uv")
+    if not uv_bin:
+        console.print("[red]❌ uv not found; cannot start discovery agent[/red]")
+        _remove_advertised_services()
+        return False
     log_file = open(_DISCOVERY_LOG, "a")
     try:
         proc = subprocess.Popen(
-            ["uv", "run", "--with", "minidisc-python", "python", str(agent_script)],
+            [uv_bin, "run", "--with", "minidisc-python", "python", str(agent_script)],
             env=env,
             stdout=log_file,
             stderr=subprocess.STDOUT,
             start_new_session=True,
         )
     except Exception as e:
         console.print(f"[red]❌ Failed to start discovery agent: {e}[/red]")
         log_file.close()
+        _remove_advertised_services()
         return False
```
Verify each finding against the current code and only fix it if needed. In `@services.py` around lines 653 - 671, The call to subprocess.Popen in the startup sequence (where _write_advertised_services(pairs) is called and the Popen(...) for ["uv", "run", ...] is created) must first resolve the "uv" executable with shutil.which() and use that full path (or fail fast if not found), and on any exception from subprocess.Popen ensure cleanup: close the open log_file and remove the advertised-services file written by _write_advertised_services (referencing _DISCOVERY_LOG and the file written by _write_advertised_services) before returning False; make these changes in the same block that currently catches Exception e so the advertised-services state cannot remain stale and resources are closed.

backends/advanced/docker-compose.yml-312-314 (1)
312-314: ⚠️ Potential issue | 🟡 Minor

Unused `ollama_data` volume.

The `ollama` service that consumed this volume is commented out (lines 289–302), so the named volume is declared but never attached. Either uncomment the service or drop the volume to avoid confusion.
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/docker-compose.yml` around lines 312 - 314, The declared Docker volume "ollama_data" is unused because the "ollama" service is currently commented out; either remove the "ollama_data" volume from the volumes section to eliminate dead config, or restore the "ollama" service block so it mounts "ollama_data" (ensure the service's volumes entry references "ollama_data"). Locate the "ollama" service and the top-level volumes declaration (look for the symbols "ollama" and "ollama_data") and apply one of those two fixes to keep the compose file consistent.

backends/advanced/src/advanced_omi_backend/services/memory/providers/chronicle.py-732-736 (1)
732-736: ⚠️ Potential issue | 🟡 Minor

Recursive fallback on error will loop on persistent failures.

If `add_memory` raises for a deterministic reason (bad transcript, LLM outage, Neo4j down), the `except` branch simply retries `self.add_memory(...)`, which will hit the same failure and re-raise; nothing catches it here, so the net effect is one retry without backoff. More importantly, if `add_memory` raises after partially writing (e.g., vault file written, Neo4j delete succeeded but insert failed), the retry may double-write. Consider either removing the retry or adding a bounded backoff + idempotency note.
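An illustrative bounded-retry shape, assuming add_memory is safe to re-run; this is not the actual chronicle.py code:

```python
# Illustrative only: a capped retry with exponential backoff replacing the
# unconditional recursive call. Assumes the wrapped coroutine is idempotent.
import asyncio

async def call_with_retry(coro_fn, *args, max_attempts=2, base_delay=1.0, **kwargs):
    for attempt in range(1, max_attempts + 1):
        try:
            return await coro_fn(*args, **kwargs)
        except Exception:
            if attempt == max_attempts:
                raise  # surface persistent failures instead of looping
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))
```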
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/memory/providers/chronicle.py` around lines 732 - 736, The except block currently retries by calling self.add_memory(...) unboundedly which can loop or double-write on deterministic or partial-failure cases; replace this recursive retry with a bounded, idempotent retry strategy or remove the retry: e.g., stop calling self.add_memory directly from the except, or implement a controlled retry loop with a max_attempts and exponential backoff inside the caller that calls add_memory (use a unique idempotency key for add_memory so repeated attempts won't double-write), and ensure memory_logger.error logs the failure and the chosen action (no-retry or retry-attempts exhausted) instead of invoking add_memory again; reference the existing symbols self.add_memory and memory_logger when making the change.

backends/advanced/src/advanced_omi_backend/services/knowledge_graph/service.py-574-599 (1)
574-599: ⚠️ Potential issue | 🟡 Minor

People dedup is case-sensitive — may leave duplicates.

`seen_names` uses the raw `name` string, so `"Alice"` and `"alice"` (or `"Alice "`) both pass through. Given names come from the LLM-generated conversation doc, casing/whitespace variants are realistic. Consider normalizing the dedup key:

♻️ Suggested

```diff
-        seen_names = set()
-        unique_people = []
-        for p in people:
-            if p["name"] not in seen_names:
-                seen_names.add(p["name"])
-                unique_people.append(p)
+        seen_names = set()
+        unique_people = []
+        for p in people:
+            key = (p["name"] or "").strip().lower()
+            if key and key not in seen_names:
+                seen_names.add(key)
+                unique_people.append(p)
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/src/advanced_omi_backend/services/knowledge_graph/service.py` around lines 574 - 599, The dedup logic is case-sensitive and doesn't trim whitespace, so replace the raw name check with a normalized key: for each person in the loop that builds unique_people, compute key = (p.get("name") or "").strip().lower() and use that key for seen_names membership/addition while still appending the original p to unique_people; ensure null/empty names are skipped as currently done (the people list comprehension can remain), and update references to seen_names/unique_people in the loop accordingly.

backends/advanced/docker-compose.yml-189-191 (1)
189-191: ⚠️ Potential issue | 🟡 Minor

`VITE_BACKEND_URL` fallback breaks when `HOST_IP` is unset.

`${VITE_BACKEND_URL-http://${HOST_IP}:${BACKEND_PUBLIC_PORT:-8000}}` uses `-` (unset only), not `:-` (unset or empty). If `VITE_BACKEND_URL` is exported as an empty string (common in CI or when a `.env` carries `VITE_BACKEND_URL=`), the fallback is skipped and the dev server gets `""`. Also, when `HOST_IP` itself is unset, the URL becomes `http://:8000`, which the browser will refuse.

♻️ Suggested

```diff
-      - VITE_BACKEND_URL=${VITE_BACKEND_URL-http://${HOST_IP}:${BACKEND_PUBLIC_PORT:-8000}}
+      - VITE_BACKEND_URL=${VITE_BACKEND_URL:-http://${HOST_IP:-localhost}:${BACKEND_PUBLIC_PORT:-8000}}
```
Verify each finding against the current code and only fix it if needed. In `@backends/advanced/docker-compose.yml` around lines 189 - 191, Update the VITE_BACKEND_URL environment expansion to use the ":-" fallback operator so an exported empty VITE_BACKEND_URL will fall back, and also provide a fallback for HOST_IP so the composed URL never has a missing host; adjust the expression that references VITE_BACKEND_URL, HOST_IP, and BACKEND_PUBLIC_PORT (the VITE_BACKEND_URL assignment in docker-compose.yml) to use ":-" for VITE_BACKEND_URL and a default for HOST_IP.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4359b506-90a4-4b0e-a104-4c21812d4557
⛔ Files ignored due to path filters (2)
- `app/package-lock.json` is excluded by `!**/package-lock.json`
- `backends/advanced/uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (99)
- .easignore
- .gitignore
- app/.easignore
- app/app.json
- app/app/_layout.tsx
- app/app/diagnostics.tsx
- app/app/index.tsx
- app/eas.json
- app/metro.config.js
- app/package.json
- app/src/components/ErrorBoundary.tsx
- app/src/contexts/ConnectionLogContext.tsx
- app/src/hooks/useAudioStreamer.ts
- app/src/utils/logger.ts
- backends/advanced/Dockerfile
- backends/advanced/docker-compose-test.yml
- backends/advanced/docker-compose.yml
- backends/advanced/init.py
- backends/advanced/pyproject.toml
- backends/advanced/src/advanced_omi_backend/app_config.py
- backends/advanced/src/advanced_omi_backend/app_factory.py
- backends/advanced/src/advanced_omi_backend/auth.py
- backends/advanced/src/advanced_omi_backend/controllers/system_controller.py
- backends/advanced/src/advanced_omi_backend/controllers/websocket_controller.py
- backends/advanced/src/advanced_omi_backend/middleware/app_middleware.py
- backends/advanced/src/advanced_omi_backend/openai_factory.py
- backends/advanced/src/advanced_omi_backend/prompt_defaults.py
- backends/advanced/src/advanced_omi_backend/routers/modules/audio_routes.py
- backends/advanced/src/advanced_omi_backend/routers/modules/conversation_routes.py
- backends/advanced/src/advanced_omi_backend/routers/modules/health_routes.py
- backends/advanced/src/advanced_omi_backend/routers/modules/knowledge_graph_routes.py
- backends/advanced/src/advanced_omi_backend/routers/modules/system_routes.py
- backends/advanced/src/advanced_omi_backend/services/knowledge_graph/__init__.py
- backends/advanced/src/advanced_omi_backend/services/knowledge_graph/entity_extractor.py
- backends/advanced/src/advanced_omi_backend/services/knowledge_graph/models.py
- backends/advanced/src/advanced_omi_backend/services/knowledge_graph/queries.py
- backends/advanced/src/advanced_omi_backend/services/knowledge_graph/service.py
- backends/advanced/src/advanced_omi_backend/services/memory/README.md
- backends/advanced/src/advanced_omi_backend/services/memory/__init__.py
- backends/advanced/src/advanced_omi_backend/services/memory/base.py
- backends/advanced/src/advanced_omi_backend/services/memory/config.py
- backends/advanced/src/advanced_omi_backend/services/memory/neo4j_utils.py
- backends/advanced/src/advanced_omi_backend/services/memory/prompts.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/__init__.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/chronicle.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/llm_providers.py
- backends/advanced/src/advanced_omi_backend/services/memory/providers/vector_stores.py
- backends/advanced/src/advanced_omi_backend/services/memory/vault_manager.py
- backends/advanced/src/advanced_omi_backend/services/transcription/streaming_consumer.py
- backends/advanced/src/advanced_omi_backend/utils/audio_chunk_utils.py
- backends/advanced/src/advanced_omi_backend/workers/conversation_jobs.py
- backends/advanced/src/advanced_omi_backend/workers/finetuning_jobs.py
- backends/advanced/src/advanced_omi_backend/workers/memory_jobs.py
- backends/advanced/src/advanced_omi_backend/workers/waveform_jobs.py
- backends/advanced/webui/src/App.tsx
- backends/advanced/webui/src/components/knowledge-graph/PromisesList.tsx
- backends/advanced/webui/src/components/knowledge-graph/index.ts
- backends/advanced/webui/src/components/layout/Layout.tsx
- backends/advanced/webui/src/pages/ConversationDetail.tsx
- backends/advanced/webui/src/pages/Conversations.tsx
- backends/advanced/webui/src/pages/Memories.tsx
- backends/advanced/webui/src/pages/Network.tsx
- backends/advanced/webui/src/services/api.ts
- config/README.md
- config/advertised-services.json
- config/config.yml.template
- config/defaults.yml
- discovery.py
- edge/Dockerfile
- edge/README.md
- edge/agent.py
- edge/install.sh
- edge/run-on-rpi.sh
- edge/status.sh
- edge/stop.sh
- extras/asr-services/docker-compose.yml
- extras/havpe-relay/.env.template
- extras/havpe-relay/docker-compose.yml
- extras/havpe-relay/init.py
- extras/havpe-relay/main.py
- extras/havpe-relay/pyproject.toml
- extras/havpe-relay/relay_core.py
- extras/llm-services/docker-compose.yml
- extras/ml-training/README.md
- extras/speaker-recognition/docker-compose.yml
- plugins/email_summarizer/plugin.py
- plugins/homeassistant/plugin.py
- plugins/hourly_recap/plugin.py
- services.py
- tests/configs/README.md
- tests/configs/deepgram-openai.yml
- tests/configs/mock-services.yml
- tests/configs/mock-transcription-failure.yml
- tests/configs/mock-vibevoice.yml
- tests/configs/parakeet-ollama.yml
- tests/configs/parakeet-openai.yml
- tests/endpoints/health_tests.robot
- tests/endpoints/memory_tests.robot
- tests/integration/integration_test.robot
💤 Files with no reviewable changes (15)
- tests/configs/README.md
- backends/advanced/src/advanced_omi_backend/services/memory/README.md
- backends/advanced/src/advanced_omi_backend/services/memory/providers/llm_providers.py
- backends/advanced/src/advanced_omi_backend/app_config.py
- tests/configs/parakeet-openai.yml
- backends/advanced/webui/src/components/knowledge-graph/index.ts
- config/config.yml.template
- tests/configs/mock-services.yml
- tests/configs/parakeet-ollama.yml
- tests/configs/mock-transcription-failure.yml
- tests/configs/mock-vibevoice.yml
- tests/configs/deepgram-openai.yml
- backends/advanced/webui/src/components/knowledge-graph/PromisesList.tsx
- config/defaults.yml
- backends/advanced/src/advanced_omi_backend/services/memory/providers/vector_stores.py
```python
response = await client.chat.completions.create(
    model=model,
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"Transcript:\n{transcript}"},
    ],
    temperature=0.2,
)
doc_md = response.choices[0].message.content.strip()

# (separate hunk shown in the same diff view)
#     Generates an embedding for the query and searches the vector store
#     for similar memories belonging to the specified user.

# Fallback: if LLM returns non-markdown, store transcript as single chunk
if not doc_md or "###" not in doc_md:
    memory_logger.warning(
        f"LLM returned non-markdown for {source_id}, using fallback"
    )
    doc_md = (
        f"---\nconversation_id: {source_id}\n"
        f"date: {datetime.now(timezone.utc).isoformat()}\n---\n\n"
        f"## Conversation\n\n### Summary\n{transcript[:500]}\n"
    )

# (separate hunk shown in the same diff view)
#     Args:
#         query: Search query text
#         user_id: User identifier to filter memories
#         limit: Maximum number of results to return
#         score_threshold: Minimum similarity score (0.0 = no threshold)

return doc_md
```
Guard against `None` content and add a timeout to LLM doc generation.

Two issues in `_generate_conversation_doc`:

- `response.choices[0].message.content` can be `None` (e.g., finish_reason=length, content-filter, tool-call response), which will raise `AttributeError: 'NoneType' object has no attribute 'strip'` and fail the whole memory job — the fallback branch is unreachable because it's after the `.strip()`.
- Unlike the embeddings call (which is wrapped in `asyncio.wait_for(..., timeout=self.config.timeout_seconds)`), this chat completion has no timeout. A hung LLM call will indefinitely block the memory worker processing this conversation.
🛡️ Proposed fix
```diff
-        response = await client.chat.completions.create(
-            model=model,
-            messages=[
-                {"role": "system", "content": system_prompt},
-                {"role": "user", "content": f"Transcript:\n{transcript}"},
-            ],
-            temperature=0.2,
-        )
-        doc_md = response.choices[0].message.content.strip()
+        response = await asyncio.wait_for(
+            client.chat.completions.create(
+                model=model,
+                messages=[
+                    {"role": "system", "content": system_prompt},
+                    {"role": "user", "content": f"Transcript:\n{transcript}"},
+                ],
+                temperature=0.2,
+            ),
+            timeout=self.config.timeout_seconds,
+        )
+        raw_content = response.choices[0].message.content if response.choices else None
+        doc_md = (raw_content or "").strip()
```
Verify each finding against the current code and only fix it if needed.
In
`@backends/advanced/src/advanced_omi_backend/services/memory/providers/chronicle.py`
around lines 265 - 286, The call to client.chat.completions.create in
_generate_conversation_doc can return a response with
response.choices[0].message.content == None and there is no timeout; fix by
wrapping the await client.chat.completions.create(...) call in
asyncio.wait_for(..., timeout=self.config.timeout_seconds) and catch
asyncio.TimeoutError and general exceptions, logging via memory_logger and
falling back to the transcript; after the awaited call, safely extract content
by checking response.choices[0].message and response.choices[0].message.content
for None before calling .strip() (e.g., assign content =
response.choices[0].message.content or "" then strip) so the existing fallback
branch (using memory_logger.warning and the transcript-based doc_md) is
reachable when content is None or empty.
### `autoresearch-asr/`

Autonomous LoRA fine-tuning loop for VibeVoice-ASR, adapted from [karpathy/autoresearch](https://github.com/karpathy/autoresearch). Give an AI agent the training setup and let it experiment overnight on Google Colab.

- `prepare.py` - Fixed data loading, model caching, train/val/test split (DO NOT MODIFY)
- `evaluate.py` - Fixed evaluation harness: WER + SWER + boundary MAE (DO NOT MODIFY)
- `train.py` - The file the agent modifies: LoRA config, hyperparams, curriculum
- `program.md` - Agent instructions for the autonomous experiment loop
- `export_data.py` - Export training data from Chronicle API to VibeVoice format

See `autoresearch-asr/program.md` for full usage.
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Verify that autoresearch-asr/ directory and referenced files exist

# Check if the directory exists
if [ -d "extras/ml-training/autoresearch-asr" ]; then
  echo "✓ Directory exists: extras/ml-training/autoresearch-asr/"
  # Check for each referenced file
  cd extras/ml-training/autoresearch-asr
  for file in prepare.py evaluate.py train.py program.md export_data.py; do
    if [ -f "$file" ]; then
      echo "✓ Found: $file"
    else
      echo "✗ Missing: $file"
    fi
  done
else
  echo "✗ Directory not found: extras/ml-training/autoresearch-asr/"
fi
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 131
🏁 Script executed:
```bash
# Search for the referenced files anywhere in the repository
echo "=== Searching for referenced files ==="
fd -t f "prepare\.py|evaluate\.py|train\.py|program\.md|export_data\.py" .

echo -e "\n=== Checking extras/ml-training/ directory structure ==="
ls -la extras/ml-training/ 2>/dev/null || echo "Directory not found"

echo -e "\n=== Searching for autoresearch-asr directory ==="
fd -t d "autoresearch-asr" .

echo -e "\n=== Full structure of extras/ml-training/ ==="
find extras/ml-training -type f -o -type d 2>/dev/null | head -50
```

Repository: SimpleOpenSoftware/chronicle
Length of output: 1325
Remove the autoresearch-asr/ section or add the missing directory and files.
The documentation references a non-existent autoresearch-asr/ directory and five files (prepare.py, evaluate.py, train.py, program.md, export_data.py) that do not exist in the repository. The repository currently contains only event-detection/ and whisper-adapter-finetuning/ directories under extras/ml-training/. Either implement this feature and commit the missing files, or remove this section from the README until the implementation is ready.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@extras/ml-training/README.md` around lines 27 - 38, The README references a
missing autoresearch-asr/ module; either remove the entire `autoresearch-asr/`
section from extras/ml-training/README.md or add the missing directory
`autoresearch-asr/` with the five files `prepare.py`, `evaluate.py`, `train.py`,
`program.md`, and `export_data.py` (implementations or stubs) and update the
README to reflect the actual state; ensure filenames match exactly
(`prepare.py`, `evaluate.py`, `train.py`, `program.md`, `export_data.py`) and
that `program.md` contains the usage note referenced at the end of the current
section.
- Updated environment variables and configuration files to replace Neo4j settings with FalkorDB.
- Modified Docker Compose files to use FalkorDB services instead of Neo4j.
- Adjusted application logic and routes to accommodate FalkorDB for entity management and knowledge graph functionalities.
- Introduced new graph client utilities for FalkorDB integration.
- Updated documentation to reflect changes in the architecture and service dependencies.
| Metric | Count |
|---|---|
| ✅ Passed | 123 |
| ❌ Failed | 3 |
| 📊 Total | 126 |
📊 View Reports
GitHub Pages (Live Reports):
Download Artifacts:
- robot-test-reports-html-no-api - HTML reports
- robot-test-results-xml-no-api - XML output
Summary
`disableHierarchicalLookup` removed), disabled `expo-updates`, added file-based crash logging, fixed index.tsx crash loop

Commits (25)
Test plan
🤖 Generated with Claude Code
Summary by CodeRabbit
Release Notes
New Features
Improvements
Backend Changes