The AI chat system has been enhanced to automatically use the new lexical search engine for context retrieval. This provides more accurate, relevant responses by leveraging your existing notes.
- sendMessage(): Now automatically uses aiChatWithContext() for all regular chat messages
- sendCommand(): Intelligently routes questions to search-powered AI vs. tool-based processing
- No user intervention required: Context retrieval happens transparently
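The transparent flow described above can be sketched roughly as follows. This is a minimal illustration, not the store's actual code; `legacyChat` and the return values are assumed names for the original non-context path.

```typescript
// Hypothetical sketch: sendMessage() retrieves context transparently via
// search unless the caller supplies explicit context.

async function aiChatWithContext(content: string): Promise<string> {
  // In the real store this invokes the search-backed backend command.
  return `answer to "${content}" with note context`;
}

// Assumed name for the original, non-search chat path.
async function legacyChat(content: string, context: string): Promise<string> {
  return `answer to "${content}" using provided context`;
}

async function sendMessage(
  content: string,
  explicitContext?: string,
): Promise<string> {
  // Explicit context bypasses retrieval and uses the original path.
  if (explicitContext !== undefined) {
    return legacyChat(content, explicitContext);
  }
  // Otherwise, context retrieval happens transparently.
  return aiChatWithContext(content);
}
```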
Commands are automatically categorized:
Search-Powered (Context-aware):
- Questions: "What did I write about...?", "How do I...?", "Explain..."
- Information requests: "Tell me about...", "Summarize...", "Find..."
- Research queries: "Show me...", "Where did I mention...?"
Tool-Based (Action-oriented):
- Creation: "Create a note...", "Add a tag...", "Generate..."
- Updates: "Update...", "Modify...", "Change..."
- Management: "Delete...", "Remove...", "Extract todos..."
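The categorization above implies a routing heuristic along these lines. The keyword lists and the `classifyCommand` name are illustrative assumptions, not the actual implementation:

```typescript
// Hypothetical routing sketch: question-like commands go to search-powered
// chat, action-like commands go to the tool system.
const QUESTION_PREFIXES = [
  "what", "how", "explain", "tell me", "summarize", "find", "show me", "where",
];
const ACTION_PREFIXES = [
  "create", "add", "generate", "update", "modify", "change",
  "delete", "remove", "extract",
];

type Route = "search" | "tools";

function classifyCommand(command: string): Route {
  const normalized = command.trim().toLowerCase();
  // Action prefixes win first so "Create a note about X?" still runs a tool.
  if (ACTION_PREFIXES.some((p) => normalized.startsWith(p))) return "tools";
  if (
    QUESTION_PREFIXES.some((p) => normalized.startsWith(p)) ||
    normalized.endsWith("?")
  ) {
    return "search";
  }
  return "tools"; // default: treat unrecognized commands as actions
}
```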
- Context-aware badge: Shows when search was used
- Token usage tracking: Displays estimated token consumption
- Provider identification: Shows which AI model was used
- Timestamp support: Optional message timing
- Cached results: 5-minute TTL for repeated queries
- Debounced updates: Smooth streaming with minimal re-renders
- Token estimation: Tracks usage even for search-powered responses
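The 5-minute result cache can be sketched as a simple TTL map. The class and field names here are illustrative, not the actual cache implementation:

```typescript
// Minimal TTL cache sketch for repeated query results (default 5 minutes).
interface CacheEntry<T> {
  value: T;
  expiresAt: number;
}

class TTLCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private ttlMs: number = 5 * 60 * 1000) {}

  get(key: string): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.entries.delete(key); // expired: evict lazily on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }
}
```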
```
// New command endpoints
ai_chat_with_context(query, search_query?, tags?) -> String
get_relevant_context(query, tags?, max_chunks?) -> QueryContext

// Enhanced AI store methods
sendMessage(content: string, explicitContext?: string)
sendCommand(command: string, explicitContext?: string)

// Search API integration
import { aiChatWithContext, getRelevantContext } from '@/api/search'
```

- Notes are automatically indexed on create/update/delete
- Search service initializes with all existing notes
- Incremental updates maintain index consistency
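Keeping the index in sync with note CRUD might be wired up like this. `SearchIndex`, `onNoteSaved`, and `onNoteDeleted` are assumed names for illustration, not the store's real hooks:

```typescript
// Sketch: store hooks mirror every note change into the search index.
interface Note {
  id: string;
  content: string;
}

class SearchIndex {
  private docs = new Map<string, string>();

  upsert(note: Note): void {
    this.docs.set(note.id, note.content); // create or update in place
  }

  remove(id: string): void {
    this.docs.delete(id);
  }

  size(): number {
    return this.docs.size;
  }
}

const searchIndex = new SearchIndex();

function onNoteSaved(note: Note): void {
  searchIndex.upsert(note); // covers both create and update
}

function onNoteDeleted(id: string): void {
  searchIndex.remove(id);
}
```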
```
// User asks: "What are my thoughts on React performance?"
// System automatically:
// 1. Searches notes for relevant content about React performance
// 2. Retrieves top-ranked chunks using BM25
// 3. Provides AI response with context from your notes

// Question → Search-powered
await sendCommand("How do I implement authentication?")
// → Uses aiChatWithContext() for contextual answer

// Action → Tool-based
await sendCommand("Create a note about authentication")
// → Uses original tool system for note creation

// Override automatic behavior with explicit context
await sendMessage("Explain this code", explicitCodeContext)
await sendCommand("Analyze this", explicitAnalysisContext)
```

- Max context tokens: 4000 (configurable)
- Cache TTL: 5 minutes
- Index chunk size: 1000 tokens with 100 token overlap
- BM25 parameters: k1=1.2, b=0.75
- Automatic context: Enabled by default
- Token estimation: 4 characters ≈ 1 token
- Fallback behavior: Original API for explicit context
- Error handling: Graceful degradation to non-context mode
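The BM25 parameters listed above (k1=1.2, b=0.75) plug into the standard scoring formula, sketched below. This is a textbook illustration of per-term BM25 scoring, not the engine's actual code:

```typescript
// Standard BM25 per-term score with the configured parameters.
const K1 = 1.2;  // term-frequency saturation
const B = 0.75;  // document-length normalization strength

function bm25Score(
  termFreq: number,     // occurrences of the term in the document
  docLen: number,       // document length in tokens
  avgDocLen: number,    // average document length across the index
  docCount: number,     // total number of documents
  docsWithTerm: number, // number of documents containing the term
): number {
  // Smoothed IDF: rarer terms contribute more weight.
  const idf = Math.log(
    1 + (docCount - docsWithTerm + 0.5) / (docsWithTerm + 0.5),
  );
  // Saturating term frequency, normalized by relative document length.
  const norm =
    (termFreq * (K1 + 1)) /
    (termFreq + K1 * (1 - B + B * (docLen / avgDocLen)));
  return idf * norm;
}
```

A document's score for a query is the sum of `bm25Score` over the query terms; the top-ranked chunks then become the AI context.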
- AISearchChat: Full-featured search + AI interface
- SearchIntegrationStatus: Shows search system status
- EnhancedAIChatMessage: Message component with context indicators
- aiStore.ts: Updated with search integration
- notesStore.ts: Automatic indexing on CRUD operations
- Contextual Accuracy: Responses based on your actual notes and knowledge
- Transparent Operation: Works automatically without user configuration
- Performance: Fast lexical search with BM25 ranking
- Caching: Repeated queries served from cache
- Flexibility: Maintains original functionality for explicit contexts
- Visual Feedback: Clear indicators when search is used
The integration is backward compatible:
- Existing code continues to work unchanged
- New search functionality is opt-in via automatic detection
- Manual context provision still supported
- All original AI features remain functional
Planned future enhancements:
- Tag-based filtering: Automatic tag detection in queries
- Date range filtering: Time-aware context retrieval
- Conversation memory: Multi-turn context awareness
- Performance analytics: Search timing and relevance metrics
- User feedback loop: Learn from response quality ratings