
Commit c65492d

Alexandre Oliveira and claude committed
docs(turing): add Chat page with full interface documentation
- New chat.md: layout diagram (header, message area, input, context bar), header controls table, context bar colour thresholds (yellow >60%, red >80%), Tab 1 direct LLM chat (attachments, SSE streaming, 7 tools table), Tab 2 Semantic Navigation (4 tools + MCP Servers), Tab 3 AI Agents (dynamic tabs, per-agent config table), rich content rendering table (Markdown/code/D2/HTML sandbox/generated files), session history (IndexedDB, auto-title, model badge, restore/delete), context window management with compact tip, file attachments reference, REST API endpoints
- sidebars-turing.ts: add chat between genai-llm and token-usage
- genai-llm.md: cross-reference Chat page from AI Agents section

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
1 parent 08d7b62 commit c65492d

3 files changed

Lines changed: 240 additions & 1 deletion


docs-turing/chat.md

Lines changed: 238 additions & 0 deletions
@@ -0,0 +1,238 @@
---
sidebar_position: 3
title: Chat
description: Use the Turing ES Chat interface to interact with LLMs, AI Agents, and indexed content.
---

# Chat

The **Chat** interface is the primary way users interact with the AI capabilities of Turing ES. It is organized into three tabs: a direct **LLM chat**, a **Semantic Navigation** chat for searching indexed sites, and dynamic **AI Agent** tabs — one per configured and enabled agent.

:::info LLM required
The Chat interface is only available when at least one LLM Instance is configured and enabled. See [Generative AI & LLM Configuration — LLM Providers](./genai-llm.md#llm-providers) to set one up.
:::

---

## Layout

```mermaid
graph TD
  A[Header] --> B[Tab navigation — Chat · Semantic Nav · AI Agents]
  A --> C[LLM model selector]
  A --> D[New Chat button]
  A --> E[Dark mode toggle]
  A --> F[Session history button]
  G[Message area] --> H[User messages with avatar]
  G --> I[Assistant messages with avatar + copy button]
  G --> J[Rich content — Markdown, code, D2 diagrams, HTML sandbox]
  K[Input area] --> L[Multiline textarea]
  K --> M[File drag-and-drop]
  K --> N[Send button — Enter to submit]
  O[Context Bar] --> P[Token counter]
  O --> Q[Context window % bar]
  O --> R[Compact button]
```

**Header controls:**

| Control | Description |
|---|---|
| **Tab navigation** | Switch between Chat, Semantic Navigation, and AI Agent tabs |
| **LLM model selector** | Choose which configured LLM instance to use for the session |
| **New Chat** | Start a fresh session (saves the current one to history automatically) |
| **Dark mode toggle** | Switch between light and dark themes — code highlighting adapts accordingly |
| **Session history** | Opens the sessions sidebar to browse, restore, or delete previous conversations |

**Context Bar** — displayed below the message area:

| Indicator | Behaviour |
|---|---|
| **Token counter** | Running estimate of tokens used in the current context (~4 chars per token) |
| **% bar** | Visual fill showing context window usage; turns **yellow above 60%** and **red above 80%** |
| **Compact button** | Summarises the conversation to free up context space |

---

## Tab 1 — Chat (Direct LLM)

A general-purpose chat with the selected LLM. This tab provides the most direct access to the underlying model, with a set of optional tools the user can enable per conversation.

### File Attachments

Files can be added to the conversation via drag-and-drop or the file picker:

| File type | How it's handled |
|---|---|
| **Documents** (PDF, DOCX, XLSX, PPTX, HTML, TXT, …) | Text extracted via **Apache Tika** and included in the prompt as context |
| **Images** (PNG, JPEG, WebP, GIF, …) | Sent directly as media to models with vision capability |

Attached files are displayed as badges on the message they are sent with.

### Streaming

Responses are streamed in real time using **Server-Sent Events (SSE)**, so content appears progressively as the model generates it — no waiting for the full response.
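On the client, an SSE stream arrives as chunks of `data:` framed text. A minimal sketch of a frame parser (the `data:` framing is standard SSE; the streaming endpoint and payload shape shown in the usage comment are assumptions, not the documented API):

```typescript
// Minimal SSE frame parser: extracts the `data:` payloads carried by a
// raw chunk of an event stream. Standard SSE framing; treating payloads
// as plain text deltas is an assumption for illustration.
function parseSseChunk(chunk: string): string[] {
  return chunk
    .split("\n\n") // events are separated by a blank line
    .flatMap((event) => event.split("\n"))
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice(5).trimStart());
}

// Hypothetical usage against a streaming chat endpoint:
// const res = await fetch("/api/chat/stream", { method: "POST", body: payload });
// const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
// for (;;) {
//   const { value, done } = await reader.read();
//   if (done) break;
//   for (const delta of parseSseChunk(value)) render(delta);
// }
```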
### Available Tools

The following tools can be enabled for the Chat tab. The LLM invokes them autonomously during a conversation when it determines they are needed:

| Tool | Description |
|---|---|
| **Code Interpreter** | Executes Python code in a sandboxed environment. Supports Matplotlib for charts. Timeout: 30 seconds. Generated files (e.g., charts, CSVs) are returned as download links. |
| **Web Crawler** | Fetches and extracts content from a web page. Max 12,000 characters per page, up to 30 links extracted. |
| **Image Search** | Searches for images via DuckDuckGo / Bing. Returns up to 8 results. |
| **Weather** | Returns weather forecasts for 1–7 days using [Open-Meteo](https://open-meteo.com). |
| **Finance** | Retrieves stock quotes and historical price data via Yahoo Finance. |
| **Date / Time** | Returns the current date and time for any given timezone. |
| **RAG Search** | Searches the Knowledge Base (vector store) by semantic similarity. Also provides: knowledge base statistics, file listing with optional keyword filter, and full file content retrieval. |

---

## Tab 2 — Semantic Navigation

A chat interface backed by the indexed content of **Semantic Navigation Sites**. Instead of querying the LLM's parametric knowledge, this tab sends the user's question through site-specific search tools.

The system prompt for this tab includes locale instructions and available facets for each configured site.

**Tools available in this tab:**

| Tool | Description |
|---|---|
| `list_sites` | Lists all available Semantic Navigation sites and their locales |
| `get_site_fields` | Returns available fields and facets for a specific site |
| `get_valid_filter_values` | Returns valid values for a filter or facet field |
| `search_site` | Performs a semantic search within a site and returns results |

Any **MCP Servers** configured in Administration are also available in this tab, extending the tool set with external capabilities.

:::tip
Use this tab when you want answers grounded exclusively in your indexed enterprise content, rather than the LLM's general knowledge.
:::
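A typical grounded exchange chains these tools: discover the sites, learn their fields, then search. The sketch below uses illustrative stubs only — the real tools are invoked by the LLM server-side, and their request/response shapes are not documented here:

```typescript
// Illustrative stubs; every data shape here is an assumption, not the
// real tool contract. They only demonstrate the call sequence.
type Site = { name: string; locale: string };

const listSites = (): Site[] => [{ name: "Sample", locale: "en_US" }];

const getSiteFields = (site: string): string[] =>
  site === "Sample" ? ["title", "category", "modificationDate"] : [];

const searchSite = (site: string, query: string, facet?: string): string[] =>
  // A real call would hit the semantic index; this returns a canned hit.
  [`[${site}] result for "${query}"${facet ? ` filtered by ${facet}` : ""}`];

// Typical sequence the model follows to ground an answer:
const site = listSites()[0];                          // 1. discover sites
const fields = getSiteFields(site.name);              // 2. learn fields/facets
const hits = searchSite(site.name, "main features");  // 3. semantic search
```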
---

## Tab 3 — AI Agents (Dynamic Tabs)

Each **AI Agent** configured and enabled in **Administration → AI Agents** appears as its own tab in the Chat interface. Agents are completely independent of one another — each has its own LLM, system prompt, tool set, and visual identity.

| Per-agent setting | Description |
|---|---|
| **Name** | The tab label and agent display name |
| **Avatar** | Image shown in the chat alongside agent messages |
| **System Prompt** | The agent's persona, purpose, and behavioural instructions |
| **LLM Instance** | The specific language model powering this agent (must be valid and enabled) |
| **Native Tools** | A selection from the 27 native Tool Callings (code interpreter, search, weather, finance, etc.) |
| **MCP Servers** | External tool servers connected specifically to this agent |

For full configuration details — composing agents, tool selection, and MCP Server registration — see [Generative AI & LLM Configuration — AI Agents](./genai-llm.md#ai-agents).

---

## Rich Content Rendering

Chat responses are rendered with full media-type awareness:

| Content type | Rendering |
|---|---|
| **Markdown** | Full GitHub Flavored Markdown — tables, strikethrough, task lists, inline code, blockquotes |
| **Code blocks** | Syntax highlighting via highlight.js with automatic light/dark theme switching |
| **D2 diagrams** | Rendered to SVG via WASM; falls back to a dev server in development mode |
| **HTML** | Sandboxed preview in an isolated iframe — toggle between rendered view and source, with fullscreen option |
| **Generated files** | Files created by the Code Interpreter (charts, processed data, etc.) are shown as download links inline in the response |

---

## Session History

Chat sessions are stored locally in the browser's **IndexedDB** — they are not sent to the server. This means:

- Sessions are **per browser and per device** — clearing browser data removes them
- No user authentication is required to access past sessions
- Session data never leaves the user's machine

**Session sidebar features:**

| Feature | Description |
|---|---|
| **Auto-title** | A short title is generated by the LLM from the first exchange; falls back to the first message text if generation fails |
| **Model badge** | Shows which LLM model was used for the session |
| **Message count** | Number of messages in the session |
| **Timestamp** | Date and time of the last message |
| **Restore** | Click to resume a previous session |
| **Delete** | Remove a session from history |

Sessions are saved automatically after each complete response.
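A session record and the auto-title fallback could be modelled along these lines. This is a hedged sketch: the actual IndexedDB schema is internal, and the field names and 40-character cut-off are assumptions:

```typescript
// Hypothetical session record; the real stored schema is internal.
interface ChatSession {
  id: string;
  title: string;      // LLM-generated, or the fallback below
  model: string;      // shown as the model badge in the sidebar
  messages: { role: "user" | "assistant"; text: string }[];
  updatedAt: number;  // timestamp of the last message
}

// Fallback title when LLM title generation fails: truncate the first
// user message. The 40-character limit is an assumed value.
function fallbackTitle(session: ChatSession, max = 40): string {
  const first =
    session.messages.find((m) => m.role === "user")?.text ?? "New chat";
  return first.length <= max ? first : first.slice(0, max - 1) + "…";
}
```

Persistence itself would go through the browser's `indexedDB.open(...)` and object-store `put` calls; that part is environment-specific and omitted here.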
---

## Context Window Management

A visual indicator shows how much of the model's context window is currently in use.

```
Context usage: ████████░░░░░░░░░░░░ 42%
```

| Feature | Description |
|---|---|
| **Token estimation** | ~4 characters per token (fast client-side estimate) |
| **Context window size** | Obtained from the LLM provider's response metadata or the LLM Instance configuration |
| **Bar colour** | Green → yellow at 60% → red at 80% |
| **Compact button** | Summarises the current conversation using the LLM to free up context space while preserving the key information |

:::tip When to compact
When the context bar turns red (above 80%), click **Compact** before the limit is reached. Compacting summarises the full history into a concise context block and continues the conversation from there, freeing up significant space without losing the thread.
:::
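The estimate and thresholds described above are simple to reproduce client-side. A minimal sketch — the real UI's rounding and exact boundary behaviour are assumptions:

```typescript
// ~4 characters per token: a fast client-side estimate, not a tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Bar colour thresholds from the table: yellow above 60%, red above 80%.
// Boundary handling (strict ">") is an assumption.
function barColour(
  usedTokens: number,
  contextWindow: number
): "green" | "yellow" | "red" {
  const ratio = usedTokens / contextWindow;
  if (ratio > 0.8) return "red";
  if (ratio > 0.6) return "yellow";
  return "green";
}
```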
---

## Files & Attachments Reference

| Capability | Detail |
|---|---|
| **Upload method** | Drag-and-drop onto the chat window, or click the attachment button |
| **Transfer format** | Multipart form (files sent together with the message) |
| **Document processing** | Apache Tika extracts text from PDF, DOCX, XLSX, PPTX, HTML, TXT, and more |
| **Image processing** | Passed directly as media bytes to vision-capable models |
| **Display** | Shown as file badges on the sent message bubble |
---

## API — Chat Endpoints

The chat features are also accessible programmatically for integration into custom applications.

### Semantic Navigation Chat

```
GET /api/sn/{siteName}/chat?q={question}
```

Sends a question to the RAG pipeline for a Semantic Navigation Site.

**Example:**

```bash
curl "http://localhost:2700/api/sn/Sample/chat?q=What+are+the+main+features?" \
  -H "Key: <YOUR_API_TOKEN>"
```

### AI Agent Chat

```
POST /api/v2/llm/agent/{agentId}/chat
```

Sends a message to a specific AI Agent.

```bash
curl -X POST "http://localhost:2700/api/v2/llm/agent/my-agent/chat" \
  -H "Key: <YOUR_API_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{ "message": "Summarise the latest quarterly report" }'
```
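The same agent call can be issued from TypeScript. A sketch mirroring the curl example above — the base URL and token are placeholders, and splitting out a request builder is a design choice here, not part of the documented API:

```typescript
// Build the request for the AI Agent chat endpoint, mirroring the curl
// example. Base URL and token are placeholders.
function buildAgentRequest(
  agentId: string,
  message: string
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    url: `http://localhost:2700/api/v2/llm/agent/${agentId}/chat`,
    init: {
      method: "POST",
      headers: {
        "Key": "<YOUR_API_TOKEN>",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ message }),
    },
  };
}

// Hypothetical usage:
// const { url, init } = buildAgentRequest("my-agent", "Summarise the latest quarterly report");
// const res = await fetch(url, init);
```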
---

*Previous: [Generative AI & LLM Configuration](./genai-llm.md) | Next: [Token Usage](./token-usage.md)*

docs-turing/genai-llm.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -315,7 +315,7 @@ MCP Servers are configured in **Administration → MCP Servers**.

 ## AI Agents

-An **AI Agent** is the central composition object in Turing ES's GenAI system. It combines a specific LLM Instance, a selected set of Tool Callings, and a set of MCP Servers into a single, named, deployable assistant. Each AI Agent has its own personality, capability set, and visual identity, and appears as a **separate tab in the Chat interface** for users to interact with.
+An **AI Agent** is the central composition object in Turing ES's GenAI system. It combines a specific LLM Instance, a selected set of Tool Callings, and a set of MCP Servers into a single, named, deployable assistant. Each AI Agent has its own personality, capability set, and visual identity, and appears as a **separate tab in the Chat interface** for users to interact with. See the [Chat](./chat.md) page for the full interface documentation.

 AI Agents are configured in **Administration → AI Agents**.
```

sidebars-turing.ts

Lines changed: 1 addition & 0 deletions
```diff
@@ -33,6 +33,7 @@ const sidebars: SidebarsConfig = {
       label: "Generative AI",
       items: [
         "genai-llm",
+        "chat",
         "token-usage",
       ],
     },
```
