
Commit 7a11e2d

Update README and address Docker user change
Signed-off-by: gopal-raj-suresh <gopal.raj.dummugudupu@cloud2labs.com>
1 parent a957bdd commit 7a11e2d

3 files changed: 39 additions & 27 deletions

sample_solutions/Docugen-Microagents/README.md

Lines changed: 36 additions & 20 deletions

@@ -12,7 +12,7 @@ Documentation Generator (DocuGen) is an enterprise-grade, AI-powered documentati
 - [Quick Start](#quick-start)
 - [User Interface](#user-interface)
 - [Troubleshooting](#troubleshooting)
-- [Additional Info](#additional-info)
+- [Performance Metrics](#performance-metrics)
 
 ---
 

@@ -236,9 +236,20 @@ The system includes 11 specialized repository analysis tools implemented in `api
 
 Before you begin, ensure you have the following installed:
 
-- **Docker and Docker Compose**
+- **Docker and Docker Compose** (required for running the application containers)
+- **Docker daemon running** (required for PR Agent's GitHub MCP server container)
 - **Enterprise inference endpoint access** (token-based authentication)
 
+### Required Model
+
+This application requires the following model to be deployed on your inference endpoint:
+
+- **Qwen/Qwen3-4B-Instruct-2507** - Small language model optimized for Intel Xeon processors with 8K context window
+
+All nine AI agents (Code Explorer, API Reference, Call Graph, Error Analysis, Environment Config, Dependency Analyzer, Planner, Mermaid Generator, and QA Validator) use this model for efficient documentation generation.
+
+**Note:** This model must be available through your GenAI Gateway or APISIX Gateway deployment before running the application.
+
 ### Required API Configuration
 
 **For Inference Service (Documentation Generation):**

@@ -260,7 +271,6 @@ This application supports multiple inference deployment patterns:
 - **INFERENCE_API_ENDPOINT**: URL to your inference service (example: `https://api.example.com`)
 - **INFERENCE_API_TOKEN**: Authentication token/API key for your chosen service
 
-**Note:** All nine AI agents (Code Explorer, API Reference, Call Graph, Error Analysis, Env Config, Dependency Analyzer, Planner, Mermaid Generator, QA Validator) plus PR Agent use Qwen/Qwen3-4B-Instruct-2507 optimized for Intel Xeon processors
 
 ### Local Development Configuration
 

@@ -307,8 +317,8 @@ docker ps
 ### Clone the Repository
 
 ```bash
-git clone https://github.com/cld2labs/GenAISamples.git
-cd GenAISamples/Docugen-Microagents
+git clone https://github.com/opea-project/Enterprise-Inference.git
+cd Enterprise-Inference/sample_solutions/Docugen-Microagents
 ```
 
 ### Set up the Environment

@@ -324,10 +334,26 @@ This application requires **two `.env` files** for proper configuration:
 # From the Docugen-Microagents directory
 cat > .env << EOF
 # Docker Compose Configuration
+
+# Local URL Endpoint (only needed for non-public domains)
+# If using a local domain like api.example.com mapped to localhost, set to the domain without https://
+# Otherwise, set to: not-needed
 LOCAL_URL_ENDPOINT=not-needed
+
+BACKEND_PORT=8000
+FRONTEND_PORT=3000
+
 EOF
 ```
 
+OR
+
+Copy from the example file and edit with your credentials as required.
+
+```bash
+cp .env.example .env
+```
+
 **Note:** If using a local domain (e.g., `api.example.com` mapped to localhost), replace `not-needed` with your domain name (without `https://`).
 
 #### Step 2: Create `api/.env` File

@@ -363,10 +389,6 @@ Or manually create `api/.env` with:
 INFERENCE_API_ENDPOINT=https://api.example.com
 INFERENCE_API_TOKEN=your-pre-generated-token-here
 
-# APISIX Gateway Example (uncomment and configure when using APISIX):
-# INFERENCE_API_ENDPOINT=https://api.example.com/Qwen3-4B-Instruct
-# INFERENCE_API_TOKEN=your-keycloak-generated-token-here
-
 # ==========================================
 # Docker Network Configuration
 # ==========================================

@@ -435,6 +457,8 @@ CORS_ORIGINS=["http://localhost:3000", "http://localhost:3001", "http://localhos
 VERIFY_SSL=true
 ```
 
+**Note:** All nine AI agents (Code Explorer, API Reference, Call Graph, Error Analysis, Env Config, Dependency Analyzer, Planner, Mermaid Generator, QA Validator) plus PR Agent use Qwen/Qwen3-4B-Instruct-2507 optimized for Intel Xeon processors
+
 **Important Configuration Notes:**
 
 - **INFERENCE_API_ENDPOINT**: Your actual inference service URL (replace `https://api.example.com`)

@@ -491,10 +515,10 @@ docker compose logs -f
 docker compose logs -f
 
 # Backend only
-docker compose logs -f Docugen-Microagents-backend
+docker logs -f Docugen-Microagents-backend
 
 # Frontend only
-docker compose logs -f Docugen-Microagents-frontend
+docker logs -f Docugen-Microagents-frontend
 ```
 
 **Verify the services are running:**

@@ -562,15 +586,7 @@ For detailed troubleshooting guidance and solutions to common issues, refer to:
 
 ---
 
-## Additional Info
-
-### Model Compatibility
-
-| Model Name | Deployment Platform | Notes |
-|------------|---------------------|-------|
-| Qwen/Qwen3-4B-Instruct-2507 | Xeon | Optimized SLM with 8K context window for efficient documentation generation across all nine micro-agents. |
-
-### Performance Metrics
+## Performance Metrics
 
 The system tracks comprehensive performance metrics for each agent execution, providing visibility into token usage, processing speed, and resource consumption. Metrics are calculated and displayed in real-time during workflow execution:

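For reference, the heredoc in the hunk above produces a top-level `.env` like the following (values are the defaults from this commit; swap `not-needed` for your bare domain name when a local domain is mapped to localhost):

```ini
# Docker Compose Configuration

# Local URL Endpoint (only needed for non-public domains)
LOCAL_URL_ENDPOINT=not-needed

BACKEND_PORT=8000
FRONTEND_PORT=3000
```
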
sample_solutions/Docugen-Microagents/api/.env.example

Lines changed: 0 additions & 4 deletions

@@ -18,10 +18,6 @@
 INFERENCE_API_ENDPOINT=https://api.example.com
 INFERENCE_API_TOKEN=your-pre-generated-token-here
 
-# APISIX Gateway Example (uncomment and configure when using APISIX):
-# INFERENCE_API_ENDPOINT=https://api.example.com/Qwen3-4B-Instruct
-# INFERENCE_API_TOKEN=your-keycloak-generated-token-here
-
 # ==========================================
 # Docker Network Configuration
 # ==========================================

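Both `.env` files touched by this commit are plain `KEY=VALUE` text with `#` comments. As a minimal illustrative sketch (not code from this repo; Compose and most backends read these files through their own loaders such as python-dotenv), parsing works like this:

```python
# Illustrative parser for KEY=VALUE .env files such as api/.env above.
# Real deployments use Docker Compose's env_file handling or python-dotenv;
# this sketch only demonstrates the file format.
def parse_env(text: str) -> dict:
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comment banners
        key, _, value = line.partition("=")  # split on the first '=' only
        env[key.strip()] = value.strip()
    return env

sample = """\
# Docker Network Configuration
INFERENCE_API_ENDPOINT=https://api.example.com
INFERENCE_API_TOKEN=your-pre-generated-token-here
"""
print(parse_env(sample)["INFERENCE_API_ENDPOINT"])  # → https://api.example.com
```

Quoting and multi-line values are deliberately out of scope here, which is one reason the real loaders are preferred.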
sample_solutions/Docugen-Microagents/ui/Dockerfile

Lines changed: 3 additions & 3 deletions

@@ -13,8 +13,8 @@ RUN npm install
 # Copy application code
 COPY . .
 
-# Create non-root user
-RUN useradd -m -u 1000 appuser && chown -R appuser:appuser /app
+# Use existing node user (already has UID 1000 in node:20-slim)
+RUN chown -R node:node /app
 
 # Expose port
 EXPOSE 3000

@@ -29,7 +29,7 @@ HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
 CMD node -e "require('http').get('http://localhost:3000', (r) => process.exit(r.statusCode === 200 ? 0 : 1))"
 
 # Switch to non-root user
-USER appuser
+USER node
 
 # Run development server with environment variable
 CMD ["sh", "-c", "VITE_API_TARGET=http://backend:5001 npm run dev -- --host 0.0.0.0 --port 3000"]

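The Dockerfile's `HEALTHCHECK` (context lines above, unchanged by this commit) marks the container healthy only when `http://localhost:3000` answers HTTP 200. A rough Python rendering of that probe logic, exercised against a throwaway in-process server instead of the real frontend container:

```python
# Sketch of the frontend healthcheck logic: probe an HTTP endpoint and
# report healthy only on a 200 response. A throwaway local server stands
# in for the real frontend container here.
import http.server
import threading
import urllib.request

class OkHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the sketch's output clean

server = http.server.HTTPServer(("localhost", 0), OkHandler)  # port 0 = ephemeral
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
status = urllib.request.urlopen(f"http://localhost:{port}").status
healthy = status == 200  # the Dockerfile probe exits 0 in exactly this case
print("healthy" if healthy else "unhealthy")
server.shutdown()
```

In the container, Docker runs the equivalent `node -e` one-liner every `--interval=30s` and flips the container state to unhealthy after `--retries=3` consecutive failures.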