
Commit f49e6b6

Release v1.2.0 – Added Docker Compose support and credited contributor
- Added optional Docker Compose setup integrating Orpheus-FastAPI and a GPU-enabled llama.cpp inference server.
- Includes automatic model download via a model-init service.
- Native install path remains unchanged to ensure compatibility for non-Docker users.
- Updated README to include contribution credit for @richardr1126 (https://github.com/richardr1126).
- Squash-merged PR #21 with a clean commit history.

Thank you to @richardr1126 for the Docker orchestration work and for helping expand accessibility for container-based deployments!
1 parent a58d6ed commit f49e6b6
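The commit message describes three cooperating pieces: the Orpheus-FastAPI app, a GPU-enabled llama.cpp inference server, and a model-init service that downloads the model automatically. A minimal sketch of what such a `docker-compose.yml` could look like is below. This is an illustration only, not the file from PR #21: the image tags, ports, volume paths, model filename, and environment variable names are all assumptions.

```yaml
# Hypothetical sketch; the actual compose file in the repository may differ.
services:
  model-init:
    image: curlimages/curl:latest      # assumption: any image with curl works
    volumes:
      - ./models:/models
    # Download the model once if it is missing; MODEL_URL is a placeholder.
    command: >
      sh -c "test -f /models/model.gguf ||
             curl -L -o /models/model.gguf $${MODEL_URL}"

  llama-cpp:
    image: ghcr.io/ggml-org/llama.cpp:server-cuda   # assumption
    depends_on:
      model-init:
        condition: service_completed_successfully   # wait for the download
    volumes:
      - ./models:/models
    command: ["-m", "/models/model.gguf", "--host", "0.0.0.0", "--port", "5006"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]

  orpheus-fastapi:
    build: .
    ports:
      - "5005:5005"
    environment:
      # Assumed variable name pointing the app at the inference server.
      - ORPHEUS_API_URL=http://llama-cpp:5006/v1/completions
    depends_on:
      - llama-cpp
```

The `model-init` container exits after the file exists, and `service_completed_successfully` keeps the inference server from starting against a half-downloaded model.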

1 file changed: README.md (6 additions, 0 deletions)
````diff
@@ -8,6 +8,11 @@ High-performance Text-to-Speech server with OpenAI-compatible API, 8 voices, emo
 
 ## Changelog
 
+**v1.2.0** (2025-04-12)
+- ❤️ Added optional Docker Compose support with GPU-enabled `llama.cpp` server and Orpheus-FastAPI integration
+- 🐳 Docker implementation contributed by [@richardr1126](https://github.com/richardr1126) – huge thanks for the clean setup and orchestration work!
+- 🧱 Native install path remains unchanged for non-Docker users
+
 **v1.1.0** (2025-03-23)
 - ✨ Added long-form audio support with sentence-based batching and crossfade stitching
 - 🔊 Improved short audio quality with optimized token buffer handling
@@ -84,6 +89,7 @@ The docker compose file orchestrates the Orpheus-FastAPI for audio and a llama.c
 
 ```bash
 cp .env.example .env # Nothing needs to be changed, but the file is required
+copy .env.example .env # For Windows CMD
 ```
 
 ```bash
````
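The automatic model download credited to the model-init service boils down to idempotent check-then-fetch logic: skip the download when the file is already on disk so repeated `docker compose up` runs start quickly. As a language-agnostic illustration (not the project's actual code; the function name, path, and URL below are hypothetical), that logic might look like:

```python
import os
import urllib.request

def ensure_model(path: str, url: str) -> bool:
    """Fetch the model file only if it is not already on disk.

    Returns True when a download was performed, False when the file
    was already present (the fast path for repeated container starts).
    """
    if os.path.exists(path):
        return False
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    tmp = path + ".part"               # download under a temporary name...
    urllib.request.urlretrieve(url, tmp)
    os.replace(tmp, path)              # ...then rename atomically
    return True
```

Downloading to a `.part` file and renaming at the end means a crashed download never leaves a truncated file that would be mistaken for a complete model on the next start.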
