Commit 1280215

docs: consolidate README, clean up e2e tests
- Merge Quick Start and Production Deployment into single Install section
- Add collapsible FAQ sections
- Remove debug hooks and DebugS3Client from integration conftest
- Delete fake state recovery tests, keep real Redis test
- Fix stale port comments in HA tests
- Update e2e memoryLimitMb 64→128
1 parent 0e4e9ef commit 1280215

5 files changed

Lines changed: 58 additions & 302 deletions

README.md

Lines changed: 48 additions & 46 deletions
@@ -43,7 +43,9 @@ S3's server-side encryption is great, but your cloud provider holds the keys. S3
 
 ---
 
-## Quick Start
+## Install
+
+**Option A** — inline secrets (quick start):
 
 ```bash
 helm install s3proxy oci://ghcr.io/serversidehannes/s3proxy-python/charts/s3proxy-python \
@@ -52,22 +54,36 @@ helm install s3proxy oci://ghcr.io/serversidehannes/s3proxy-python/charts/s3prox
   --set secrets.awsSecretAccessKey="wJalr..."
 ```
 
+**Option B** — existing K8s secret (recommended for production):
+
+```bash
+kubectl create secret generic s3proxy-secrets \
+  --from-literal=S3PROXY_ENCRYPT_KEY="your-32-byte-key" \
+  --from-literal=AWS_ACCESS_KEY_ID="AKIA..." \
+  --from-literal=AWS_SECRET_ACCESS_KEY="wJalr..."
+
+helm install s3proxy oci://ghcr.io/serversidehannes/s3proxy-python/charts/s3proxy-python \
+  --set secrets.existingSecrets.enabled=true \
+  --set secrets.existingSecrets.name=s3proxy-secrets
+```
+
+Then point any S3 client at the proxy:
+
 ```bash
 aws s3 --endpoint-url http://s3proxy-python:4433 cp file.txt s3://bucket/
 ```
 
-That's it. Point any S3 client at the proxy, use the **same credentials** you configured above.
+Use the **same credentials** you configured above. That's it.
+
+> **Endpoints** — In-cluster: `http://s3proxy-python.<ns>:4433` · Gateway: `http://s3-gateway.<ns>` · Ingress: `https://s3proxy.example.com`
+>
+> **Health** — `GET /healthz` · `GET /readyz` · **Metrics** — `GET /metrics`
 
 ---
 
 ## Battle-Tested
 
-<p align="center">
-  <img src="https://img.shields.io/badge/PostgreSQL_17-336791?style=flat-square&logo=postgresql&logoColor=white" alt="PostgreSQL">
-  <img src="https://img.shields.io/badge/Elasticsearch_9-005571?style=flat-square&logo=elasticsearch&logoColor=white" alt="Elasticsearch">
-  <img src="https://img.shields.io/badge/ScyllaDB_6-53cadd?style=flat-square&logo=scylladb&logoColor=white" alt="ScyllaDB">
-  <img src="https://img.shields.io/badge/ClickHouse_24-ffcc00?style=flat-square&logo=clickhouse&logoColor=black" alt="ClickHouse">
-</p>
+Verified with real database operators: **backup, cluster delete, restore, data integrity check.**
 
 | Database | Operator | Backup Tool |
 |:--------:|:--------:|:-----------:|
@@ -76,8 +92,6 @@ That's it. Point any S3 client at the proxy, use the **same credentials** you co
 | ScyllaDB 6.x | Scylla Operator 1.19 | Scylla Manager |
 | ClickHouse 24.x | Altinity Operator | clickhouse-backup |
 
-All verified: **backup, cluster delete, restore, data integrity check.**
-
 ---
 
 ## How It Works
@@ -94,33 +108,6 @@ Master Key → KEK (derived via SHA-256)
 
 ---
 
-## Production Deployment
-
-### External Secrets (recommended)
-
-```bash
-kubectl create secret generic s3proxy-secrets \
-  --from-literal=S3PROXY_ENCRYPT_KEY="your-32-byte-key" \
-  --from-literal=AWS_ACCESS_KEY_ID="AKIA..." \
-  --from-literal=AWS_SECRET_ACCESS_KEY="wJalr..."
-
-helm install s3proxy oci://ghcr.io/serversidehannes/s3proxy-python/charts/s3proxy-python \
-  --set secrets.existingSecrets.enabled=true \
-  --set secrets.existingSecrets.name=s3proxy-secrets
-```
-
-### Endpoints
-
-| Access | Endpoint |
-|--------|----------|
-| In-cluster | `http://s3proxy-python.<ns>:4433` |
-| Gateway | `http://s3-gateway.<ns>` |
-| Ingress | `https://s3proxy.example.com` |
-
-Health: `GET /healthz` · `GET /readyz` · Metrics: `GET /metrics`
-
----
-
 ## Configuration
 
 | Value | Default | Description |
@@ -141,15 +128,30 @@ See [chart/README.md](chart/README.md) for all options.
 
 ## FAQ
 
-**Can I use existing unencrypted data?** Yes. S3Proxy detects unencrypted objects and returns them as-is. Migrate by copying through the proxy.
-
-**What if I lose my encryption key?** Data is unrecoverable. Back up your key.
-
-**What if Redis fails mid-upload?** Upload fails and must restart. Use `redis-ha.enabled=true` with persistence.
-
-**MinIO / R2 / Spaces?** Yes. Set `s3.host` to your endpoint.
-
-**Presigned URLs?** GET works. PUT/POST don't — the proxy encrypts the body which invalidates the pre-signed signature.
+<details>
+<summary><strong>Can I use existing unencrypted data?</strong></summary>
+Yes. S3Proxy detects unencrypted objects and returns them as-is. Migrate by copying through the proxy.
+</details>
+
+<details>
+<summary><strong>What if I lose my encryption key?</strong></summary>
+Data is unrecoverable. Back up your key.
+</details>
+
+<details>
+<summary><strong>What if Redis fails mid-upload?</strong></summary>
+Upload fails and must restart. Use <code>redis-ha.enabled=true</code> with persistence.
+</details>
+
+<details>
+<summary><strong>MinIO / R2 / Spaces?</strong></summary>
+Yes. Set <code>s3.host</code> to your endpoint.
+</details>
+
+<details>
+<summary><strong>Presigned URLs?</strong></summary>
+GET works. PUT/POST don't — the proxy encrypts the body which invalidates the pre-signed signature.
+</details>
 
 ---

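The "How It Works" hunk context above mentions `Master Key → KEK (derived via SHA-256)`. A minimal sketch of such a derivation, assuming the KEK is simply the SHA-256 digest of the master key (the commit does not show the actual implementation, which may mix in a salt or context string):

```python
import hashlib

def derive_kek(master_key: bytes) -> bytes:
    # Assumption: KEK = SHA-256(master key). The diff names only the hash,
    # not the exact input framing.
    return hashlib.sha256(master_key).digest()

kek = derive_kek(b"your-32-byte-key-padded-to-32-b!")
print(kek.hex())
```

Whatever the exact framing, a SHA-256-derived KEK is always 32 bytes, which matches the chart's expectation of a 32-byte encryption key.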
e2e/docker-compose.yml

Lines changed: 1 addition & 1 deletion
@@ -416,7 +416,7 @@ services:
   --set secrets.awsAccessKeyId="minioadmin" \
   --set secrets.awsSecretAccessKey="minioadmin" \
   --set logLevel="DEBUG" \
-  --set performance.memoryLimitMb=64 \
+  --set performance.memoryLimitMb=128 \
   --set gateway.enabled=true \
   --set ingress.enabled=true \
   --set 'ingress.annotations.nginx\.ingress\.kubernetes\.io/proxy-body-size=256m' \

tests/ha/test_ha_redis_e2e.py

Lines changed: 2 additions & 2 deletions
@@ -186,8 +186,8 @@ def test_upload_parts_to_different_pods(self, s3_clients, test_bucket):
         Test uploading parts to different pods maintains sequential numbering.
 
         Scenario:
-        - Part 1 → Pod A (port 4433)
-        - Part 2 → Pod B (port 4434)
+        - Part 1 → Pod A (port 4450)
+        - Part 2 → Pod B (port 4451)
         - Both pods share Redis state
         - Internal parts should be [1, 2] (sequential)
         """

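The scenario in the docstring above (parts landing on different pods, internal numbering staying sequential through shared Redis state) can be illustrated with a toy stand-in. The class name and API below are invented for this sketch, and a plain dict plus counter replaces Redis:

```python
import itertools

class PartRegistry:
    """Toy model of the shared upload state: whichever pod receives a part,
    internal part numbers are handed out sequentially."""

    def __init__(self):
        self._next = itertools.count(1)
        self._internal = {}

    def register(self, upload_id: str, client_part: int) -> int:
        # Idempotent: re-registering the same part keeps its number.
        key = (upload_id, client_part)
        if key not in self._internal:
            self._internal[key] = next(self._next)
        return self._internal[key]

reg = PartRegistry()
pod_a = reg.register("upload-1", 1)  # part 1 arrives at "pod A"
pod_b = reg.register("upload-1", 2)  # part 2 arrives at "pod B"
print(sorted([pod_a, pod_b]))  # internal parts are [1, 2], sequential
```

The real test exercises the same invariant against two proxy pods backed by one Redis instance.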
tests/integration/conftest.py

Lines changed: 1 addition & 53 deletions
@@ -93,28 +93,6 @@ def _wait_for_port(port: int, proc: subprocess.Popen, timeout: float = 15) -> No
     raise RuntimeError(f"s3proxy failed to start on port {port} after {timeout}s")
 
 
-# === DEBUGGING HOOKS - show exactly where tests get stuck ===
-
-def pytest_runtest_logstart(nodeid, location):
-    """Called before each test starts."""
-    print(f"\n>>> STARTING: {nodeid}", file=sys.stderr, flush=True)
-
-
-def pytest_runtest_logfinish(nodeid, location):
-    """Called after each test finishes."""
-    print(f"<<< FINISHED: {nodeid}", file=sys.stderr, flush=True)
-
-
-def pytest_runtest_setup(item):
-    """Called before test setup."""
-    print(f" [setup] {item.name}", file=sys.stderr, flush=True)
-
-
-def pytest_runtest_teardown(item):
-    """Called before test teardown."""
-    print(f" [teardown] {item.name}", file=sys.stderr, flush=True)
-
-
 @pytest.fixture(scope="session")
 def s3proxy_server():
     """Start s3proxy server for e2e tests.
@@ -139,46 +117,16 @@ def s3proxy_server():
     print(f"[FIXTURE] Stopping s3proxy (pid={proc.pid})...")
 
 
-class DebugS3Client:
-    """Wrapper that logs every S3 operation."""
-
-    def __init__(self, client):
-        self._client = client
-        # Expose exceptions directly for contextlib.suppress compatibility
-        self.exceptions = client.exceptions
-
-    def __getattr__(self, name):
-        attr = getattr(self._client, name)
-        if callable(attr):
-            def wrapper(*args, **kwargs):
-                # Log the call
-                args_str = ", ".join([f"{k}={v!r}" for k, v in kwargs.items() if k != "Body"])
-                if "Body" in kwargs:
-                    body = kwargs["Body"]
-                    args_str += f", Body=<{len(body)} bytes>"
-                print(f" -> s3.{name}({args_str})", file=sys.stderr, flush=True)
-                try:
-                    result = attr(*args, **kwargs)
-                    print(f" <- s3.{name} OK", file=sys.stderr, flush=True)
-                    return result
-                except Exception as e:
-                    print(f" <- s3.{name} FAILED: {e}", file=sys.stderr, flush=True)
-                    raise
-            return wrapper
-        return attr
-
-
 @pytest.fixture
 def s3_client(s3proxy_server):
     """Create boto3 S3 client pointing to s3proxy."""
-    client = boto3.client(
+    return boto3.client(
         "s3",
         endpoint_url=s3proxy_server,
         aws_access_key_id="minioadmin",
         aws_secret_access_key="minioadmin",
         region_name="us-east-1",
     )
-    return DebugS3Client(client)
 
 
 @pytest.fixture

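The deleted `DebugS3Client` relied on `__getattr__` delegation to intercept every method of the wrapped client. The general shape of that pattern, stripped of the boto3-specific logging (the names here are illustrative, not from the repo):

```python
import sys

class LoggingProxy:
    """Delegate all attribute access to a target object, logging each
    method call and whether it succeeded."""

    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # Only called for attributes not found on the proxy itself.
        attr = getattr(self._target, name)
        if not callable(attr):
            return attr

        def wrapper(*args, **kwargs):
            print(f"-> {name}", file=sys.stderr, flush=True)
            try:
                result = attr(*args, **kwargs)
                print(f"<- {name} OK", file=sys.stderr, flush=True)
                return result
            except Exception as exc:
                print(f"<- {name} FAILED: {exc}", file=sys.stderr, flush=True)
                raise

        return wrapper

items = LoggingProxy([])
items.append("hello")  # logged to stderr, then delegated to the list
```

The commit drops this layer because the simplified fixture now returns the bare boto3 client directly.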