
Commit 50a646f

test(server): loadverify benches and rate limit; docs: TESTING bench section; style: drop em dashes
1 parent 1481847 commit 50a646f

7 files changed

Lines changed: 293 additions & 5 deletions

File tree

.gitignore

Lines changed: 4 additions & 0 deletions
Original file line number · Diff line number · Diff line change
@@ -18,6 +18,10 @@ mergedcoverage
1818
# HTML coverage report generated by Go tests
1919
coverage.html
2020

21+
# go test -cpuprofile / -memprofile (local bench artifacts)
22+
*.pprof
23+
loadverify-cpu
24+
2125
# SQLite database
2226
chat.db
2327
marchat.db

ARCHITECTURE.md

Lines changed: 2 additions & 2 deletions
@@ -109,7 +109,7 @@ The server package contains the core server logic and components that are used b
109109

110110
#### Core Components
111111

112-
- **WebSocket Handlers**: Connection management and message routing; failed handshakes close with **RFC 6455** close frames (registered status code + UTF-8 reason, not a raw text payload—see `PROTOCOL.md`)
112+
- **WebSocket Handlers**: Connection management and message routing; failed handshakes close with **RFC 6455** close frames (registered status code + UTF-8 reason, not a raw text payload; see `PROTOCOL.md`)
113113
- **Database Layer**: Pluggable SQL backends (SQLite/PostgreSQL/MySQL) with dialect-aware schema and query helpers
114114
- **Admin Interfaces**: Both TUI and web-based administrative panels
115115
- **Plugin Integration**: Plugin command handling and execution
@@ -190,7 +190,7 @@ The **Go package** at repository path `config/` loads server settings from the p
190190

191191
#### Client (`client/config/`)
192192

193-
The **client** stores `config.json`, `profiles.json`, keystore, themes, and debug logs under the **per-user application data directory** (e.g. `%APPDATA%\marchat` on Windows, `~/.config/marchat` on Linux), or under `MARCHAT_CONFIG_DIR` when set. This applies both when developing from a clone and when using release binaries. Path helpers in `client/config` share the same resolution: `ResolveClientConfigDir()`, `GetConfigPath()`, and the primary keystore path (`GetKeystorePath`) honor `MARCHAT_CONFIG_DIR` first. For the keystore file, `GetKeystorePath` prefers that directory, then an existing `keystore.dat` under the standard user marchat folder (when the override has no keystore yet), and only then a legacy `./keystore.dat` in the process working directory—so a stray repo-local file does not override the real profile keystore.
193+
The **client** stores `config.json`, `profiles.json`, keystore, themes, and debug logs under the **per-user application data directory** (e.g. `%APPDATA%\marchat` on Windows, `~/.config/marchat` on Linux), or under `MARCHAT_CONFIG_DIR` when set. This applies both when developing from a clone and when using release binaries. Path helpers in `client/config` share the same resolution: `ResolveClientConfigDir()`, `GetConfigPath()`, and the primary keystore path (`GetKeystorePath`) honor `MARCHAT_CONFIG_DIR` first. For the keystore file, `GetKeystorePath` prefers that directory, then an existing `keystore.dat` under the standard user marchat folder (when the override has no keystore yet), and only then a legacy `./keystore.dat` in the process working directory, so a stray repo-local file does not override the real profile keystore.
194194

195195
#### Configuration Sources (server)
196196

TESTING.md

Lines changed: 27 additions & 0 deletions
@@ -41,6 +41,8 @@ The Marchat test suite provides foundational coverage of the application's core
4141
| `cmd/server/subprocess_doctor_test.go` | Server binary smoke | `go run ./cmd/server -doctor` / `-doctor-json` subprocess (covers `main` early exits) |
4242
| `server/handlers_test.go` | Server-side request handling | Database operations, message insertion, IP extraction |
4343
| `server/hub_test.go` | WebSocket hub management | User bans, kicks, connection management, non-blocking send verification |
44+
| `server/loadverify_ratelimit_test.go` | WebSocket read-pump rate limit | Window, burst (20), and cooldown behavior (same constants as `client.go`) |
45+
| `server/loadverify_bench_test.go` | Hub broadcast benchmarks (optional) | Channel vs system-wide fan-out, parallel senders, JSON marshal baseline; see [Optional hub load benchmarks](#optional-hub-load-benchmarks-server) |
4446
| `server/integration_test.go` | End-to-end workflows | Message flow, ban flow, concurrent operations |
4547
| `server/admin_web_test.go` | Admin web interface | HTTP endpoints, authentication, admin panel functionality |
4648
| `server/config_ui_test.go` | Server configuration UI | Configuration management, environment handling |
@@ -126,6 +128,31 @@ cd plugin/sdk
126128
go test ./...
127129
```
128130

131+
### Optional hub load benchmarks (server)
132+
133+
`server/loadverify_bench_test.go` defines `BenchmarkLoadverify_*` helpers for profiling hub broadcast paths. **`go test ./...` does not run benchmarks** unless you pass `-bench` (and usually `-run=^$` so only benchmarks execute).
134+
135+
What they approximate:
136+
137+
- **Hub `Run` loop** fed by `hub.broadcast`, with clients registered and channel membership like production, but **no real WebSocket** and **large per-client send buffers** so the harness measures routing/coordination rather than production backpressure.
138+
- **`TypingMessage`** on the broadcast path avoids `SendMessageToPlugins` (see file comments). A separate sub-benchmark times `json.Marshal` on a text-shaped message for comparison.
139+
140+
Interpreting results: channel-scoped delivery still iterates **all** registered clients in `hub.go` and filters by channel; the `fixedChannel8` sub-benchmarks vary total clients while keeping eight members in `#bench` to highlight that cost scales with **server-wide** connections, not only room size. ns/op and B/op depend on hardware, OS, and Go version, so use these runs for trends and profiling, not as fixed targets.
141+
142+
Examples (repo root; adjust `-bench` regex as needed):
143+
144+
```bash
145+
go test ./server -run=Loadverify -v
146+
go test ./server -run='^$' -bench=Loadverify -benchmem -count=5
147+
```
148+
149+
**Windows PowerShell:** quote `-cpuprofile` (e.g. `-cpuprofile="loadverify-cpu.pprof"`) so the path is not misparsed; the profile is written to the shell’s current directory unless you pass an absolute path.
150+
151+
```powershell
152+
go test ./server -run='^$' -bench=Loadverify_HubBroadcast_ChannelMessage/all_in_channel_128 -cpuprofile="loadverify-cpu.pprof"
153+
go tool pprof -top .\loadverify-cpu.pprof
154+
```
155+
129156
### Using Test Scripts
130157

131158
#### Linux/macOS

deploy/CADDY-REVERSE-PROXY.md

Lines changed: 1 addition & 1 deletion
@@ -278,7 +278,7 @@ If the shell or IDE exports a **stale `MARCHAT_*`**, the server used to keep the
278278
| **`remote error: tls: internal error`** on connect | Caddyfile must use **named hosts** + **`tls internal`**; recreate Caddy volumes if certs were issued under bad config; use **`wss://localhost:8443`**. |
279279
| **Reconnect loop / dial failures** | Client debug log: **Windows** `%APPDATA%\marchat\marchat-client-debug.log`; **Linux** `~/.config/marchat/marchat-client-debug.log`; **macOS** `~/Library/Application Support/marchat/marchat-client-debug.log` (unless **`MARCHAT_CONFIG_DIR`** is set). |
280280
| **Invalid admin key** | Server must use same key as client; with **Overload**, **`config/.env`** overrides stale shell env after **server restart**. |
281-
| **Keystore decrypt error** | Wrong **`--keystore-passphrase`**, corrupted file, or (on older clients) path-dependent keystore salt if **`keystore.dat`** moved; use a current client build (embedded salt + auto-migration). If **`MARCHAT_GLOBAL_E2E_KEY`** is set, the client uses the env key and does not update the file—unset it to use the on-disk key again. Backup/remove **`keystore.dat`** only if you intend to recreate the keystore (you will need the same global key via env or peer copy). |
281+
| **Keystore decrypt error** | Wrong **`--keystore-passphrase`**, corrupted file, or (on older clients) path-dependent keystore salt if **`keystore.dat`** moved; use a current client build (embedded salt + auto-migration). If **`MARCHAT_GLOBAL_E2E_KEY`** is set, the client uses the env key and does not update the file. Unset it to use the on-disk key again. Backup/remove **`keystore.dat`** only if you intend to recreate the keystore (you will need the same global key via env or peer copy). |
282282
| **Caddy cannot reach server** | Server on **8080**, Docker **`host.docker.internal`** (Compose **`extra_hosts`**). |
283283

284284
---

plugin/README.md

Lines changed: 2 additions & 2 deletions
@@ -95,13 +95,13 @@ type Message struct {
9595
| `Recipient` | Target user for DMs (empty = broadcast) |
9696
| `Edited` | `true` if the message was edited after send |
9797

98-
**Backwards compatibility**: All extended fields use `omitempty`. Plugins compiled against older SDK versions silently ignore unknown JSON keys and omit them on output—no recompile required.
98+
**Backwards compatibility**: All extended fields use `omitempty`. Plugins compiled against older SDK versions silently ignore unknown JSON keys and omit them on output; no recompile required.
9999

100100
**Message routing rules**:
101101

102102
- The hub only forwards messages with `type` set to `"text"` to plugins. Other types (typing, reactions, etc.) are not delivered.
103103
- Plugin replies that **omit** `type` (or set it to anything other than `"text"`) are broadcast to clients but are **not** re-forwarded to other plugins. This prevents accidental infinite loops.
104-
- To opt into **plugin-to-plugin chaining**, set `Type: "text"` on outbound `sdk.Message` explicitly. Use with care—the echo plugin, for example, should not do this or it will loop.
104+
- To opt into **plugin-to-plugin chaining**, set `Type: "text"` on outbound `sdk.Message` explicitly. Use with care: the echo plugin, for example, should not do this or it will loop.
105105
- **Encrypted messages**: The hub does not filter encrypted messages before forwarding to plugins. Plugins receive them with `Encrypted: true` and opaque `Content`. Plugins that parse `Content` should check `msg.Encrypted` and skip or handle accordingly.
106106

107107
### Message Processing

server/loadverify_bench_test.go

Lines changed: 182 additions & 0 deletions
@@ -0,0 +1,182 @@
1+
// Optional hub load benchmarks for maintainers (not run by plain go test ./...).
2+
// They use in-memory SQLite, a real Hub with plugin manager, synthetic Clients
3+
// (nil conn, large send buffers) and TypingMessage to avoid plugin IPC on broadcast.
4+
//
5+
// Run (from repo root):
6+
//
7+
// go test ./server -run=^$ -bench=Loadverify -benchmem -count=5 | tee loadverify-bench.txt
8+
//
9+
// CPU profile (GC / hot paths). The -bench regexp must match the full sub-benchmark
10+
// name (underscores): BenchmarkLoadverify_HubBroadcast_ChannelMessage/all_in_channel_128.
11+
//
12+
// PowerShell: quote -cpuprofile; otherwise ".pprof" is parsed incorrectly and you get a wrong filename.
13+
//
14+
// From repo root, the file named by -cpuprofile="name.pprof" is usually created in that same directory (your shell cwd).
15+
// If pprof cannot find it, also try .\server\name.pprof (behavior can depend on Go version).
16+
//
17+
// go test ./server -run=^$ -bench=Loadverify_HubBroadcast_ChannelMessage/all_in_channel_128 -cpuprofile="loadverify-cpu.pprof"
18+
// go tool pprof -top .\loadverify-cpu.pprof
19+
//
20+
// Code reality check: channel-scoped broadcasts still range over every entry in
21+
// h.clients (see hub.go broadcast case); recipients are filtered by channel
22+
// membership. Benchmarks vary both total clients and in-channel count.
23+
//
24+
// Compare ChannelMessage/all_in_channel_* vs ChannelMessage_fixedChannel8:
25+
// cost grows with total registered clients, not only #bench population.
26+
27+
package server
28+
29+
import (
30+
"database/sql"
31+
"encoding/json"
32+
"fmt"
33+
"testing"
34+
"time"
35+
36+
"github.com/Cod-e-Codes/marchat/shared"
37+
_ "modernc.org/sqlite"
38+
)
39+
40+
func loadverifyDrain(ch <-chan interface{}) {
41+
go func() {
42+
for range ch {
43+
}
44+
}()
45+
}
46+
47+
// setupLoadverifyHub registers total clients; the first inChannel join "bench",
48+
// the rest join "lobby" only. Starts hub.Run in the background.
49+
func setupLoadverifyHub(b *testing.B, total, inChannel int) *Hub {
50+
b.Helper()
51+
if inChannel > total {
52+
b.Fatal("inChannel > total")
53+
}
54+
db, err := sql.Open("sqlite", ":memory:")
55+
if err != nil {
56+
b.Fatalf("sqlite: %v", err)
57+
}
58+
b.Cleanup(func() { db.Close() })
59+
CreateSchema(db)
60+
hub := NewHub("", "", "", db)
61+
go hub.Run()
62+
time.Sleep(20 * time.Millisecond)
63+
64+
// Large buffer avoids hub drop-on-full during fast benchmarks; production uses 256
65+
// (handlers.go) and then conn.Close() runs. These Clients have conn == nil.
66+
const loadverifySendBuf = 65536
67+
for i := 0; i < total; i++ {
68+
c := &Client{
69+
username: fmt.Sprintf("loadverify-%d", i),
70+
send: make(chan interface{}, loadverifySendBuf),
71+
}
72+
loadverifyDrain(c.send)
73+
hub.clientsMutex.Lock()
74+
hub.clients[c] = true
75+
hub.clientsMutex.Unlock()
76+
if i < inChannel {
77+
hub.joinChannel(c, "bench")
78+
} else {
79+
hub.joinChannel(c, "lobby")
80+
}
81+
}
82+
83+
return hub
84+
}
85+
86+
func BenchmarkLoadverify_HubBroadcast_ChannelMessage(b *testing.B) {
87+
for _, n := range []int{8, 32, 64, 128} {
88+
b.Run(fmt.Sprintf("all_in_channel_%d", n), func(b *testing.B) {
89+
hub := setupLoadverifyHub(b, n, n)
90+
91+
// TypingMessage avoids plugin IPC in hub Run (TextMessage triggers SendMessageToPlugins).
92+
msg := shared.Message{
93+
Sender: "loadverify",
94+
Channel: "bench",
95+
Type: shared.TypingMessage,
96+
CreatedAt: time.Now(),
97+
}
98+
99+
b.ResetTimer()
100+
for i := 0; i < b.N; i++ {
101+
hub.broadcast <- msg
102+
}
103+
})
104+
}
105+
}
106+
107+
func BenchmarkLoadverify_HubBroadcast_ChannelMessage_fixedChannel8(b *testing.B) {
108+
// 8 recipients in #bench, many extra clients only in #lobby; highlights
109+
// iteration over all registered clients vs channel population.
110+
for _, total := range []int{16, 64, 128} {
111+
b.Run(fmt.Sprintf("in_bench_8_total_%d", total), func(b *testing.B) {
112+
hub := setupLoadverifyHub(b, total, 8)
113+
114+
// TypingMessage avoids plugin IPC in hub Run (TextMessage triggers SendMessageToPlugins).
115+
msg := shared.Message{
116+
Sender: "loadverify",
117+
Channel: "bench",
118+
Type: shared.TypingMessage,
119+
CreatedAt: time.Now(),
120+
}
121+
122+
b.ResetTimer()
123+
for i := 0; i < b.N; i++ {
124+
hub.broadcast <- msg
125+
}
126+
})
127+
}
128+
}
129+
130+
func BenchmarkLoadverify_HubBroadcast_SystemWide(b *testing.B) {
131+
for _, n := range []int{8, 32, 64} {
132+
b.Run(fmt.Sprintf("clients_%d", n), func(b *testing.B) {
133+
hub := setupLoadverifyHub(b, n, n)
134+
135+
msg := shared.Message{
136+
Sender: "System",
137+
Channel: "bench",
138+
Content: "announce",
139+
Type: shared.TypingMessage,
140+
CreatedAt: time.Now(),
141+
}
142+
143+
b.ResetTimer()
144+
for i := 0; i < b.N; i++ {
145+
hub.broadcast <- msg
146+
}
147+
})
148+
}
149+
}
150+
151+
func BenchmarkLoadverify_HubBroadcast_ParallelSenders(b *testing.B) {
152+
const clients = 32
153+
hub := setupLoadverifyHub(b, clients, clients)
154+
155+
msg := shared.Message{
156+
Sender: "loadverify",
157+
Channel: "bench",
158+
Type: shared.TypingMessage,
159+
CreatedAt: time.Now(),
160+
}
161+
162+
b.ResetTimer()
163+
b.RunParallel(func(pb *testing.PB) {
164+
for pb.Next() {
165+
hub.broadcast <- msg
166+
}
167+
})
168+
}
169+
170+
func BenchmarkLoadverify_JSONMarshal_TextMessage(b *testing.B) {
171+
msg := shared.Message{
172+
Sender: "user",
173+
Channel: "general",
174+
Content: "hello world loadverify",
175+
Type: shared.TextMessage,
176+
CreatedAt: time.Now(),
177+
}
178+
b.ResetTimer()
179+
for i := 0; i < b.N; i++ {
180+
_, _ = json.Marshal(&msg)
181+
}
182+
}
Lines changed: 75 additions & 0 deletions
@@ -0,0 +1,75 @@
1+
// Unit tests for the same window/burst/cooldown logic as client.readPump rate limiting.
2+
// Keeps the algorithm checkable without spinning up WebSockets.
3+
4+
package server
5+
6+
import (
7+
"testing"
8+
"time"
9+
)
10+
11+
// loadverifyRateLimitStep matches the gate in client.readPump before a message
12+
// would be forwarded (uses the same package-level constants as client.go).
13+
func loadverifyRateLimitStep(now time.Time, timestamps *[]time.Time, cooldown *time.Time) bool {
14+
if now.Before(*cooldown) {
15+
return false
16+
}
17+
cutoff := now.Add(-rateLimitWindow)
18+
filtered := (*timestamps)[:0]
19+
for _, ts := range *timestamps {
20+
if ts.After(cutoff) {
21+
filtered = append(filtered, ts)
22+
}
23+
}
24+
*timestamps = filtered
25+
if len(*timestamps) >= rateLimitMessages {
26+
*cooldown = now.Add(rateLimitCooldown)
27+
return false
28+
}
29+
*timestamps = append(*timestamps, now)
30+
return true
31+
}
32+
33+
func TestLoadverify_RateLimitAllowsTwentyThenCooldown(t *testing.T) {
34+
t0 := time.Date(2026, 4, 10, 12, 0, 0, 0, time.UTC)
35+
var ts []time.Time
36+
var cd time.Time
37+
38+
var accepted int
39+
for i := 0; i < 25; i++ {
40+
if loadverifyRateLimitStep(t0, &ts, &cd) {
41+
accepted++
42+
}
43+
}
44+
if accepted != 20 {
45+
t.Fatalf("expected 20 accepts in same-window burst, got %d (timestamps=%d)", accepted, len(ts))
46+
}
47+
if !cd.After(t0) {
48+
t.Fatalf("expected cooldown deadline after burst, got cd=%v t0=%v", cd, t0)
49+
}
50+
}
51+
52+
func TestLoadverify_RateLimitCooldownSuppressesUntilExpiry(t *testing.T) {
53+
t0 := time.Date(2026, 4, 10, 12, 0, 0, 0, time.UTC)
54+
var ts []time.Time
55+
var cd time.Time
56+
57+
for i := 0; i < 20; i++ {
58+
if !loadverifyRateLimitStep(t0, &ts, &cd) {
59+
t.Fatalf("message %d should be accepted", i+1)
60+
}
61+
}
62+
if loadverifyRateLimitStep(t0, &ts, &cd) {
63+
t.Fatal("21st same-timestamp message should be dropped")
64+
}
65+
66+
during := t0.Add(rateLimitCooldown / 2)
67+
if loadverifyRateLimitStep(during, &ts, &cd) {
68+
t.Fatal("message during cooldown should be dropped")
69+
}
70+
71+
after := cd.Add(time.Nanosecond)
72+
if !loadverifyRateLimitStep(after, &ts, &cd) {
73+
t.Fatal("first message after cooldown should be accepted")
74+
}
75+
}
