Add AnyCable vs Socket.io comparison page #54

Open

irinanazarova wants to merge 32 commits into master from compare/socket-io

Conversation

@irinanazarova (Contributor)

Summary

Adds /compare/socket-io — a methodology-first comparison built around benchmarks of three real configurations at 10,000 concurrent clients on identical Railway infrastructure (Pro tier, 32 vCPU / 32 GB).

The three configurations measured:

  • Default Socket.io — no opt-ins, in-memory adapter
  • Socket.io + Connection State Recovery — opt-in feature added in Socket.io 4.6
  • AnyCable — extended Action Cable protocol with the in-memory broker

Headline numbers (full data on the page):

  • Delivery rate: 87.4% / 100% / 100%
  • Replay latency p99: lost / 9.0 s / 1.0 s
  • Idle connection capacity: AnyCable holds 50,000 idle connections on a single instance using 1.98 GB and ~0.3 vCPU

Page structure

H2s are analytical takeaway statements so skim-readers get the thesis from the headings alone:

  1. Only AnyCable delivers every message on time
  2. Every Socket.io deploy severs every connection. This is architectural.
  3. AnyCable holds 50,000 idle connections on under 2 GB and 0.3 vCPU

Plus: methodology callout (authorship, reproducibility, customer trust signals), migration code example, three-column feature comparison with citations to both AnyCable and Socket.io docs, FAQ accordion (incl. AnyCable+ managed-tier entry), and CTA linking to docs + customer page.

Design

Page-scoped under .compare-page so styles don't leak elsewhere.

  • Notebook-feel dotted background continuous across the entire page
  • Rounded white cards (8 px) for tables and dark-bg code blocks on the dotted surface — tables read as discrete notes pinned to graph paper
  • Sticky right column so illustrations / tables stay visible while the left text scrolls (resolves the previous "right side empty for thousands of pixels" problem)
  • Inline <code> chips at 0.85 em with subtle background and 4 px radius — reads calm next to body sans-serif
  • Red accent reserved for "worst per metric" only; not used for emphasis
  • Anchor links on all H2s (#delivery, #deploys, #capacity, #migration, #faq-heading) so readers and agents can deep-link
  • FAQ as native <details> accordion — closed by default, narrow centered column
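The native-accordion approach needs no JS to open and close; a minimal sketch of one FAQ item (the BEM class names, borrowed from the later refactor commit, and the question text are illustrative):

```html
<!-- Sketch: one FAQ item as a native <details> accordion, closed by default -->
<details class="compare-faq__item">
  <summary>How does AnyCable compare on performance?</summary>
  <div class="compare-faq__answer">
    <p>Placeholder answer; the real copy lives on the page.</p>
  </div>
</details>
```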

Mobile

  • Hero stacks to single column
  • H1 size clamps responsively between 28 and 64 px
  • About-slide right column hidden via existing mediaMax($tablet) rule

Files

  • src/compare/socket-io/index.html — new comparison page
  • src/modules/blocks/compare.scss — new page-scoped styles
  • src/modules/blocks/about-slide.scss — added -sticky and -full media modifiers + calm inline-code chips
  • src/index.scss — imports compare.scss
  • src/blog/anycable-vs-socket-io/index.md — removed (content now lives on the comparison page)
  • .gitignore — benchmark/ (lives in its own repo: irinanazarova/anycable-socketio-benchmarks)

Reproducibility

All benchmark numbers come from runs reproducible from the open-source bench repo: https://github.com/irinanazarova/anycable-socketio-benchmarks. The bench-runner deploys to Railway and uses internal networking (*.railway.internal) to drive 10K-client tests without local NAT/event-loop bottlenecks.

Test plan

  • Open /compare/socket-io at desktop width (1440 px) — confirm sticky right column, dotted background, hero card layout
  • Scroll through each pillar — confirm tables stay visible while reading the left column
  • Click each section anchor link — confirm scroll position respects the 60 px header offset
  • Open and close FAQ items — confirm +/− indicator, red title color on open
  • Resize to 390 px (iPhone width) — confirm hero stacks to single column, no horizontal overflow, H1 fits
  • Click each doc-citation link in the feature comparison table — confirm 200 (no 404s)
  • Verify no JS errors in console
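One CSS-only way to make anchor jumps respect a fixed header is `scroll-margin-top`; a sketch under the test plan's 60 px assumption (the exact selector scoping is ours, not the page's):

```scss
// Sketch: keep anchor-link targets from hiding under the 60px fixed header.
.compare-page {
  h2[id] {
    scroll-margin-top: 60px;
  }
}
```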


- Removed the floating-nav <nav> markup, its IntersectionObserver script,
  and the .compare-nav SCSS rule. The nav was display:none in the design
  but still shipped markup + JS to every visitor.
- Removed background: $backgroundSecondaryColor from the hero section's
  inline style. SCSS variable strings don't resolve in inline CSS — the
  declaration was a no-op.
- Compare-page customer-list links now point to /#customers anchor on the
  main page (the cases-slide section already had id='customers').
- llms.txt now references /compare/socket-io with headline numbers and
  links the open-source bench repo. The 'When to use AnyCable instead of
  Socket.io' section is rewritten to be CSR-aware: it acknowledges that
  Socket.io 4.6+ has an opt-in catch-up feature, then surfaces the
  measured 7x replay-latency gap (CSR p99 9.0s / max 12.0s vs AnyCable
  p99 1.0s / max 3.5s) and CSR's documented caveats.

Why this matters: when a CTO asks an LLM 'AnyCable vs Socket.io which
should I use?', the LLM scrapes llms.txt for context. Without these
updates, it would not see the benchmark data or the CSR analysis.

For LLM/agent visibility on the compare page:

- FAQPage structured data — the 8 visible <details> Q&A pairs are now
  also exposed as schema.org FAQPage JSON-LD. LLMs and search engines
  can parse them as structured data without scraping the accordion.
- Per-page og:url and rel=canonical — meta.hbs now accepts an optional
  pageUrl and falls back to the homepage. Compare page sets it to
  https://anycable.io/compare/socket-io. Improves social previews and
  prevents canonical drift.
- sitemap.xml lists the 12 indexable pages (homepage, compare/socket-io
  at priority 0.9, four case studies, four anycasts, eula, notice).
- robots.txt declares the sitemap location and allows all crawlers.
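FAQPage markup has a standard schema.org shape; a minimal sketch with one Q&A pair (the question text is taken from a later commit on this branch; the answer is a placeholder, since the real text mirrors the visible accordion):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How does AnyCable compare on performance?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Placeholder: mirrors the visible <details> answer verbatim."
    }
  }]
}
</script>
```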

The compare page already had analytical-takeaway H2s, semantic tables
with citations, and meta description; this completes the LLM-facing
surface so an agent answering 'AnyCable vs Socket.io which should I use?'
on a CTO's behalf has both the prose and the structured signals.
CTA section: replace 'Get started → docs' / 'View pricing' with
'Try Managed — free up to 2K' (https://plus.anycable.io) as primary
and 'Self-host (open source)' as outlined secondary. Heading now
says 'Drop AnyCable into your Node.js app today'. Lower friction for
the Node.js CTO who finished the comparison and wants to try it.

New 'Try it in your stack' section between Migration and the feature
table, with three cards linking the JS surfaces:
- @anycable/core (browsers, Node.js, React Native)
- twilio-ai-js-demo (working Next.js + voice AI demo)
- @anycable/serverless-js (Vercel / Cloudflare / Lambda)

For an evaluating CTO the question 'is this 1 hour or 1 week to set up?'
is now answered by a clone-and-run demo, not just a CTA.
Three related changes that came out of a careful padding audit:

1. Strip ~120 inline cell-padding declarations from the four data tables.
   Each had different inline values (9/10/12/14 px combinations) that
   overrode the unified .compare-page table SCSS rule. Tables now share
   identical row height and edge gutter.

2. Add 12px / 16px padding inside .slide-show__frame so the table is
   inset from the rounded card border. Fixes 'content glued to panel
   border' — row dividers and headers now terminate before the card
   edge instead of touching it.

3. Restyle FAQ items as rounded white cards on the section's gray
   surface, matching the 'Try it in your stack' card pattern. Each
   <details> is now its own white panel with a subtle border, 10 px
   gap between cards, and a hover/open border-color shift. Visually
   unifies the FAQ with the rest of the page (hero cards, demo cards,
   table panels — all rounded white cards on the dotted page).

Plus from earlier in this batch: managed-tier as primary CTA
(plus.anycable.io), JS demo cards section between Migration and
feature comparison.

- Add explicit padding-top/bottom: 64px to .compare-page .slide.about-slide
  (the global .about-slide rule zeroes section padding; this restores
  breathing room between consecutive sections and prevents right-column
  table cards from sitting flush with the section's top edge).
- Restructure FAQ DOM so the gray section background extends full-width
  while the .faq-block items stay centered at max-width 720px.
- Apply a single 56px top/bottom padding to both .about-slide and
  .cases-slide on the compare page so the cases sub-section between
  Pillar 1 and Pillar 2 shares the same rhythm and its content no
  longer sits flush against its top/bottom edges. Adjacent sections
  now have a consistent 112px gap instead of 128 + irregular zeros.
- Hero (80/60) and CTA (120/64) keep their distinctive paddings.
- Pillar 2: link "thundering herd" to the EM avalanche article on
  evilmartians.com (target=_blank rel=noopener).
- .faq-block: explicit width:100% so the flex parent doesn't size it
  to content (514px closed) and snap to 720 max-width when an item
  opens — that snap was the visible horizontal jump on click.
- slide-show__frame on compare page: bump vertical inset (16/8) so
  the table's first/last rows don't sit flush against the card edge.
- about-slide__media-align-top on compare page: 32px margin-top so
  right-column cards start meaningfully below the section's gray top
  rather than reading as flush.
The earlier 56px top/bottom padding on .about-slide / .cases-slide
created strips of dotted page bg between adjacent sections, breaking
the continuous gray left-column background. The user's actual concern
was tables glued to the *card* border — addressed by the
slide-show__frame internal padding, which stays. Section transitions
now flow cleanly section-to-section.

Also drop the .about-slide__media-align-top margin-top that pushed
right-column cards visibly below the section's top edge — without
the section padding above it, that offset misaligned the card from
its column's natural start.
Five additions designed to surface AnyCable for the queries our
buyers actually run, and to give LLMs / RAG pipelines extractable
content to lift verbatim:

- TL;DR / "three findings" block with on-page anchor nav, placed
  right under the hero stat cards. Numbered findings mirror the
  three pillars; nav exposes #delivery #deploys #capacity #migration
  #try-it #faq for citation.
- Three new FAQ entries (visible accordion + FAQPage JSON-LD):
  HIPAA / SOC2 self-hosting, cost vs Pusher / Ably, LLM token
  streaming. Closes the audit's biggest absent-from-LLM gaps.
- TechArticle JSON-LD with author / publisher / datePublished /
  about[] tags so Gemini's engineering-blog retrieval path can
  attach to the page.
- Trimmed Doximity engineering quote (public, from the podcast
  interview) replaces the bare logo list in "Proven at scale".
  Quote drops the Rails mention; substance is deploy resilience.
Establish one principle for vertical rhythm on the compare page:
padding lives on the columns (.about-slide__content and __media),
not on the slide itself. The slide stays unpadded so the gray
left-column background flows continuously between adjacent
sections; the columns each get 80px top + bottom padding so
content (FAQ items, Try-it cards, tables, code blocks) never
sits flush against a gray edge.

Reset .about-slide__title margin-top to 0 on the compare page
so the column padding handles top spacing instead of stacking
with the global 120px H2 margin.
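The column-not-slide rhythm rule, sketched in SCSS (selectors from the commit; only the 80px value is stated, the rest is an assumption):

```scss
// Rhythm principle: vertical padding lives on the columns, not the slide,
// so the gray left-column background flows continuously between sections.
.compare-page {
  .slide.about-slide {
    padding-top: 0;    // the slide itself stays unpadded
    padding-bottom: 0;
  }
  .about-slide__content,
  .about-slide__media {
    padding: 80px 0;   // columns own the breathing room
  }
  .about-slide__title {
    margin-top: 0;     // column padding handles top spacing,
  }                    // not the global 120px H2 margin
}
```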

Extract two repeated inline patterns to utility classes:
- .compare-quote-card — tinted-bg variant of slide-show__frame
  used for the Doximity quote panel
- .compare-try-it-grid — the JS demo cards grid
Tighten the default .slide-show__frame inset to "12px 0" since
table cells already own their horizontal padding.

Strip "padding: 0; overflow: hidden;" inline styles from the
four slide-show__frame elements so the SCSS rule actually
applies (those inline rules were silently overriding the card
padding fix from earlier commits).
- Local color tokens at the top of the file replace ~40 hard-coded
  hex values across the SCSS. Globals ($accentPrimaryColor,
  $backgroundPrimaryColor, $borderPrimaryColor, etc.) reused where
  they match.
- One root .compare-page block now nests every rule, so .faq-block,
  .hero-card, and the heading-anchor styles can no longer leak into
  unrelated pages that happen to use the same names.
- Two duplicate .compare-page { } blocks consolidated into one.
- Components renamed for consistent .compare- prefixing:
    .faq-block      → .compare-faq
    .faq-item       → .compare-faq__item
    .faq-answer     → .compare-faq__answer
    .hero-cards     → .compare-hero-cards
    .hero-card      → .compare-hero-card
    .hero-card__*   → .compare-hero-card__*
- Comments tightened to focus on WHY (the gotchas: flex-parent
  width snap on .compare-faq, column-not-slide rhythm rule, !important
  needed to override global slide bg, etc.).
- Components grouped in page-flow order for readability:
  background → rhythm → headings → hero cards → tldr → about
  callout → right-col card variants → try-it grid → tables → faq.

No visual changes — verified all components render identically.
- Remove the "About this comparison" methodology callout. Two
  explanation blocks (TL;DR + About) under the hero is too much;
  TL;DR carries the analytical summary, the hero's small "Open
  source — run it yourself" link covers reproducibility, and the
  customer line lives in the CTA footer.
- Rework the closing CTA so the offering matches reality:
    primary action  → Start Pro (2 months free)  →  plus.anycable.io/pro
    secondary       → Try free Managed            →  plus.anycable.io
    footnote line   → MIT / GitHub / customers / benchmark repo
  Self-hosted Pro is the monetized product; free Managed is the
  low-friction trial; open source is a trust signal best handled
  in the FAQ + a single GitHub link, not a button.
- Move the CTA's inline styles into a new .compare-cta block in
  compare.scss following the rest of the file's BEM conventions.
Cleans up duplicates left behind after c9efd66 added SVG
replacements for Tasktag and after the source-vs-public path
deduplication for Rangee. Active assets still in src/public/.
After running the new sharded benchmark across 4 Railway test-client
containers (each with its own source IP and ephemeral port pool),
anycable-go held 199,970 / 200,000 idle WebSockets on a single Pro-
tier instance — 8.3 GB of memory, 2.63% of 32 vCPU peak (~0.8 vCPU).
Memory tracked the predicted ~40 KB/conn linearly. anycable-go itself
had memory and CPU headroom remaining; we stopped because the headline
was already strong.

Updated:
- Hero stat card: 50,000 → 200,000 (+ memory and CPU figures)
- TL;DR Capacity finding
- Pillar 3 H2: "holds 200,000 idle connections on ~8 GB and ~0.8 vCPU"
- Pillar 3 body: replaces the "test-client ceiling at ~56K" caveat
  with the sharded methodology explanation
- Capacity table: adds 100K and 200K rows; demotes 50K from key row
  to scaling-line entry
- FAQ answer (visible HTML + FAQPage JSON-LD)
- llms.txt: capacity bullet + page summary

Bench source: irinanazarova/anycable-socketio-benchmarks ce9adb7+
(see lib/idle-runner.ts and bench/idle-multi.ts).
Pillar 3 H2 read 'AnyCable holds 200,000 idle connections...' without
specifying topology. Now reads 'One AnyCable node holds…' and the
body opens with 'single anycable-go instance.' Other mentions on the
page (hero card, TL;DR, FAQ) already said 'single instance' — this
brings the headline into line.

Multi-node / cluster numbers are a separate benchmark for a future
write-up; flagging this as one-node keeps the current claim honest.
Re-ran the same 200K idle benchmark against the AnyCable Pro v1.6.13
binary on identical Railway Pro-tier infrastructure (32 vCPU / 32 GB,
broker preset, memory broker, one stream subscription per client).

  | binary   | conns held       | memory peak | CPU peak  |
  | -------- | ---------------- | ----------- | --------- |
  | OSS      | 199,970/200,000  | 8.35 GB     | 2.63%     |
  | Pro      | 199,989/200,000  | 3.56 GB     | 1.94%     |

That's ~17.8 KB/conn for Pro vs ~42 KB/conn for OSS — beats Pro's
"2x more efficient" marketing claim and lines up with the CTA's
self-hosted Pro framing. Updated:

- Pillar 3 (capacity) body: notes the OSS test, then adds a Pro
  paragraph framing the efficiency delta as "twice the connection
  headroom on the same box."
- Capacity table: 200K row split into OSS (8.35 GB) and Pro (3.56 GB).
- FAQ "How does AnyCable compare on performance?" answer + JSON-LD
  mirror the new Pro datapoint.
- llms.txt capacity bullet mentions both OSS and Pro numbers.
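The per-connection figures follow directly from the table above; a quick back-of-envelope check in the bench repo's language (decimal GB and KB, matching the commit's ~42 KB OSS and ~17.8 KB Pro numbers):

```typescript
// Per-connection memory from the 200K idle run (decimal GB -> KB/conn),
// rounded to one decimal as in the commit message.
function kbPerConn(memoryGB: number, connections: number): number {
  return Math.round(((memoryGB * 1e9) / connections / 1000) * 10) / 10;
}

const ossKb = kbPerConn(8.35, 199_970); // ~41.8 KB/conn ("~42")
const proKb = kbPerConn(3.56, 199_989); // ~17.8 KB/conn
```

The resulting ratio (~2.35×) is what lets the commit claim the measured gap beats Pro's "2x more efficient" marketing line.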
Pushed AnyCable Pro on the same single Pro-tier Railway box to its
ceiling: 999,954 of 1,000,000 idle WebSocket connections, peak
memory 19.3 GB, peak CPU 9.4% of 32 vCPU. 25 test-client shards
× 40K each, single anycable-go-pro process, no Redis/NATS backplane.

Updated:
- Pillar 3 body: adds a Pro 1M paragraph as a "comfortably past a
  million" footnote — the page's lede stays delivery + deploy
  resilience, this is supporting evidence about the connection
  ceiling for readers who go looking.
- Capacity table: appends a 1,000,000 Pro row at the bottom as the
  new "key" highlighted entry; demotes 200K Pro to a regular
  scaling-line entry (it remains the apples-to-apples-with-OSS row).
- FAQ "How does AnyCable compare on performance?" answer + JSON-LD
  mirror the new datapoint.
- llms.txt capacity bullet: appends the Pro 1M number.

Hero stat card #3 still shows the 200K OSS number — that's the
right apples-to-apples-with-Socket.io comparison number; 1M Pro
is a "by the way" rather than the headline.
Hero stat card #3 used to show only the open-source 200K number;
now it shows OSS 200K alongside Pro 1M (with the Pro memory/CPU
detail), since Pro is the headline ceiling per single instance:

  AnyCable (open source)     200,000
  AnyCable Pro             1,000,000
  Pro memory / CPU used    19 GB / ~3 vCPU

Also: harden .gitignore so Claude Code session artifacts
(.claude/, .playwright-mcp/, root-level *.png/*.jpeg, scratch
TODO/audit MDs, root package-lock.json) can't get accidentally
swept into commits via `git add -A`. The previous commit on this
branch (since reset) added 128 such files; this prevents
recurrence.
Ran the same 1M-connection test (25 shards × 40K each) against the
single-instance Socket.io we already had on Railway, with Node heap
bumped to 30 GB. Result: 119,826 connections accepted, 880,174
rejected during ramp. Memory peak was only 6.3 GB — Socket.io
didn't run out of RAM, the single Node event loop saturated
processing handshakes at ~580/sec while shards were attempting
~5,000/sec aggregate.
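The accept-rate explanation is consistent with the accepted count; a back-of-envelope check (rates from the commit; the simplifying assumption that rejected clients did not successfully retry during the ramp is ours):

```typescript
// Sanity check on the event-loop-saturation explanation: the fraction
// of handshakes one Node event loop can accept should roughly match
// the measured accepted fraction.
const acceptRatePerSec = 580;    // single Node event loop, handshakes/sec
const attemptRatePerSec = 5_000; // aggregate across the test-client shards
const acceptFraction = acceptRatePerSec / attemptRatePerSec; // = 0.116

const measuredFraction = 119_826 / 1_000_000; // ~0.12 accepted during ramp
```

The two fractions agreeing within half a percentage point supports "event loop saturated" over "ran out of RAM" (memory peaked at only 6.3 GB).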

That's the architectural answer made measurable: ~120K is the
single-process Socket.io ceiling under aggressive parallel ramp;
AnyCable Pro held 1M on the same hardware (8.3× more), and
anycable-go's Go runtime simply spreads the upgrade work across
all 32 vCPU instead of one Node event loop.

Updated:
- Hero stat card #3 now reads as a 3-way comparison: Socket.io
  119,826 (red, 'is-worst') / OSS 200,000 / Pro 1,000,000.
  Footnote explains the test setup.
- Pillar 3 body adds a paragraph explaining why Socket.io
  throttled (Node single event loop) and what would be needed
  to actually reach 1M (Redis-adapter cluster of Node processes).
- Capacity table appends a Socket.io ceiling row (highlighted
  red) for direct visual comparison alongside the AnyCable rows.
- FAQ "How does AnyCable compare on performance?" answer + JSON-LD
  mirror the Socket.io datapoint.
- llms.txt capacity bullet incorporates the measured Socket.io
  number and the "needs Redis adapter cluster to reach 1M" note.
Now that all three have been pushed to the same 1M-connection idle
target on identical Railway hardware, the page leads with the
direct comparison — and the memory-efficiency story between OSS
and Pro is now explicit.

  Same 1,000,000-target idle test, same single 32 GB Pro tier:
    Socket.io          119,826  (single Node event loop saturated)
    AnyCable OSS       993,994  (RAM-bound, hit 32 GB ceiling)
    AnyCable Pro       999,954  (19 GB used, 13 GB headroom remained)

OSS uses ~33 KB/conn at 1M; Pro uses ~19 KB/conn. Pro is roughly
1.7× more memory-efficient at this scale and ~2.4× at smaller loads
(the gap holds at every scale we measured; it narrows at the top end).

Updated:
- Hero stat card #3: OSS row goes from 200K to 993,994; footnote
  now explains all three numbers in one line ("OSS hit 32 GB
  ceiling, Pro held same load on 19 GB, Socket.io saturated at
  ~120K").
- TL;DR Capacity bullet rewritten as the 3-way comparison.
- Pillar 3 H2 now reads "1,000,000 idle connections — Pro on
  19 GB, OSS on 32 GB"; body restructured into 4 paragraphs
  (intro / Socket.io / OSS / Pro) plus a short closing.
- Capacity table: replaces the OSS 200K key row with the OSS 1M
  row (32 GB ceiling marked); Pro 1M row stays as the highlighted
  green key row.
- FAQ "How does AnyCable compare on performance?" and JSON-LD
  mirror the new 3-way story.
- llms.txt capacity bullet rewritten as a single dense 3-way
  comparison so retrieval surfaces all three numbers.
Six tightening edits surfaced by an end-to-end review:

1. Refreshed pageDescription so Google snippets and ChatGPT preview
   cards lead with the strongest line we now have ("Socket.io 120K /
   OSS 994K / Pro 1M on the same Railway box, plus 100% delivery").
   Also refreshed pageTitle to mention 1M / single node.
2. Trimmed Pillar 3 body from 5 paragraphs to 3. The 1M/Pro story
   was starting to dominate the page; now the capacity pillar reads
   in three beats — intro, three walls (Socket.io / OSS / Pro), and
   the memory-efficiency comparison.
3. Capacity table collapsed from 9 rows to 6: kept 10K (sanity), 200K
   (OSS vs Pro side-by-side at scale), and 994K-OSS / 999,954-Pro /
   119,826-Socket.io as the three "wall" rows. Dropped the small-scale
   1K/20K/50K/100K rows that diluted the headline.
4. Migration section ends with an explicit 4-bullet checklist —
   "what changes in your code" — for senior engineers who want the
   irreducible list rather than prose paragraphs.
5. TechArticle JSON-LD: refreshed `headline` and `description` to
   reflect the 3-way capacity story; added `about[]` tags for "Node.js
   WebSocket scaling bottleneck", "1M WebSocket connections single
   instance", "AnyCable Pro vs open source", "AnyCable Pro memory
   efficiency", and "WebSocket deploy resilience". Bumped dateModified
   to 2026-05-03.
Two visualizations added so the structural argument and the
benchmark evidence land instinct-level rather than purely through
prose and tables:

1. Pillar 2 architecture diagram. A 720x380 inline SVG showing the
   structural difference: Socket.io's "Node.js process containing
   both your app and the WebSocket hub" vs AnyCable's "your app |
   anycable-go as separate processes, broadcast over HTTP." Sits at
   the top of Pillar 2's left column with the existing H3 +
   paragraph explanations below as text reinforcement.

2. Pillar 3 memory + CPU chart. The actual ASCII line chart from
   bench-runner's Railway-metrics polling during the 1M Pro run
   (peak 19.34 GB, peak CPU 9.37%). Embedded as a captioned <figure>
   below the capacity table in Pillar 3's right column, in the dark
   monospace style we use for code blocks. Same number lands twice
   — once precise in the table, once intuitive in the chart.

Both styled via .compare-arch-diagram and .compare-bench-chart
blocks added to compare.scss; no JS, no extra runtime deps, no
design-asset maintenance.
Vladimir's review pushed us to use a hard-TCP terminate + the @anycable/core
Monitor's natural reconnect-with-backoff loop (instead of a clean
cable.disconnect/connect, which bypassed backoff). After re-running with
the new path, AnyCable's tail latency at 10K is honestly higher than the
old measurement: ~6 s p99, ~9 s max — still ~1.7 s ahead of Socket.io+CSR
on the tail, but no longer the "7× faster" we were claiming.

Pivot the headline narrative: both AnyCable and CSR deliver 100% under
jitter; the differentiator is server cost. AnyCable Pro holds the same
10K reconnecting fleet on ~271 MB peak memory vs ~627 MB for CSR — about
2.3× less. Replace the "Replay latency p99" hero card with "Server memory
at 10,000 clients" so the new pillar is visible above the fold.

Number changes (today's apples-to-apples run on the same Railway box):
- Default Socket.io: 86.88% delivery, 156,856 lost (was 87.41%, 150,642)
- CSR p99 7.86 s, max 10.79 s (was 8.99 s, 12.03 s)
- AnyCable p99 6.18 s, max 9.34 s (was 1.04 s, 3.53 s)
- New row: server memory peak 675 / 627 / 760 MB (default / CSR /
  AnyCable OSS; Pro: 271 MB)
- Section title: "AnyCable delivers every message — with the lowest server cost"
- "Where the replay tail comes from" paragraph honestly attributes most of
  the tail to client-side reconnect backoff (both clients use ~0.5–5 s
  jitter); the ~1.7 s gap is the per-stream parallel batch replay vs CSR's
  per-socket serial drain.

Numbers in the table are 1-decimal for readability; precise values stay
in the bench JSON output.
irinanazarova requested a review from palkan · May 5, 2026 15:33
The previous benchmark sub-section showed 1K vs 5K downtime on a single
box (a single architectural data point). With Vladimir's listener-
attach fix landed, we re-ran avalanche scaling on a constrained box
(1 vCPU / 0.5 GB) — the realistic small-tier case where the cliff
shows up.

Replaces the 1K/5K table with a 5K → 25K scaling table:
- 5K: 4.5 s recovery, 100% reconnect
- 10K: 3.9 s, 100%
- 15K: 5.8 s, 98.5% (224 lost)
- 20K: 8.0 s, 96.2% (753 lost)
- 25K: never recovers — 0% reconnect, all 25K permanently lost

The 25K row is the architectural cliff: memory hits ~95% of the cap
just before redeploy, then the post-redeploy reconnect storm OOMs the
new container. Confirmed on a clean isolated re-run.

Updates the inline timeline illustration to show both the graceful
case (5K) and the cliff (25K) on the same box — the contrast is the
point. AnyCable column reads "0 s" at every scale because the
WebSocket layer doesn't restart when the app does.

Section title: "the avalanche scales — until it doesn't".
Two passes on the prose:

1. Tightening — read each paragraph, kept the nuance, removed any word
   the meaning didn't lean on. Hero subtitle, TL;DR bullets, every
   per-protocol bullet in Pillar 1, the structural-loss math paragraph,
   Pillar 2 architecture text, the avalanche-cliff narrative, Pillar 3
   capacity prose. ~10–25% shorter per paragraph; same numbers, same
   nuance, less reader-time spent.

2. New section in Pillar 1: "Why a 1-second blip becomes a multi-second
   tail." Surfaces what the trace data showed — terminate → cable
   connect is p99 ~8 s, dominated entirely by the 0.5–5 s reconnect
   backoff that every realtime client uses to avoid stampeding the
   server. Implication, made explicit: any TCP-level disruption fans
   out to multiple seconds of offline window by design — without
   delivery guarantees those seconds are lost messages, with replay
   they're delayed messages. Same backoff cost, completely different
   user experience.

   Replaces the previous "Where the replay tail comes from at 10K"
   subsection which only described the protocol gap (~1.7 s CSR/AC
   delta). The protocol-gap explanation is preserved as a second
   paragraph so the reader still sees both pieces.
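This is not the actual @anycable/core Monitor source, but the reconnect-backoff shape the section describes can be sketched generically (the 0.5–5 s jitter window comes from the text; the doubling growth factor is an assumption):

```typescript
// Generic reconnect backoff with jitter: exponential growth capped at
// 5 s, randomized so thousands of clients don't stampede the server.
// Illustrates why any TCP-level disruption fans out to a multi-second
// offline window by design — even the first retry waits at least 0.5 s.
function reconnectDelayMs(
  attempt: number,
  random: () => number = Math.random
): number {
  const minMs = 500;
  const capMs = 5_000;
  // Ceiling doubles per attempt: 500, 1000, 2000, 4000, then capped at 5000.
  const ceiling = Math.min(capMs, minMs * 2 ** attempt);
  // Uniform jitter in [minMs, ceiling).
  return minMs + random() * (ceiling - minMs);
}
```

With replay (AnyCable broker or CSR) those jittered seconds are delayed messages; without it they are lost messages: same backoff cost, different outcome.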

Rest of the prose pass:
- Impact section: trimmed redundant words ("still"/"the experience" → "the flow").
- Live-chat use-case card: comma instead of "with no" preposition.
- Try-it intro: replaced "depending on whether you want to" with a colon.
- Feature comparison header: dropped "production" before "hardening" (implicit).
- "When Socket.io is right" closing line: "AnyCable provides both by default" → "AnyCable: both by default."
- Migration trade-off: removed "to deploy"/"a single"/"In return" filler.
- Capacity hero card footnote: dropped doubled "box" and trailing "remaining".

No information loss; same numbers, same nuance. The page now reads at the
density the rest of it does.
Right-column tables, code samples, scaling table, and feature matrix
were hidden below 1024px (about-slide__media: display:none), so mobile
and tablet visitors were seeing only the prose — roughly half the page.

Scoped under .compare-page so the homepage's existing mobile behavior
is unchanged.

- Section flips to flex-direction: column at ≤tablet so media stacks
  below content; media regains display:flex with full width
- slide-show__frame switches from overflow:hidden to overflow-x:auto
  on mobile so wide tables scroll horizontally inside the rounded
  card; same on <pre> blocks for code samples
- compare-try-it-grid + compare-bench-chart get min-width:0 + width
  100% to break out of the flex-min-content trap
- inline <code> tokens (io.to(room).emit(...)) wrap on mobile via
  white-space:normal + overflow-wrap:anywhere

Verified clean at 375 / 580 / 688 / 768 / 820 / 1023 / 1024 / 1280.
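The mobile fixes above, sketched in page-scoped SCSS (the `mediaMax($tablet)` mixin exists in the codebase per earlier commits; the exact selectors are assumptions):

```scss
// Sketch: restore the right column on mobile instead of hiding it.
.compare-page {
  @include mediaMax($tablet) {
    .slide.about-slide {
      flex-direction: column; // media stacks below content
    }
    .about-slide__media {
      display: flex;          // undo the global display:none
      width: 100%;
    }
    .slide-show__frame,
    pre {
      overflow-x: auto;       // wide tables/code scroll inside the card
    }
    .compare-try-it-grid,
    .compare-bench-chart {
      min-width: 0;           // escape the flex min-content trap
      width: 100%;
    }
    code {
      white-space: normal;
      overflow-wrap: anywhere; // long inline tokens wrap, not overflow
    }
  }
}
```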
Addresses the "Socket.io is old, just use uWS" pushback we hear from
founders comparing AnyCable to Socket.io. The "10× faster" claim is
genuinely true on the wire (server memory, replay latency, idle
capacity), so we acknowledge it head-on with measured numbers, then
show where the wire isn't the bottleneck.

New section between Pillar 1 and the Impact slide:

- Lead acknowledges the wire-speed claim and presents the measured
  wins: 72 MB at 10K, sub-second p99 latency, 1.018M idle on 5.45 GB
  (5.35 KB/conn — beats AnyCable Pro's 19 KB/conn at the same scale).
- Comparison table at 10K under jitter (5 columns: uWS, Socket.io,
  AnyCable OSS, AnyCable Pro) — splits OSS/Pro per the prior request.
- "But the wire is rarely the bottleneck" subsection: replay-less
  uWS loses ~14% — same loss profile as Socket.io without CSR.
- Avalanche scaling table on the same 0.5 GB / 1 vCPU box used for
  Pillar 2's Socket.io cliff: side-by-side ladder shows uWS doubles
  the clean-recovery threshold (10K → 20K) and pushes the survivable
  cliff from 25K to ~90K, but at the high end of uWS's capacity the
  reconnect storm itself becomes the user experience (151s p50 at 50K).
- Closing point: uWS solves "wire is too heavy"; AnyCable solves
  "delivery during disruption" + "deploy resilience" — different
  layers of the stack.

Also adds the section to the TL;DR on-page nav so someone scanning
the page can jump straight to the uWS comparison.
The dedicated uWS section was carrying the full comparison story alone;
this weaves the data into the rest of the page so the reader gets a
consistent picture from hero through TL;DR through pillars and FAQ.

What changed:

- New "Where the differences come from" section between TL;DR and
  Pillar 1 — sets up the architecture mental model (where the WS
  layer runs, whether the protocol carries replay state) so every
  benchmark number that follows is predictable.
- Hero subtitle reframed to lead on delivery + deploy resilience
  (where AnyCable wins decisively) instead of memory (where uWS now
  wins on bare-wire density).
- All three hero cards updated to 4 rows including uWS:
  • Delivery rate: shows uWS at 86%, paired with Default Socket.io
    as the no-replay losers
  • Server memory: shows uWS at 72 MB with a † footnote noting it
    doesn't include a replay buffer; AnyCable Pro stays positioned
    as "lightest setup that delivers 100%"
  • Idle capacity: shows uWS at 1,018,366 / 5.4 KB per conn, AnyCable
    Pro at 999,954 / 19 KB per conn — both reach 1M, framing the
    AnyCable footprint as "broker overhead, not bloat"
- TL;DR bullets rewritten to weave uWS in naturally: replay-less
  setups lumped together on delivery; deploy-fragile architectures
  lumped together; capacity bullet shows uWS leading on bare wire
  but the trade-off is broker features.
- Pillar 1 (Delivery) lead paragraph adds uWS's ~14% loss alongside
  Socket.io default's ~13%.
- Pillar 3 (Capacity) lead + table updated: 4-way comparison with
  per-connection memory + replay column, header changed to clarify
  that AnyCable's heavier footprint is the broker overhead.
- Feature comparison table gets a uWebSockets.js column — every
  framework feature beyond raw transport is "DIY" or "No", same
  as Default Socket.io. The diagnostic point that uWS solves
  Socket.io's wire problem but not its framework gaps.
- "What you don't have to build" intro rewritten to cover uWS too.
- New FAQ entry "What about uWebSockets.js?" with the same data,
  mirrored into the JSON-LD FAQPage block.
- Page title, meta description, TechArticle JSON-LD, and SEO
  keywords updated to surface uWS comparison terms ("uWebSockets.js
  vs Socket.io", "uWebSockets.js vs AnyCable") so visitors searching
  for those queries find this page.
- TL;DR on-page nav gets an "Architecture" link.
- dateModified bumped to 2026-05-08.
386 inline style="" attrs → 16. Reach for design tokens (compare.scss
local + global variables.scss) before hard-coded hex; reach for BEM
blocks before utility classes; reach for utilities before inline.

New BEM blocks: compare-hero, compare-data-table (+ --compact modifier
for the feature matrix), compare-data-table__footnote, compare-bullet-list,
compare-code-block, compare-try-card, compare-quote-card__*. Utility
set: t-eyebrow, t-mute/meta/quiet/strong/accent/best, t-num, t-tiny,
t-bold; code tints c-key/str/err/com; row modifiers is-row-best/worst;
auto-spacing for sequential h3.about-slide__subtitle (kills the 11×
margin-top: 32px). Code palette tokens: $compare-code-keyword/string/
error/comment plus $compare-best.

Architecture contract documented at the top of compare.scss so the
next person reaches for the right tool. Visual diff: identical.