The like weight decreases as a user likes more items within a time window:
w(u) = 1 / (1 + α × (n - 1))
Where:
- w(u): Weight for user u's like (0.0 to 1.0)
- α: Decay coefficient (default: 0.05)
- n: Number of likes in window (likeWindowCount)
Examples:
- 1st like: w = 1 / (1 + 0.05 × 0) = 1.0
- 10th like: w = 1 / (1 + 0.05 × 9) = 0.690
- 20th like: w = 1 / (1 + 0.05 × 19) = 0.513
- 100th like: w = 1 / (1 + 0.05 × 99) = 0.168
If user exceeds threshold within 30 seconds, apply penalty:
w_final = w_base × penaltyMultiplier (if rapid)
= w_base (otherwise)
Where:
- rapidThreshold: Default 50 likes in 30s
- penaltyMultiplier: Default 0.1 (90% reduction)
Purpose: Prevent spam/bot behavior while keeping likes pressable.
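The decay and rapid-like penalty above can be sketched in TypeScript as follows; the function and constant names are illustrative, not taken from the actual codebase:

```typescript
// Illustrative sketch of like-weight decay plus the rapid-like penalty.
const ALPHA = 0.05;             // decay coefficient α
const RAPID_THRESHOLD = 50;     // likes within the 30s window
const PENALTY_MULTIPLIER = 0.1; // 90% reduction when rapid

// n is the 1-based index of this like within the time window.
function likeWeight(n: number): number {
  return 1 / (1 + ALPHA * (n - 1));
}

function finalLikeWeight(n: number, likesLast30s: number): number {
  const base = likeWeight(n);
  return likesLast30s > RAPID_THRESHOLD ? base * PENALTY_MULTIPLIER : base;
}
```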
Given current state, predict next like's weight:
w_next = 1 / (1 + α × n)
Used for UI hints (e.g., "Your next like will have 68% support power").
Maps CR score (0.1-10.0) to like weight multiplier (0.5-2.0):
x = log₁₀(CR_clamped / CR_min) / log₁₀(CR_max / CR_min)
CR_m = 0.5 + 1.5 × x_clamped
Where:
- CR_clamped: CR value clamped to [CR_min, CR_max]
- CR_min: 0.1 (default)
- CR_max: 10.0 (default)
- x_clamped: x clamped to [0, 1]
Examples:
- CR = 0.1: x = 0.0 → CR_m = 0.5
- CR = 1.0: x = 0.5 → CR_m = 1.25
- CR = 10.0: x = 1.0 → CR_m = 2.0
Logarithmic scaling ensures:
- Low CR (0.1-1.0): Multiplier 0.5-1.25 (penalty zone)
- Medium CR (1.0-3.0): Multiplier 1.25-1.6 (normal zone)
- High CR (3.0-10.0): Multiplier 1.6-2.0 (bonus zone)
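A minimal sketch of the logarithmic CR-to-multiplier mapping, using the defaults above (names are assumptions):

```typescript
// Maps CR in [0.1, 10.0] to a like-weight multiplier in [0.5, 2.0].
const CR_MIN = 0.1;
const CR_MAX = 10.0;

const clamp = (lo: number, hi: number, v: number): number =>
  Math.max(lo, Math.min(hi, v));

function getCRMultiplier(cr: number): number {
  const crClamped = clamp(CR_MIN, CR_MAX, cr);
  // Logarithmic position of CR within [CR_MIN, CR_MAX], mapped to [0.5, 2.0].
  const x = Math.log10(crClamped / CR_MIN) / Math.log10(CR_MAX / CR_MIN);
  return 0.5 + 1.5 * clamp(0, 1, x);
}
```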
After curation event:
Δ = event_weight × outcome_score
CR_new = CR_old + learning_rate × Δ
Then apply time decay:
decay_factor = 0.5^(days_since_last / halfLifeDays)
CR_decayed = CR_base + (CR_current - CR_base) × decay_factor
Where:
- CR_base: 1.0 (neutral point)
- halfLifeDays: 90 days (default)
- Decay pulls CR toward 1.0 over time
Event weights (default):
- Note adopted: +0.15
- Bridge success: +0.25
- Stake success: +0.20
- Stake failure: -0.15
- Spam flag: -0.30
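The update and decay steps might look like the sketch below. The event-weight table matches the defaults above; `LEARNING_RATE` has no stated default in this section, so the value here is only a placeholder:

```typescript
// Sketch of the CR update followed by time decay toward the neutral point.
const CR_BASE = 1.0;        // neutral point
const HALF_LIFE_DAYS = 90;  // decay half-life (default)
const LEARNING_RATE = 1.0;  // placeholder, not a documented default

const EVENT_WEIGHTS: Record<string, number> = {
  noteAdopted: 0.15,
  bridgeSuccess: 0.25,
  stakeSuccess: 0.2,
  stakeFailure: -0.15,
  spamFlag: -0.3,
};

function updateCR(crOld: number, event: string, outcomeScore: number): number {
  const delta = (EVENT_WEIGHTS[event] ?? 0) * outcomeScore;
  return crOld + LEARNING_RATE * delta;
}

// Pulls CR back toward CR_BASE as days pass.
function decayCR(crCurrent: number, daysSinceLast: number): number {
  const decayFactor = Math.pow(0.5, daysSinceLast / HALF_LIFE_DAYS);
  return CR_BASE + (crCurrent - CR_BASE) * decayFactor;
}
```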
CP issuance diminishes with frequent events:
multiplier = 1 / (1 + rate × count)
But never below minMultiplier:
m_final = max(minMultiplier, multiplier)
Where:
- rate: 0.05 (default)
- minMultiplier: 0.2 (default, 20% floor)
Example (noteAdopted events in 24h):
- 1st: m = 1.0 → CP = 10
- 5th: m = 0.71 → CP = 7.1
- 10th: m = 0.5 → CP = 5.0
- 50th: m = 0.2 (floor) → CP = 2.0
High CR earns bonus CP:
CP_multiplier = min(1.1, 1.0 + 0.1 × (CR - 1.0))
Capped at 1.1 (10% bonus max).
CP = base_amount × diminishing_multiplier × CR_multiplier
Example:
- Base: 10 CP
- Diminishing (5th event): 0.71
- CR (2.0): 1.1
- Final:
10 × 0.71 × 1.1 = 7.81 CP
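Putting the diminishing multiplier, floor, and CR bonus together (a sketch with assumed names; it also assumes the 1st event in the window corresponds to `count = 0`, which the examples above do not pin down):

```typescript
// End-to-end CP issuance sketch.
const CP_RATE = 0.05;       // diminishing rate (default)
const MIN_MULTIPLIER = 0.2; // 20% floor

function diminishingMultiplier(count: number): number {
  return Math.max(MIN_MULTIPLIER, 1 / (1 + CP_RATE * count));
}

// High CR earns at most a 10% bonus.
function crBonusMultiplier(cr: number): number {
  return Math.min(1.1, 1.0 + 0.1 * (cr - 1.0));
}

function issueCP(base: number, count: number, cr: number): number {
  return base * diminishingMultiplier(count) * crBonusMultiplier(cr);
}
```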
Stake succeeds if totalScore > threshold (default: 0.5):
supportDensityScore = Δ_supportDensity
breadthScore = Δ_breadth / 5
contextScore = Δ_contextCount / 10
crossClusterScore = crossClusterReactions / 10
totalScore = 0.4 × supportDensityScore
+ 0.3 × breadthScore
+ 0.2 × contextScore
+ 0.1 × crossClusterScore
Rewards:
- Success: Unlock stake + bonus (20% of stake default)
- Failure: Slash (30% of stake default)
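The weighted stake score above can be sketched as follows (field names are assumptions for the deltas measured over the stake window):

```typescript
// Stake outcome score: weighted sum of normalized deltas.
interface StakeDeltas {
  supportDensity: number;        // Δ_supportDensity
  breadth: number;               // Δ_breadth
  contextCount: number;          // Δ_contextCount
  crossClusterReactions: number;
}

const STAKE_THRESHOLD = 0.5; // default success threshold

function stakeTotalScore(d: StakeDeltas): number {
  return (
    0.4 * d.supportDensity +
    0.3 * (d.breadth / 5) +
    0.2 * (d.contextCount / 10) +
    0.1 * (d.crossClusterReactions / 10)
  );
}

function stakeSucceeds(d: StakeDeltas): boolean {
  return stakeTotalScore(d) > STAKE_THRESHOLD;
}
```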
Combines CR and CP earned (90d):
CR_m = getCRMultiplier(CR)
CP_m = clamp(1.0, 1.2, 1.0 + 0.2 × log₁₀(1 + CP_90d / 50))
view_weight = clamp(0.2, 2.0, CR_m × CP_m)
Examples:
- CR=1.0, CP=0: CP_m = 1.0, w = 1.25 × 1.0 = 1.25
- CR=2.0, CP=100: CP_m ≈ 1.10, w ≈ 1.48 × 1.10 ≈ 1.62
- CR=5.0, CP=500: CP_m = 1.2, w = clamp(0.2, 2.0, 1.77 × 1.2) = 2.0 (capped)
Purpose: Engaged curators' views count more toward visibility.
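A self-contained view-weight sketch; it repeats the CR multiplier formula from earlier so the block runs on its own, and all names are assumptions:

```typescript
const clamp = (lo: number, hi: number, v: number): number =>
  Math.max(lo, Math.min(hi, v));

// Same logarithmic CR → multiplier mapping defined earlier in this document.
function crMultiplier(cr: number): number {
  const c = clamp(0.1, 10.0, cr);
  const x = Math.log10(c / 0.1) / Math.log10(10.0 / 0.1);
  return 0.5 + 1.5 * clamp(0, 1, x);
}

// Combines CR multiplier with a CP-earned (90d) multiplier, clamped to [0.2, 2.0].
function viewWeight(cr: number, cp90d: number): number {
  const crM = crMultiplier(cr);
  const cpM = clamp(1.0, 1.2, 1.0 + 0.2 * Math.log10(1 + cp90d / 50));
  return clamp(0.2, 2.0, crM * cpM);
}
```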
Ratio of weighted likes to qualified views, with Bayesian smoothing:
support_density = (weightedLikeSum + priorLikes)
/ (qualifiedUniqueViewers + priorViews)
Default priors: priorLikes = 1, priorViews = 10
Can be raised to power β for sensitivity adjustment:
support_density_adj = support_density^β
Default β = 1.0 (no adjustment).
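A sketch of the smoothed support density; the priors guarantee a non-zero denominator, so the ratio is always finite:

```typescript
// Bayesian-smoothed support density with optional sensitivity exponent β.
const PRIOR_LIKES = 1;
const PRIOR_VIEWS = 10;

function supportDensity(
  weightedLikeSum: number,
  qualifiedUniqueViewers: number,
  beta = 1.0 // β = 1.0 means no adjustment
): number {
  const raw =
    (weightedLikeSum + PRIOR_LIKES) / (qualifiedUniqueViewers + PRIOR_VIEWS);
  return Math.pow(raw, beta);
}
```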
Raw conversion rate (unique likers / unique viewers):
support_rate = (uniqueLikers + priorUniqueLikers)
/ (qualifiedUniqueViewers + priorViews)
Default: priorUniqueLikers = 1
Ratio without upper clamp (can exceed 1.0):
WSI = (weightedLikeSum + priorLikes)
/ (qualifiedUniqueViewers + priorViews)
Same as WSI but clamped to [0, 1]:
WSR_clamped = min(1.0, WSI)
Uses Shannon entropy to calculate effective cluster count:
H = -Σ p_i × log(p_i) (Shannon entropy)
breadth = e^H (Effective cluster count)
Where:
- p_i is the proportion of support from cluster i (after normalizing)
- Clusters aggregated: top 49 kept, remaining summed into "other"
Interpretation:
- Breadth ≈ 1: All support from one cluster (H ≈ 0)
- Breadth ≈ 5: Support evenly spread across ~5 clusters (H ≈ ln(5) ≈ 1.6)
- Breadth > 10: Broad cross-cluster appeal
Levels (algorithm.md specification):
- low: breadth < 3
- medium: 3 ≤ breadth < 5
- high: breadth ≥ 5
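The entropy-based breadth calculation can be sketched as below; the top-49 "other" aggregation described above is omitted for brevity, and the zero-support convention is an assumption:

```typescript
// Effective cluster count via Shannon entropy (natural log): breadth = e^H.
function breadth(clusterWeights: number[]): number {
  const total = clusterWeights.reduce((a, b) => a + b, 0);
  if (total <= 0) return 1; // assumed convention: no support ⇒ one effective cluster
  let h = 0;
  for (const w of clusterWeights) {
    if (w <= 0) continue;
    const p = w / total;
    h -= p * Math.log(p); // Shannon entropy term
  }
  return Math.exp(h);
}
```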
Estimate how long reactions continue:
persistence = daysSinceFirst × recentRate
With time decay adjustment:
decay_factor = 0.5^(daysSinceFirst / halfLifeDays)
persistence_adj = persistence × decay_factor
Default halfLifeDays = 14.
Persistence levels:
- Low: < 1 day
- Medium: 1-7 days
- High: > 7 days
What fraction of support comes from the largest cluster:
top_cluster_share = max(w_c) / Σ w_c
Interpretation:
- 1.0: All from one cluster (narrow)
- 0.5: Top cluster is 50% (moderate)
- 0.2: Well distributed (broad)
PRS = {
1.0 if prsSource = 'saved'
0.8 if prsSource = 'liked'
0.6 if prsSource = 'following'
0.0 otherwise (unknown)
}
No complex modeling; direct signal from user history.
Weighted sum of 5 components:
CVS = w_like × like
+ w_context × context
+ w_collection × collection
+ w_bridge × bridge
+ w_sustain × sustain
Default weights (from algorithm.md v1.0):
- w_like = 0.40
- w_context = 0.25
- w_collection = 0.20
- w_bridge = 0.10
- w_sustain = 0.05
Each component normalized to [0, 1].
Like component:
like = min(1.0, weightedLikeSum / 100)
Context component:
context = min(1.0, contextNoteCount / 20)
Collection component:
collection = min(1.0, collectionSaveCount / 50)
Bridge component:
bridge = min(1.0, crossClusterEngagement / 10)
Sustain component:
sustain = min(1.0, persistenceDays / 30)
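The five components combine into CVS as sketched below, using the default weights and normalization caps above (the interface names are assumptions):

```typescript
// Cultural Value Score: weighted sum of five [0, 1] components.
interface CVSInputs {
  weightedLikeSum: number;
  contextNoteCount: number;
  collectionSaveCount: number;
  crossClusterEngagement: number;
  persistenceDays: number;
}

function cvs(i: CVSInputs): number {
  const like = Math.min(1, i.weightedLikeSum / 100);
  const context = Math.min(1, i.contextNoteCount / 20);
  const collection = Math.min(1, i.collectionSaveCount / 50);
  const bridge = Math.min(1, i.crossClusterEngagement / 10);
  const sustain = Math.min(1, i.persistenceDays / 30);
  return (
    0.4 * like + 0.25 * context + 0.2 * collection + 0.1 * bridge + 0.05 * sustain
  );
}
```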
finalScore = w_prs × PRS + w_cvs × CVS
Default weights:
- w_prs = 0.70
- w_cvs = 0.30
Then apply penalty:
penalty = spamSuspect ? 0.5 : 1.0
finalScore_penalized = finalScore × penalty
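The blend and spam penalty can be sketched as (names assumed):

```typescript
// Final ranking score: weighted PRS/CVS blend with a spam penalty.
const W_PRS = 0.7;
const W_CVS = 0.3;
const SPAM_PENALTY = 0.5;

function finalScore(prs: number, cvsScore: number, spamSuspect: boolean): number {
  const score = W_PRS * prs + W_CVS * cvsScore;
  return spamSuspect ? score * SPAM_PENALTY : score;
}
```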
Measures inequality in exposure/engagement distribution:
values_sorted = sort(values)
n = length(values)
cumsum = cumulative_sum(values_sorted)
total = sum(values)
Gini = (n + 1) / n - (2 / (n × total)) × Σ cumsum[i]
Interpretation:
- Gini = 0: Perfect equality
- Gini = 1: Perfect inequality (all to one item)
- Gini < 0.5: Relatively equal
- Gini > 0.7: Highly concentrated
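One discrete form of this computation, equivalent to the Lorenz-curve definition for values sorted ascending (the zero-total convention is an assumption):

```typescript
// Discrete Gini coefficient over non-negative values.
// Returns 0 for empty or all-zero input by convention.
function gini(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b); // ascending
  const n = sorted.length;
  const total = sorted.reduce((a, b) => a + b, 0);
  if (n === 0 || total === 0) return 0;
  let cumsum = 0;
  let sumOfCumsums = 0;
  for (const v of sorted) {
    cumsum += v;
    sumOfCumsums += cumsum;
  }
  // (n + 1)/n - (2 / (n × total)) × Σ cumsum[i]
  return (n + 1 - (2 * sumOfCumsums) / total) / n;
}
```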
Threshold:
sorted_by_popularity = sort_desc(items, by: totalExposures)
threshold_index = floor(n × percentile)
threshold = sorted[threshold_index].totalExposures
Default percentile = 0.2 (top 20% = head).
Long tail exposure rate:
tail_rate = exposures_tail / exposures_total
Long tail click rate:
tail_ctr = clicks_tail / exposures_tail
Fraction of clusters with at least one exposure:
coverage = unique_clusters_exposed / total_clusters
Shannon entropy of cluster exposure distribution:
p_c = exposures_cluster_c / exposures_total
entropy = -Σ p_c × log₂(p_c)
Normalized:
entropy_normalized = entropy / log₂(total_clusters)
Interpretation:
- 0: All from one cluster
- 1: Perfectly uniform across clusters
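A sketch of the normalized entropy metric; it assumes the input array has one entry per cluster, so its length stands in for total_clusters:

```typescript
// Normalized Shannon entropy (log2) of the cluster exposure distribution.
function normalizedClusterEntropy(exposuresByCluster: number[]): number {
  const total = exposuresByCluster.reduce((a, b) => a + b, 0);
  const k = exposuresByCluster.length;
  if (total === 0 || k <= 1) return 0;
  let h = 0;
  for (const e of exposuresByCluster) {
    if (e <= 0) continue;
    const p = e / total;
    h -= p * Math.log2(p);
  }
  return h / Math.log2(k); // normalized to [0, 1]
}
```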
Average unique clusters per user:
diversity = avg_users(unique_clusters_per_user)
Average click position (lower = more top-heavy):
position_bias = Σ (position × clicked) / Σ clicked
Interpretation:
- Position bias ≈ 0-2: Very top-heavy
- Position bias ≈ 5-10: Moderate distribution
- Position bias > 10: Diverse clicking
- finalScore: 9 decimal places
- Public metrics: 6 decimal places
- Intermediate calculations: Full precision
Always add priors or check for zero:
```typescript
// Good: priors keep the denominator non-zero
const ratio = (numerator + prior) / (denominator + prior)

// Bad: can be NaN when both are zero
const ratio = numerator / denominator
```

All scores are clamped to valid ranges:

```typescript
const clamped = Math.max(min, Math.min(max, value))
```

Use Math.fround() for deterministic 32-bit precision when needed.
| Metric | Range | Purpose |
|---|---|---|
| Like Weight | 0.0-1.0 | Spam prevention |
| CR Score | 0.1-10.0 | Curator quality |
| CR Multiplier | 0.5-2.0 | Like amplification |
| CP Multiplier | 0.2-1.0 | Issuance scaling |
| View Weight | 0.2-2.0 | Attention value |
| Support Density | 0.0-1.0+ | Like density |
| Support Rate | 0.0-1.0 | Conversion rate |
| Breadth | 0.0-∞ | Cluster spread |
| Persistence | 0.0-∞ days | Reaction longevity |
| PRS | 0.0-1.0 | Personal relevance |
| CVS | 0.0-1.0 | Cultural value |
| Final Score | 0.0-1.0 | Combined ranking |
| Gini | 0.0-1.0 | Inequality |
| Coverage | 0.0-1.0 | Cluster fraction |
| Entropy (norm) | 0.0-1.0 | Distribution evenness |
- Architecture - System design
- Usage Guide - Code examples
- Parameters - Tuning guide
- Main Spec - Normative spec