Commit 70bc179

Nethermind DB sizes (#539)
1 parent f64cd1d commit 70bc179

1 file changed

Lines changed: 20 additions & 16 deletions

File tree

website/docs/Usage/ResourceUsage.md

@@ -35,7 +35,7 @@ DB Size is shown with values for different types of nodes: Full, and different l
| Client | Version | Date | DB Full | DB Post-Merge | DB Post-Cancun | DB Rolling | DB Aggressive | RAM | Notes |
|--------|---------|------|---------|---------------|----------------|------------|---------------|-----|-------|
| Geth | 1.15.11 | May 2025 | ~1.2 TiB | ~830 GiB | n/a | n/a | n/a | ~ 8 GiB | |
-| Nethermind | 1.31.10 | May 2025 | ~1.1 TiB | ~740 GiB | tbd | tbd | n/a | ~ 7 GiB | With HalfPath, can automatic online prune at ~350 GiB free |
+| Nethermind | 1.36.0 | February 2026 | ~1.1 TiB | ~740 GiB | ~600 GiB | ~240 GiB | n/a | ~ 7 GiB | With HalfPath, can automatically prune online at ~350 GiB free |
| Besu | v25.8.0 | August 2025 | ~1.35 TiB | ~850 GiB | n/a | tbd | ~290 GiB | ~ 10 GiB | |
| Reth | 1.5.0 | July 2025 | ~1.6 TiB | ~950 GiB | tbd | tbd | tbd | ~ 9 GiB | |
| Erigon | 3.0.3 | May 2025 | ~1.0 TiB | ~650 GiB | n/a | tbd | tbd | See comment | Erigon will have the OS use all available RAM as a DB cache during post-sync operation, but this RAM is free to be used by other programs as needed. During sync, it may run out of memory on machines with 32 GiB or less |
@@ -52,7 +52,7 @@ on [Paprika](https://github.com/NethermindEth/nethermind/pull/7157) and

Please pay attention to the Version and Date. Newer versions might sync faster or slower.

-These are initial syncs of a full node without history expiry. For clients that support it, snap sync was used; otherwise, full sync.
+These are initial syncs of a node with the stated amount of history expiry. For clients that support it, snap sync was used; otherwise, full sync.

NB: All execution clients need to [download state](https://github.com/ethereum/go-ethereum/issues/20938#issuecomment-616402016) after getting blocks. If state isn't "in" yet, your sync is not done. This is a heavily disk-latency-dependent operation, which is why an HDD cannot be used for a node.

@@ -62,32 +62,36 @@ This should complete in under 4 hours. If it does not, or even goes on for a wee

Cache sizes were left at their defaults in all tests.

-| Client | Version | Date | Test System | Time Taken | Notes |
-|--------|---------|------|-------------|------------|--------|
-| Geth | 1.15.10 | April 2025 | OVH Baremetal NVMe | ~ 5 hours | |
-| Nethermind | 1.24.0| January 2024 | OVH Baremetal NVMe | ~ 5 hours | Ready to attest after ~ 1 hour |
-| Besu | v25.8.0 | August 2025 | OVH Baremetal NVMe | ~ 13 hours | With history expiry |
-| Erigon | 3.0.3 with expiry PR | May 2025 | OVH Baremetal NVMe | ~ 2 hours | With history expiry |
-| Reth | beta.1 | March 2024 | OVH Baremetal NVMe | ~ 2 days 16 hours | |
-| Nimbus | 0.1.0-alpha | May 2025 | OVH Baremetal NVME | ~ 5 1/2 days | With Era1 import |
-| Ethrex | 4.0.0 | October 2025 | OVH Baremetal NVME | ~ 2 hours | |
+| Client | Version | Date | Node Type | Test System | Time Taken | Notes |
+|--------|---------|------|-----------|-------------|------------|-------|
+| Geth | 1.15.10 | April 2025 | Full | OVH Baremetal NVMe | ~ 5 hours | |
+| Nethermind | 1.24.0 | January 2024 | Full | OVH Baremetal NVMe | ~ 5 hours | Ready to attest after ~ 1 hour |
+| Nethermind | 1.36.0 | February 2026 | Post-Cancun | Netcup RS G11 | ~ 2 hours | Ready to attest after ~ 1 hour |
+| Besu | v25.8.0 | August 2025 | Post-Merge | OVH Baremetal NVMe | ~ 13 hours | |
+| Erigon | 3.0.3 | May 2025 | Post-Merge | OVH Baremetal NVMe | ~ 2 hours | |
+| Reth | beta.1 | March 2024 | Full | OVH Baremetal NVMe | ~ 2 days 16 hours | |
+| Nimbus | 0.1.0-alpha | May 2025 | Full | OVH Baremetal NVMe | ~ 5 1/2 days | With Era1 import |
+| Ethrex | 4.0.0 | October 2025 | Post-Merge | OVH Baremetal NVMe | ~ 2 hours | |

## Test Systems

+Latency is what matters most to Ethereum clients. Measure it with `sudo ioping -D -c 30 /dev/<ssd-device>` during load, ideally while running a client; using `fio` to generate
+synthetic load will also get you a ballpark figure. You'd want to be under 300 us max (microseconds, not milliseconds) for an Ethereum execution client. High latency negatively impacts
+attestation performance and is particularly noticeable during sync committee duties.
+
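
As one concrete way to do that, here is a minimal sketch that runs a short synthetic `fio` load in the background and measures the device with `ioping` while the load is running. The device path `/dev/nvme0n1`, the test-file location, the 10G size, and the 60-second runtime are placeholders to adjust for your own system:

```bash
# Generate ~60 seconds of 4k random read/write load in the background
# (test-file path, size, and runtime are examples only).
sudo fio --name=load --filename=/var/lib/ethereum/fio-load --size=10G \
  --readwrite=randrw --rwmixread=75 --bs=4k --iodepth=64 \
  --ioengine=libaio --direct=1 --time_based --runtime=60 &

# Measure raw device latency with direct I/O while the load is running.
sudo ioping -D -c 30 /dev/nvme0n1

wait
sudo rm /var/lib/ethereum/fio-load
```

`ioping` prints a min/avg/max/mdev summary at the end; the max figure is the one to compare against the 300 us guideline above.
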
IOPS is random read-write IOPS [measured by fio with "typical" DB parameters](https://arstech.net/how-to-measure-disk-performance-iops-with-fio-in-linux/), using a 150G file, without other processes running.

Specifically `fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw --rwmixread=75; rm test`. If the test shows it'd take hours to complete, feel free to cut it short once the IOPS display for the test looks steady.
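
If you'd rather bound the run up front than interrupt it by hand, `fio` also accepts a runtime cap. A sketch, assuming ten minutes is long enough for the IOPS readout to settle:

```bash
# Same job as above, but stop after 10 minutes even if the 150G file isn't finished.
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test \
  --filename=test --bs=4k --iodepth=64 --size=150G --readwrite=randrw \
  --rwmixread=75 --runtime=600
rm test
```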

150G was chosen to "break through" any caching stratagems the SSD uses for bursty writes. Execution clients write steadily, and the performance of an SSD under heavy write is more important than its performance with bursty writes.

-Read and write latencies can be measured with `sudo ioping -D -c 30 /dev/<ssd-device>` during the `fio`.
-
Servers have been configured with [noatime](https://www.howtoforge.com/reducing-disk-io-by-mounting-partitions-with-noatime) and [no swap](https://www.geeksforgeeks.org/how-to-permanently-disable-swap-in-linux/) to improve latency.
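
A minimal sketch of both settings, assuming the client's data disk is mounted at `/var/lib/ethereum`; the mount point and UUID are placeholders, and the linked guides cover the details:

```bash
# /etc/fstab: add noatime to the data partition's mount options, then remount.
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /var/lib/ethereum  ext4  defaults,noatime  0  2
sudo mount -o remount /var/lib/ethereum

# Turn swap off now, and comment out swap entries in fstab so it stays off after reboot.
sudo swapoff -a
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```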


| Name | RAM | SSD Size | CPU | r/w IOPS | r/w latency | Notes |
|------|-----|----------|-----|----------|-------------|-------|
-| [OVH](https://ovhcloud.com/) Baremetal NVMe | 32 GiB | 1.9 TB | Intel Hexa | 177k/59k | | |
+| [OVH](https://ovhcloud.com/) Baremetal NVMe | 32 GiB | 1.9 TB | Intel Hexa | 177k/59k | 150us max | This is in line with any good NVMe drive |
+| [Netcup](https://netcup.eu) RS G11 | 96 GiB | 3 TB | 20 vCPU on an AMD 84-core | | 400us avg / 1.1ms max | This is an example of a system with storage that is fast enough to attest, but too slow to get best rewards |

## Getting better latency

@@ -96,11 +100,11 @@ Ethereum execution layer clients need decently low latency. IOPS can be used as
For cloud providers, here are some results for syncing Geth.
- AWS, gp2 or gp3 with provisioned IOPS have both been tested successfully.
- Linode block storage; make sure to get NVMe-backed storage.
-- Netcup is sufficient as of late 2021.
+- Netcup RS G11 works, but rewards are not optimal.
- There are reports that Digital Ocean block storage is too slow, as of late 2021.
- Strato V-Server is too slow as of late 2021.

-Dedicated servers with SATA or NVMe SSD will always have sufficiently low latency. Do avoid hardware RAID though, see below.
+Dedicated servers with NVMe SSD will always have sufficiently low latency. Do avoid hardware RAID though; see below.
The OVH Advance line is a well-liked dedicated option; Linode or Strato or any other provider will work as well.

For own hardware, we've seen three causes of high latency:
