5 changes: 5 additions & 0 deletions app/seidb.go
@@ -164,6 +164,11 @@ func parseSSConfigs(appOpts servertypes.AppOptions) config.StateStoreConfig {
}
ssConfig.ReadMode = parsedRM
}

// Config-level validation only; DB-state consistency checks live in the composite state store constructor.
if err := ssConfig.Validate(); err != nil {
panic(fmt.Sprintf("invalid state-store config: %s", err))
}
return ssConfig
}

251 changes: 251 additions & 0 deletions docs/migration/giga_store_migration.md
@@ -0,0 +1,251 @@
# Giga SS Store Migration Guide

## Overview
Giga SS Store is the next step in Sei's storage evolution on top of SeiDB. It splits the
hot EVM state into its own dedicated state-store (SS) database so the node can scale
toward the **~150k TPS** throughput target. Migrating to Giga SS Store repartitions the
SS layer into two cooperating stores:

| Layer | Cosmos backend | EVM backend |
|-------|----------------|-------------|
| **SC** (State Commit, app hash) | memiavl | FlatKV |
| **SS** (State Store, historical queries) | single MVCC DB (Pebble/Rocks) | dedicated EVM SS MVCC DB(s) under `data/evm_ss/` |

Only the **SS** layer changes for this migration. SC layer config is unaffected; it stays
on the default `cosmos_only` write/read mode where memiavl remains authoritative for the
app hash.

## Prerequisites
- This migration guide is for **RPC nodes only**. Validator nodes and archive nodes are
not supported by this migration flow yet.
- Migrating to Giga SS Store **requires a full state sync**. There is no in-place migration
path for a running full node and no live "dual-write then split" workflow in this guide.
A state sync wipes the local data directory and imports a fresh snapshot into the new
layout.
- A `seid` binary built from a release that supports Giga SS Store (the same release as
  the rest of your SeiDB build).
- `sc-enable = true` and `ss-enable = true`. Both must be enabled for this migration.

## Giga SS Store Introduction
Giga SS Store solves the following scaling problems for EVM-heavy workloads:

- EVM reads and writes share the same underlying SS DB with all other Cosmos modules,
which creates write amplification and read contention as EVM state grows.
- Historical EVM queries (e.g. `debug_traceBlock*`, `eth_getLogs`) compete for the same
LSM compactions as bank, staking, wasm, etc.
- A single SS DB for every module caps how much parallelism the storage layer can
provide at the target 150k TPS.

### Benefits of migrating to Giga SS Store
- EVM reads served exclusively from a dedicated EVM SS database.
- Non-EVM modules no longer pay write amplification for EVM state.
- Iterators (`Iterator`, `ReverseIterator`, `RawIterate`) continue to work against the
Cosmos SS store, so operational tooling that relies on them is unaffected.
- A backend change (PebbleDB ↔ RocksDB) can be combined with the same state sync, since a
  single `ss-backend` flag drives both the Cosmos SS MVCC DB and the EVM SS sub-DBs.

## Migration Steps

### Step 1: Add Configurations
Apply the following settings in `app.toml`. Usually you can find this file under
`~/.sei/config/app.toml`.

```toml
#############################################################################
### Giga SS Store Configuration ###
#############################################################################

[state-commit]
# State commit stays on its default (cosmos_only) for both write and read modes.
# memiavl remains authoritative for the app hash. Do not flip sc-write-mode or
# sc-read-mode off cosmos_only unless the network has upgraded to lattice hash.
sc-enable = true

[state-store]
# Enable state-store. Giga SS Store lives inside the SS layer.
ss-enable = true

# DBBackend for the Cosmos SS MVCC DB and for every EVM SS sub-DB.
# Supported backends: pebbledb, rocksdb. Default pebbledb.
ss-backend = "pebbledb"

# evm-ss-write-mode controls how EVM writes are routed at the SS layer.
# - cosmos_only (default): all writes to Cosmos SS, EVM SS DB not opened.
# - dual_write: EVM writes to both Cosmos SS and EVM SS.
# - split_write: EVM writes go ONLY to EVM SS, non-EVM ONLY to Cosmos SS.
# For Giga SS Store, set this to split_write.
evm-ss-write-mode = "split_write"

# evm-ss-read-mode controls how EVM reads are routed at the SS layer.
# - cosmos_only (default): all reads from Cosmos SS.
# - evm_first: try EVM SS first, fall back to Cosmos SS on miss.
# - split_read: EVM reads ONLY from EVM SS (no fallback).
# For Giga SS Store, set this to split_read.
evm-ss-read-mode = "split_read"
```
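The routing implied by these two settings can be sketched as a pure function over the mode values. This is an illustrative model of the tables above, not the actual SeiDB implementation; only the mode names are taken from the config.

```go
package main

import "fmt"

// ssReadTargets models which SS databases serve a read under each
// evm-ss-read-mode value. Non-EVM reads always go to Cosmos SS.
func ssReadTargets(readMode string, isEVMKey bool) []string {
	if !isEVMKey {
		return []string{"cosmos_ss"}
	}
	switch readMode {
	case "cosmos_only":
		return []string{"cosmos_ss"}
	case "evm_first":
		return []string{"evm_ss", "cosmos_ss"} // fall back to Cosmos SS on miss
	case "split_read":
		return []string{"evm_ss"} // no fallback
	}
	return nil
}

// ssWriteTargets models where a write lands under each evm-ss-write-mode value.
func ssWriteTargets(writeMode string, isEVMKey bool) []string {
	if !isEVMKey {
		return []string{"cosmos_ss"}
	}
	switch writeMode {
	case "cosmos_only":
		return []string{"cosmos_ss"}
	case "dual_write":
		return []string{"cosmos_ss", "evm_ss"}
	case "split_write":
		return []string{"evm_ss"}
	}
	return nil
}

func main() {
	// The Giga SS Store target configuration: EVM traffic is fully split out.
	fmt.Println(ssReadTargets("split_read", true))   // [evm_ss]
	fmt.Println(ssWriteTargets("split_write", true)) // [evm_ss]
}
```

Note how the two rejected combinations from the safety checks fall out of this model: `split_write` + `cosmos_only` writes to a store that is never read, and `cosmos_only` + `split_read` reads from a store that is never written.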

If you are switching backends in the same step:
- PebbleDB → RocksDB: set `ss-backend = "rocksdb"`, build with `-tags rocksdbBackend`,
and install RocksDB per the [SeiDB Migration Guide](./seidb_migration.md#step-2-tune-configs-based-on-node-role).
- No data migration tool is needed across backends for Giga SS Store — the state sync is
what populates the new layout.

### Step 2: State Sync
Giga SS Store is fully compatible with the existing state snapshot format. On import, the
composite state store routes each snapshot node based on the **importing node's**
`evm-ss-write-mode`:

- With `split_write`, EVM snapshot nodes are written only into EVM SS and non-EVM nodes
are written only into Cosmos SS.
- The import path also normalizes legacy `evm_flatkv` snapshot nodes to `evm`, so
snapshots produced by either the old or new FlatKV module are accepted.

Both stores end up fully populated at the snapshot height, which is why the node can
start directly with `evm-ss-read-mode = "split_read"` without a fallback phase.
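The import-time routing described above can be condensed into a small sketch. The function names are hypothetical and the logic is an illustration of the described behavior, not the composite store's actual code:

```go
package main

import "fmt"

// normalizeModule maps the legacy snapshot module name to the current one,
// mirroring the evm_flatkv -> evm normalization described above.
func normalizeModule(module string) string {
	if module == "evm_flatkv" {
		return "evm"
	}
	return module
}

// importTargets decides which SS database(s) a snapshot node is written
// into, based on the importing node's evm-ss-write-mode.
func importTargets(writeMode, module string) []string {
	isEVM := normalizeModule(module) == "evm"
	switch {
	case !isEVM:
		return []string{"cosmos_ss"}
	case writeMode == "split_write":
		return []string{"evm_ss"}
	case writeMode == "dual_write":
		return []string{"cosmos_ss", "evm_ss"}
	default: // cosmos_only: EVM SS is not opened
		return []string{"cosmos_ss"}
	}
}

func main() {
	// A legacy evm_flatkv node is treated the same as an evm node.
	fmt.Println(importTargets("split_write", "evm_flatkv")) // [evm_ss]
	fmt.Println(importTargets("split_write", "bank"))       // [cosmos_ss]
}
```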

Use the state sync flow documented in the
[SeiDB Migration Guide](./seidb_migration.md#step-3-state-sync). A minimal shape:

```bash
export TRUST_HEIGHT_DELTA=10000
export MONIKER="<moniker>"
export CHAIN_ID="<chain_id>"
export PRIMARY_ENDPOINT="<rpc_endpoint>"
export SEID_HOME="/root/.sei"

# 1. Stop seid
systemctl stop seid

# 2. Back up files you need to preserve and wipe local state
cp $SEID_HOME/data/priv_validator_state.json /root/priv_validator_state.json
cp $SEID_HOME/config/priv_validator_key.json /root/priv_validator_key.json
cp $SEID_HOME/config/genesis.json /root/genesis.json
rm -rf $SEID_HOME/data/*
rm -rf $SEID_HOME/wasm
rm -rf $SEID_HOME/config/priv_validator_key.json
rm -rf $SEID_HOME/config/genesis.json
rm -rf $SEID_HOME/config/config.toml

# 3. Re-init, update config.toml and app.toml (set the Giga SS Store values from Step 1)
seid init --chain-id "$CHAIN_ID" "$MONIKER"

# 4. Resolve trust height/hash and persistent peers against PRIMARY_ENDPOINT,
# then update config.toml (see SeiDB Migration Guide for the full snippet).

# 5. Restore the backed up files
cp /root/priv_validator_state.json $SEID_HOME/data/priv_validator_state.json
cp /root/priv_validator_key.json $SEID_HOME/config/priv_validator_key.json
cp /root/genesis.json $SEID_HOME/config/genesis.json

# 6. Start seid
systemctl restart seid
```

## Verification
To confirm that Giga SS Store is active, check the startup logs. You should see all of
the following:

- `"SeiDB SS is enabled"` with the configured `backend`.
- `"SeiDB EVM StateStore optimization is enabled"` with `writeMode=split_write` and
`readMode=split_read`.
- `"EVM state store enabled"` from the composite store constructor with the `dir`,
`writeMode`, and `readMode` labels.

On a local RPC node, confirm `debug_traceBlockByNumber` succeeds after state sync
completes:

```bash
curl -s -X POST http://127.0.0.1:8545 \
-H 'Content-Type: application/json' \
--data '{"jsonrpc":"2.0","method":"debug_traceBlockByNumber","params":["latest",{}],"id":1}'
```

The response should contain a `"result"` field rather than an RPC error.
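If you want to script this check, parse the response body and fail unless a `result` field is present. A minimal sketch, assuming the endpoint returns standard JSON-RPC 2.0 envelopes (`hasResult` is a helper written for this guide, not part of seid):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hasResult reports whether a JSON-RPC response body carries a "result"
// field rather than an "error" object.
func hasResult(body []byte) (bool, error) {
	var resp struct {
		Result json.RawMessage `json:"result"`
		Error  json.RawMessage `json:"error"`
	}
	if err := json.Unmarshal(body, &resp); err != nil {
		return false, err
	}
	return len(resp.Result) > 0 && len(resp.Error) == 0, nil
}

func main() {
	// In practice, body would be the response from the curl call above.
	body := []byte(`{"jsonrpc":"2.0","id":1,"result":[]}`)
	ok, err := hasResult(body)
	fmt.Println(ok, err) // true <nil>
}
```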

## Safety Checks
Three layers of checks make it hard to silently run the node on a misconfigured or
stale EVM SS DB — in particular, they catch the exact footgun of flipping
`evm-ss-write-mode` / `evm-ss-read-mode` to anything other than `cosmos_only` and
restarting without state syncing.

1. **Config-level check (at process start).** `app/seidb.go` runs
`StateStoreConfig.Validate()` while parsing `app.toml`. It rejects mode combinations
that are self-inconsistent regardless of DB state:
- `evm-ss-write-mode = "split_write"` with `evm-ss-read-mode = "cosmos_only"` — EVM
writes would land only in EVM SS but reads would never consult it.
- `evm-ss-write-mode = "cosmos_only"` with `evm-ss-read-mode = "split_read"` — reads
are forced to EVM SS which never gets any writes.

2. **Directory-existence check (before the EVM SS is opened).** When the EVM SS is
enabled (any mode is non-`cosmos_only`), `NewCompositeStateStore` refuses to proceed
if Cosmos SS already has committed history but the EVM SS directory
(`data/evm_ss/` by default) does not exist yet. This is the earliest possible
defense: the check runs *before* the sub-DBs are opened, so rejecting the config
does not leave a confusing empty `data/evm_ss/` behind for the operator to clean
up, and the node cannot silently begin writing into a brand-new EVM SS DB.

3. **DB-state checks (inside `NewCompositeStateStore`).** Once the EVM SS DB is open,
the constructor also inspects the on-disk version metadata and fails fast in the
following situations:
- **Cosmos SS has history but EVM SS is empty (pre-recovery).** This is the
belt-and-suspenders counterpart to check (2): it catches the same misconfiguration
when the EVM SS directory was left behind by a previous failed attempt but its
DBs are still empty. The changelog only covers the last `KeepRecent` blocks, so
WAL replay cannot rebuild the EVM SS from scratch.
- **Cosmos SS and EVM SS earliest versions do not match (post-recovery).** After the
WAL has had a chance to close any small gap, a mismatch in earliest version
indicates the two DBs were populated from different snapshots (or pruned
independently), which would produce inconsistent reads under `evm_first` /
`split_read`.

If any of these checks fire, the correct remediation is to either (a) perform the
state sync described above, or (b) revert `evm-ss-write-mode` /
`evm-ss-read-mode` back to `cosmos_only`. If the existing `data/evm_ss/` directory
is stale (from a failed attempt), remove it before state syncing.
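Checks (2) and (3) can be condensed into one illustrative function. This is a sketch of the decision logic only; the function name and parameters are invented for this guide (`dirExists` stands in for an `os.Stat` probe of `data/evm_ss/`, and the version arguments stand in for the on-disk metadata the constructor actually reads):

```go
package main

import (
	"errors"
	"fmt"
)

// checkEVMSSConsistency sketches the startup defenses against a stale or
// missing EVM SS DB when the configured modes require one.
func checkEVMSSConsistency(cosmosLatest, cosmosEarliest, evmEarliest int64, dirExists, evmEmpty bool) error {
	// Check (2): refuse before opening sub-DBs if the directory is absent.
	if cosmosLatest > 0 && !dirExists {
		return errors.New("cosmos SS has history but data/evm_ss/ does not exist: state sync required")
	}
	// Check (3a): an empty EVM SS cannot be rebuilt from the changelog,
	// which only covers the last KeepRecent blocks.
	if cosmosLatest > 0 && evmEmpty {
		return errors.New("cosmos SS has history but EVM SS is empty: WAL replay cannot rebuild it")
	}
	// Check (3b): diverging earliest versions mean the two stores came
	// from different snapshots or were pruned independently.
	if !evmEmpty && evmEarliest != cosmosEarliest {
		return fmt.Errorf("earliest versions diverge (cosmos=%d evm=%d): stores populated from different snapshots",
			cosmosEarliest, evmEarliest)
	}
	return nil
}

func main() {
	// A freshly state-synced node: both stores share the snapshot height.
	err := checkEVMSSConsistency(120000, 100000, 100000, true, false)
	fmt.Println(err) // <nil>
}
```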

## Rollback Steps
To roll back to a pre-Giga SS Store layout:
- Set `evm-ss-write-mode = "cosmos_only"` and `evm-ss-read-mode = "cosmos_only"` in
`app.toml`.
- Restart the node. The EVM SS DB under `data/evm_ss/` will not be opened but will
remain on disk until manually removed.

If you want to fully reclaim the EVM SS disk usage, stop the node and delete the
`data/evm_ss/` directory after reverting the modes.

## FAQ

### Where can I find the data files after migrating to Giga SS Store?
- Cosmos SS data lives under the same directory as before (typically `data/pebbledb/`
for the default `pebbledb` backend).
- EVM SS data lives under `data/evm_ss/`.
- SC data (memiavl + FlatKV) is untouched by this migration.

### Does Giga SS Store change the app hash or consensus?
No. The SC layer stays at `cosmos_only` write/read mode, so memiavl remains the
authoritative source for the app hash. Giga SS Store is a per-node SS change that is
invisible to the network.

### Can I migrate a validator node with this guide?
Not yet. This migration guide is for RPC nodes only. Validator-node migration is out of
scope for now.

### Can I migrate an archive node with this guide?
Not yet. Archive-node migration is also out of scope for this guide today.

### Can I toggle back to `cosmos_only` after enabling Giga SS Store?
Yes. Set both modes back to `cosmos_only` and restart; the EVM SS DB simply stops being
opened. Cosmos SS still holds every write that went through it: all non-EVM state, the
EVM state imported at state-sync time, and any EVM writes made under `dual_write`. If
you only ever ran in `split_write`, however, Cosmos SS is missing the EVM writes that
happened after the state sync, so rolling back cleanly requires another state sync.

### Why can't I just flip the modes on a running node?
`split_write` and `split_read` require the EVM SS DB to already contain the full
history that Cosmos SS has. A live flip would leave the EVM SS DB empty (or far
behind) while `split_read` refuses to fall back to Cosmos SS, which would translate
into missing EVM state at query time. The safety checks described above block this
scenario at startup.

### Does Giga SS Store support historical proofs?
No, same as SeiDB. SS stores raw KVs and does not reconstruct IAVL-style proofs.
42 changes: 42 additions & 0 deletions sei-db/config/ss_config.go
@@ -1,5 +1,7 @@
package config

import "fmt"

// DBBackend defines the SS DB backend.
type DBBackend string

@@ -115,3 +117,43 @@ func DefaultStateStoreConfig() StateStoreConfig {
SeparateEVMSubDBs: false,
}
}

// Validate rejects self-inconsistent evm-ss mode combinations. DB-state
// consistency checks live in the composite state store constructor.
func (c StateStoreConfig) Validate() error {
writeMode := c.WriteMode
if writeMode == "" {
writeMode = CosmosOnlyWrite
}
readMode := c.ReadMode
if readMode == "" {
readMode = CosmosOnlyRead
}

if !writeMode.IsValid() {
return fmt.Errorf("invalid evm-ss-write-mode: %s", writeMode)
}
if !readMode.IsValid() {
return fmt.Errorf("invalid evm-ss-read-mode: %s", readMode)
}

// split_write + cosmos_only read: EVM writes land only in EVM SS but
// reads never consult it.
if writeMode == SplitWrite && readMode == CosmosOnlyRead {
return fmt.Errorf(
"invalid evm-ss modes: write=%q routes EVM writes only to EVM SS but read=%q never reads from it; use read=%q or %q",
writeMode, readMode, EVMFirstRead, SplitRead,
)
}

// cosmos_only write + split_read: EVM SS never receives writes but
// split_read has no Cosmos fallback.
if writeMode == CosmosOnlyWrite && readMode == SplitRead {
return fmt.Errorf(
"invalid evm-ss modes: read=%q requires EVM SS but write=%q never populates it; use write=%q or %q, or read=%q or %q",
readMode, writeMode, DualWrite, SplitWrite, CosmosOnlyRead, EVMFirstRead,
)
}

return nil
}
37 changes: 37 additions & 0 deletions sei-db/config/toml_test.go
@@ -286,6 +286,43 @@ func TestStateCommitConfigValidate(t *testing.T) {
}
}

// TestStateStoreConfigValidate covers self-inconsistent evm-ss mode
// combinations. DB-state checks live in the composite store constructor.
func TestStateStoreConfigValidate(t *testing.T) {
tests := []struct {
name string
writeMode WriteMode
readMode ReadMode
hasError bool
}{
{"default cosmos_only", CosmosOnlyWrite, CosmosOnlyRead, false},
{"dual_write + evm_first", DualWrite, EVMFirstRead, false},
{"split_write + split_read (giga target)", SplitWrite, SplitRead, false},
{"dual_write + split_read", DualWrite, SplitRead, false},
{"split_write + evm_first", SplitWrite, EVMFirstRead, false},
{"empty modes treated as cosmos_only", WriteMode(""), ReadMode(""), false},
{"invalid write mode", WriteMode("nope"), CosmosOnlyRead, true},
{"invalid read mode", CosmosOnlyWrite, ReadMode("nope"), true},
{"split_write + cosmos_only is broken", SplitWrite, CosmosOnlyRead, true},
{"cosmos_only + split_read is broken", CosmosOnlyWrite, SplitRead, true},
}

for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
cfg := DefaultStateStoreConfig()
cfg.WriteMode = tc.writeMode
cfg.ReadMode = tc.readMode

err := cfg.Validate()
if tc.hasError {
require.Error(t, err)
} else {
require.NoError(t, err)
}
})
}
}

// TestTemplateFieldPathsExist is a comprehensive test that extracts all template
// field paths and verifies they exist in the config struct. This catches typos
// and renamed fields.