This document describes the environment variables that can be used to configure the ar.io node.
| ENV_NAME | TYPE | DEFAULT_VALUE | DESCRIPTION |
|---|---|---|---|
| START_HEIGHT | Number or "Infinity" | 0 | Starting block height for node synchronization (0 = start from the beginning) |
| STOP_HEIGHT | Number or "Infinity" | "Infinity" | Stop block height for node synchronization (Infinity = keep syncing until stopped) |
| TRUSTED_NODE_URL | String | "https://arweave.net" | Arweave node to use for fetching data |
| TRUSTED_NODE_HOST | String | "arweave.net" | Hostname for the trusted Arweave node used by Envoy proxy |
| TRUSTED_NODE_PORT | Number | 443 | Port for the trusted Arweave node used by Envoy proxy |
| TRUSTED_GATEWAY_URL | String | "https://arweave.net" | Arweave node to use for proxying requests |
| TRUSTED_GATEWAYS_URLS | String | See below | A JSON map of gateway URLs to priority (number) or config object ({"priority": N, "trusted": bool}). Simple number values are implicitly trusted. Default: {"https://turbo-gateway.com": 1, "https://arweave.net": {"priority": 2, "trusted": false}}. When trusted is false, data is only cached if a hash is already known and matches; the hash is never written to the DB as authoritative. |
| TRUSTED_GATEWAYS_REQUEST_TIMEOUT_MS | String | "10000" | Connection timeout in milliseconds for trusted gateways (time to receive response headers) |
| STREAM_STALL_TIMEOUT_MS | String | "30000" | Stall timeout in milliseconds for data streams from gateways and peers. Stream is aborted if no data is received for this duration. Prevents stalled transfers from hanging indefinitely while allowing large, actively-streaming transfers to complete. |
| TRUSTED_GATEWAYS_BLOCKED_ORIGINS | String | "" | Comma-separated list of X-AR-IO-Origin header values to block when forwarding to trusted gateways (prevents loops and blocks unwanted sources) |
| TRUSTED_GATEWAYS_BLOCKED_IPS_AND_CIDRS | String | "" | Comma-separated list of IPs and CIDR ranges to block when forwarding to trusted gateways (prevents forwarding requests from specific client IPs) |
| TRUSTED_ARNS_GATEWAY_URL | String | "https://NAME.turbo-gateway.com" | Trusted gateway used for ArNS resolution; the NAME segment is a placeholder replaced with the ArNS name being resolved |
| TRUSTED_ARNS_RESOLVER_HOST_HEADER | String | - | Host header override for ArNS resolver requests; supports __NAME__ placeholder (e.g., __NAME__.ar-io.dev) |
| TURBO_ENDPOINT | String | "https://turbo.arweave.dev" | Turbo endpoint URL for root transaction ID lookups |
| TURBO_REQUEST_TIMEOUT_MS | Number | 10000 | Request timeout in milliseconds for Turbo requests |
| TURBO_REQUEST_RETRY_COUNT | Number | 3 | Number of retries for Turbo requests |
| BUNDLER_URLS | String | "https://turbo.ardrive.io/" | Comma-separated list of bundler service URLs advertised in /ar-io/info endpoint for client service discovery |
| GATEWAYS_ROOT_TX_URLS | JSON Object | {"https://turbo-gateway.com": 1} | JSON map of AR.IO gateway URLs and priority weights for root transaction offset discovery via HEAD requests. Lower numbers = higher priority |
| GATEWAYS_ROOT_TX_REQUEST_TIMEOUT_MS | Number | 10000 | Request timeout in milliseconds for gateway HEAD requests |
| GATEWAYS_ROOT_TX_RATE_LIMIT_BURST_SIZE | Number | 5 | Rate limit burst size for gateway HEAD requests |
| GATEWAYS_ROOT_TX_RATE_LIMIT_TOKENS_PER_INTERVAL | Number | 6 | Rate limit tokens per interval for gateway HEAD requests (6 per minute = 1 per 10 seconds) |
| GATEWAYS_ROOT_TX_RATE_LIMIT_INTERVAL | String | "minute" | Rate limit interval for gateway HEAD requests (second/minute/hour/day) |
| HYPERBEAM_ENDPOINT | String | "https://arweave.net" | HyperBEAM node URL for offset lookups. Override to use a different HyperBEAM node |
| HYPERBEAM_REQUEST_TIMEOUT_MS | Number | 10000 | Request timeout in milliseconds for HyperBEAM offset lookups |
| HYPERBEAM_ROOT_TX_RATE_LIMIT_BURST_SIZE | Number | 5 | Rate limit burst size for HyperBEAM offset lookups |
| HYPERBEAM_ROOT_TX_RATE_LIMIT_TOKENS_PER_INTERVAL | Number | 6 | Rate limit tokens per interval for HyperBEAM offset lookups (6 per minute = 1 per 10 seconds) |
| HYPERBEAM_ROOT_TX_RATE_LIMIT_INTERVAL | String | "minute" | Rate limit interval for HyperBEAM offset lookups (second/minute/hour/day) |
| ROOT_TX_LOOKUP_ORDER | String | "db,gateways,hyperbeam,cdb,graphql" | Comma-separated list of root TX lookup sources in order of priority. Options: 'db' (local database), 'cdb' (CDB64 file index), 'gateways' (AR.IO gateways), 'hyperbeam' (HyperBEAM offset API), 'turbo' (Turbo API), 'graphql' (GraphQL). CDB64 enabled by default since Release 67 |
| CDB64_ROOT_TX_INDEX_WATCH | Boolean | true | Enable runtime file watching for local CDB64 directories. When true, .cdb files added to or removed from watched directories are automatically loaded/unloaded without restart |
| CDB64_ROOT_TX_INDEX_SOURCES | String | shipped manifest (see note) | Comma-separated list of CDB64 index sources. Supports: local paths, directories, Arweave TX IDs (43-char base64url), bundle data items (txId:offset:size), HTTP URLs, and partitioned manifests. Default ships with ~964M records covering non-AO, non-Redstone data items up to block 1,820,000 |
| CDB64_REMOTE_RETRIEVAL_ORDER | String | "gateways,chunks" | Comma-separated list of data sources for fetching remote CDB64 files. Options: 'gateways' (trusted gateways), 'chunks' (L1 chunk reconstruction), 'tx-data' (Arweave node /tx/:id/data) |
| CDB64_REMOTE_CACHE_MAX_REGIONS | Number | 100 | Maximum number of byte-range regions to cache per remote CDB64 source |
| CDB64_REMOTE_CACHE_TTL_MS | Number | 300000 | TTL in milliseconds for cached CDB64 byte-range regions (5 minutes) |
| CDB64_REMOTE_REQUEST_TIMEOUT_MS | Number | 30000 | Request timeout in milliseconds for remote CDB64 source requests (30 seconds) |
| CDB64_REMOTE_MAX_CONCURRENT_REQUESTS | Number | 4 | Maximum concurrent HTTP requests across all remote CDB64 sources. Limits request pile-up when reading CDB files from HTTP/S3 endpoints |
| CDB64_REMOTE_SEMAPHORE_TIMEOUT_MS | Number | 5000 | Maximum time in milliseconds to wait for a request slot before failing. Prevents indefinite blocking when concurrent request limit is reached |
| ROOT_TX_CACHE_MAX_SIZE | Number | 100000 | Maximum size of the root transaction ID cache |
| ROOT_TX_CACHE_TTL_MS | Number | 300000 | TTL in milliseconds for root transaction ID cache entries (5 minutes) |
| ROOT_TX_INDEX_CIRCUIT_BREAKER_TIMEOUT_MS | Number | 30000 | Circuit breaker timeout in milliseconds for root transaction index requests |
| ROOT_TX_INDEX_CIRCUIT_BREAKER_FAILURE_THRESHOLD | Number | 50 | Circuit breaker failure threshold percentage for root transaction index requests |
| ROOT_TX_INDEX_CIRCUIT_BREAKER_SUCCESS_THRESHOLD | Number | 2 | Number of successful requests needed to close circuit breaker for root transaction index |
| WEIGHTED_PEERS_TEMPERATURE_DELTA | Number | 0.1 | Any positive number; best kept at 1 or below. Controls how sharply a peer's selection probability increases after a success or decreases after a failure. |
| PEER_MAX_CONCURRENT_OUTBOUND | Number | 10 | Maximum concurrent outbound contiguous data requests per AR.IO peer. Saturated peers are skipped rather than queued. Does not apply to trusted gateway forwarding or chunk retrieval. |
| PEER_CANDIDATE_COUNT | Number | 5 | Number of candidate peers selected for each contiguous data retrieval attempt from AR.IO peers. |
| PEER_HEDGE_DELAY_MS | Number | 500 | Milliseconds to wait before firing a hedged request to the next candidate peer. Set to 0 for sequential (advance on failure only) behavior. |
| PEER_MAX_HEDGED_REQUESTS | Number | 3 | Maximum number of concurrent hedged requests per getData() call to AR.IO peers. |
| PEER_HASH_RING_VIRTUAL_NODES | Number | 150 | Number of virtual nodes per peer on the consistent hash ring, used for cache-locality peer selection. |
| PEER_HASH_RING_HOME_SET_SIZE | Number | 3 | Number of "home" peers returned by the hash ring for a given data ID. Home peers are prioritized before weighted fallback selection. |
| INSTANCE_ID | String | "" | Adds an "INSTANCE_ID" field to output logs |
| LOG_FORMAT | String | "simple" | Sets the format of output logs, accepts "simple" and "json" |
| SKIP_CACHE | Boolean | false | If true, skips the local cache and always fetches headers from the node |
| SKIP_DATA_CACHE | Boolean | false | If true, skips the data cache (read-through data cache) and always fetches data from upstream sources |
| NEGATIVE_CACHE_ENABLED | Boolean | false | If true, enables the negative data cache that short-circuits 404 responses for repeatedly not-found data IDs |
| NEGATIVE_CACHE_MAX_SIZE | Number | 100000 | Maximum number of entries in the negative cache and miss tracker LRU structures |
| NEGATIVE_CACHE_TTL_MS | Number | 7200000 | Base TTL in milliseconds for negative cache entries (2 hours). Doubles with each re-promotion (exponential backoff) up to NEGATIVE_CACHE_MAX_TTL_MS |
| NEGATIVE_CACHE_MISS_THRESHOLD_MS | Number | 300000 | Duration in milliseconds over which misses must occur before first promotion to negative cache (5 minutes). Accepts 0 to disable duration requirement |
| NEGATIVE_CACHE_MISS_COUNT_THRESHOLD | Number | 10 | Number of misses required before an ID can be promoted to the negative cache. After first promotion, a single miss triggers re-promotion |
| NEGATIVE_CACHE_MISS_TRACKER_TTL_MS | Number | 3600000 | TTL in milliseconds for miss tracker entries (1 hour). Controls how long partial miss counts are retained before expiring |
| NEGATIVE_CACHE_MAX_TTL_MS | Number | 172800000 | Maximum TTL in milliseconds for negative cache entries (48 hours). Caps the exponential backoff growth |
| NEGATIVE_CACHE_PROMOTION_HISTORY_TTL_MS | Number | 604800000 | TTL in milliseconds for promotion history entries (7 days). Controls how long the cache remembers prior promotions for fast re-promotion and backoff |
| NEGATIVE_CACHE_HEALTH_WINDOW_MS | Number | 60000 | Sliding window in milliseconds for tracking success/failure rates (1 minute). Counters reset after each window |
| NEGATIVE_CACHE_UNHEALTHY_THRESHOLD | Number | 0.8 | Failure rate (0-1) above which the system is considered unhealthy and promotions are suppressed. 0.8 means >80% failures suppresses promotions |
| NEGATIVE_CACHE_HEALTH_MIN_SAMPLE_SIZE | Number | 10 | Minimum number of success+failure samples in the health window before the unhealthy threshold is evaluated. Prevents suppression on insufficient data |
| UNTRUSTED_CACHE_RETRY_RATE | Number | 0.1 | Probability (0-1) of stochastically re-verifying cached data from untrusted sources on cache hit |
| TRUSTED_CACHE_RETRY_RATE | Number | 0.0 | Probability (0-1) of stochastically re-verifying cached data from trusted sources on cache hit |
| BACKGROUND_CACHE_RANGE_MAX_SIZE | Number | 0 | Maximum item size (bytes) eligible for background full-item cache after a range cache miss. 0 disables background caching |
| BACKGROUND_CACHE_RANGE_CONCURRENCY | Number | 1 | Maximum concurrent background full-item cache fetches triggered by range cache misses |
| PORT | Number | 4000 | Port number the ar.io node listens on |
| SIMULATED_REQUEST_FAILURE_RATE | Number | 0 | Number from 0 to 1, representing the probability of a request failing |
| AR_IO_WALLET | String | "" | Arweave wallet address used for staking and rewards |
| ADMIN_API_KEY | String | Generated | API key used for admin API requests (if not set, it's generated and logged into the console) |
| ADMIN_API_KEY_FILE | String | Generated | Alternative way to set the admin API key via a file path; takes precedence over ADMIN_API_KEY if both are defined |
| BACKFILL_BUNDLE_RECORDS | Boolean | false | If true, ar.io node will start indexing missing bundles |
| FILTER_CHANGE_REPROCESS | Boolean | false | If true, all indexed bundles will be reprocessed with the new filters (you can use this when you change the filters) |
| ON_DEMAND_RETRIEVAL_ORDER | String | trusted-gateways,ar-io-network,chunks-offset-aware,tx-data | Data source retrieval order for on-demand data requests. Note: 'chunks-data-item' is deprecated, use 'chunks-offset-aware' instead |
| SKIP_FORWARDING_HEADERS | String | ao-peer-port | Comma-separated list of HTTP headers that indicate a compute-origin request (e.g., from HyperBEAM). When present, remote forwarding to AR.IO peers and trusted gateways is skipped to prevent request loops. Local sources (cache, S3, DB) are still served. |
| SKIP_FORWARDING_USER_AGENTS | String | (empty) | Comma-separated list of User-Agent substrings. Requests whose User-Agent contains any of these substrings (case-insensitive) skip remote forwarding. |
| SKIP_FORWARDING_EMPTY_USER_AGENT | Boolean | true | When true, requests with missing or empty User-Agent headers skip remote forwarding. Catches HTTP clients like Erlang's gun (used by HyperBEAM) that don't send a User-Agent. |
| BACKGROUND_RETRIEVAL_ORDER | String | chunks | Data source retrieval order for background data requests (i.e., unbundling) |
| ENABLE_SAMPLING_DATA_SOURCE | Boolean | false | Enable probabilistic sampling of requests through an experimental data source for A/B testing |
| SAMPLING_DATA_SOURCE | String | undefined | Data source name to sample (e.g., chunks-offset-aware, trusted-gateways). Required when ENABLE_SAMPLING_DATA_SOURCE is true |
| SAMPLING_RATE | Number | 0.1 | Fraction of requests to sample (0.0 to 1.0). 0.1 = 10% of requests |
| SAMPLING_STRATEGY | String | random | Sampling strategy: random (uniform distribution) or deterministic (hash-based, same ID always gets same sampling decision) |
| ANS104_UNBUNDLE_FILTER | String | {"never": true} | Only bundles compliant with this filter will be unbundled |
| ANS104_INDEX_FILTER | String | {"never": true} | Only bundles compliant with this filter will be indexed |
| ANS104_DOWNLOAD_WORKERS | String | 5 | Sets the number of ANS-104 bundles to attempt to download in parallel |
| ANS104_UNBUNDLE_WORKERS | Number | 0, or 1 if filters are set | Sets the number of workers used to handle unbundling |
| DATA_ITEM_FLUSH_COUNT_THRESHOLD | Number | 1000 | Sets the number of new data items indexed before flushing to stable data items |
| MAX_FLUSH_INTERVAL_SECONDS | Number | 600 | Sets the maximum time interval in seconds before flushing to stable data items |
| WRITE_ANS104_DATA_ITEM_DB_SIGNATURES | Boolean | false | If true, data item signatures will be written to the database. |
| WRITE_TRANSACTION_DB_SIGNATURES | Boolean | true | If true, transaction signatures will be written to the database. |
| ENABLE_DATA_DB_WAL_CLEANUP | Boolean | false | If true, the data database WAL cleanup worker will be enabled |
| ENABLE_BACKGROUND_DATA_VERIFICATION | Boolean | false | If true, unverified data will be verified in background |
| ENABLE_DATA_ITEM_ROOT_TX_SEARCH | Boolean | true | If true, enables searching external APIs (GraphQL/Turbo) to find root transaction when local attributes are incomplete for offset-aware data sources |
| ENABLE_PASSTHROUGH_WITHOUT_OFFSETS | Boolean | true | If true, allows data retrieval without offset information in offset-aware data sources (falls back to less efficient methods) |
| MAX_DATA_ITEM_QUEUE_SIZE | Number | 100000 | Sets the maximum number of data items to queue for indexing before skipping indexing new data items |
| ARNS_ROOT_HOST | String | undefined | Domain name(s) for ArNS host. Supports comma-separated values for multiple hosts (e.g., arweave.dev,g8way.io). The first host is the "primary" used for gateway identity. ArNS subdomains, apex content, and sandbox redirects work on all configured hosts. |
| SANDBOX_PROTOCOL | String | undefined | Protocol (http/https) for ArNS sandbox redirects and x402 payment resource URLs. Set to 'https' when behind a reverse proxy/CDN. Used when ARNS_ROOT_HOST is set. |
| START_WRITERS | Boolean | true | If true, start indexing blocks, transactions, and ANS-104 bundles |
| RUN_OBSERVER | Boolean | true | If true, runs the Observer alongside the gateway to generate Network compliance reports |
| MIN_RELEASE_NUMBER | String | 0 | Sets the minimum gateway release version required when performing gateway version assessments |
| AR_IO_NODE_RELEASE | String | 0 | Sets the ar.io node release version sent in the X-AR-IO-Node-Release header on requests to the reference gateway |
| OBSERVER_WALLET | String | "" | The public wallet address of the wallet being used to sign report upload transactions and contract interactions for Observer |
| CHUNKS_DATA_PATH | String | "./data/chunks" | Sets the location for chunked data to be saved. If omitted, chunked data will be stored in the data directory |
| CONTIGUOUS_DATA_PATH | String | "./data/contiguous" | Sets the location for contiguous data to be saved. If omitted, contiguous data will be stored in the data directory |
| HEADERS_DATA_PATH | String | "./data/headers" | Sets the location for header data to be saved. If omitted, header data will be stored in the data directory |
| SQLITE_DATA_PATH | String | "./data/sqlite" | Sets the location for sqlite indexed data to be saved. If omitted, sqlite data will be stored in the data directory |
| DUCKDB_DATA_PATH | String | "./data/duckdb" | Sets the location for duckdb data to be saved. If omitted, duckdb data will be stored in the data directory |
| TEMP_DATA_PATH | String | "./data/tmp" | Sets the location for temporary data to be saved. If omitted, temporary data will be stored in the data directory |
| OBSERVER_STATE_PATH | String | "./data/observer" | Sets the location for Observer state data to be saved. Used to persist continuous observation state across restarts |
| LMDB_DATA_PATH | String | "./data/LMDB" | Sets the location for LMDB data to be saved. If omitted, LMDB data will be stored in the data directory |
| SECRETS_PATH | String | "./secrets" | Sets the location for sensitive configuration files (e.g., CDP API keys). Mounted read-only in container for security |
| CHAIN_CACHE_TYPE | String | "redis" | Sets the method for caching chain data; defaults to Redis if the gateway is started with docker-compose, otherwise defaults to LMDB |
| REDIS_CACHE_URL | String (URL) | "redis://localhost:6379" | URL of Redis database to be used for caching |
| REDIS_CACHE_TTL_SECONDS | Number | 28800 | TTL value for Redis cache, defaults to 8 hours (28800 seconds) |
| ENABLE_FS_HEADER_CACHE_CLEANUP | Boolean | true if starting with docker, otherwise false | If true, periodically deletes cached header data |
| ENABLE_CHUNK_SYMLINK_CLEANUP | Boolean | true | If true, periodically removes dead symlinks from chunk cache directories (symlinks pointing to expired cached data) |
| CHUNK_SYMLINK_CLEANUP_INTERVAL | Number | 86400 | Interval in seconds between dead symlink cleanup runs (default: 24 hours) |
| NODE_JS_MAX_OLD_SPACE_SIZE | Number | 2048 or 8192, depending on number of workers | Sets the memory limit, in megabytes, for Node.js. Defaults to 2048 when using fewer than 2 unbundle workers, otherwise 8192 |
| SUBMIT_CONTRACT_INTERACTIONS | Boolean | true | If true, Observer will submit its generated reports to the ar.io Network |
| REDIS_MAX_MEMORY | String | 256mb | Sets the max memory allocated to Redis |
| REDIS_DATA_PATH | String | "./data/redis" | Sets the location for Redis data persistence files (dump.rdb, appendonly.aof). Only used if persistence is enabled via EXTRA_REDIS_FLAGS |
| EXTRA_REDIS_FLAGS | String | --save "" --appendonly no | Additional CLI flags passed to Redis server. Default disables persistence for performance. Set to "--save 300 10 --appendonly yes --appendfsync everysec" to enable hybrid persistence (recommended for x402 paid tokens) |
| ARWEAVE_PEER_CHUNK_GET_MAX_PEER_ATTEMPT_COUNT | Number | 5 | Maximum number of Arweave peers to try sequentially when fetching a chunk via GET before giving up |
| ARWEAVE_PEER_CHUNK_GET_PEER_SELECTION_COUNT | Number | 10 | Number of candidate peers to select from each pool (bucket and general) for chunk GET requests |
| ARWEAVE_CHUNK_GET_GEOMETRY_TIMEOUT_MS | Number | 5000 | Per-request timeout (ms) for TX geometry resolution (getTxOffset/getTxField) used during chunk retrieval |
| ARWEAVE_CHUNK_GET_GEOMETRY_RETRY_COUNT | Number | 2 | Number of retries for TX geometry resolution requests during chunk retrieval |
| LEGACY_AWS_S3_CHUNK_DATA_BUCKET | String | undefined | S3 bucket name for legacy chunk data source. Required when 'legacy-s3' is in CHUNK_DATA_RETRIEVAL_ORDER |
| LEGACY_AWS_S3_CHUNK_DATA_PREFIX | String | undefined | Optional key prefix for chunk data in the legacy S3 bucket. If omitted, objects are expected at the bucket root as /{dataRoot}/{relativeOffset} |
| LEGACY_AWS_S3_ACCESS_KEY_ID | String | undefined | AWS access key for legacy S3 chunk bucket (optional - falls back to AWS_ACCESS_KEY_ID if not set) |
| LEGACY_AWS_S3_SECRET_ACCESS_KEY | String | undefined | AWS secret key for legacy S3 chunk bucket (optional - falls back to AWS_SECRET_ACCESS_KEY if not set) |
| LEGACY_AWS_S3_REGION | String | undefined | AWS region for legacy S3 chunk bucket. Required if using separate credentials (LEGACY_AWS_S3_ACCESS_KEY_ID) |
| LEGACY_AWS_S3_ENDPOINT | String | undefined | Custom endpoint for legacy S3 chunk bucket (optional - for S3-compatible services) |
| LEGACY_PSQL_CONNECTION_STRING | String | undefined | PostgreSQL connection URL for legacy chunk metadata source (format: postgresql://user:pass@host:port/database). Required when 'legacy-psql' is in CHUNK_METADATA_RETRIEVAL_ORDER |
| LEGACY_PSQL_PASSWORD_FILE | String | undefined | File path containing PostgreSQL password (alternative to including password in connection string for better security) |
| LEGACY_PSQL_SSL_REJECT_UNAUTHORIZED | Boolean | true | If false, allows connections to PostgreSQL servers with self-signed certificates (common workaround for cloud providers) |
| LEGACY_PSQL_MAX_CONNECTIONS | Number | 10 | Maximum number of connections in the PostgreSQL connection pool |
| LEGACY_PSQL_IDLE_TIMEOUT_SECONDS | Number | 30 | Time in seconds before idle connections are closed in the pool |
| LEGACY_PSQL_CONNECT_TIMEOUT_SECONDS | Number | 10 | Maximum time in seconds to wait when establishing a new connection |
| LEGACY_PSQL_MAX_LIFETIME_SECONDS | Number | 1800 | Maximum lifetime in seconds for a connection before it's rotated (30 minutes default) |
| LEGACY_PSQL_STATEMENT_TIMEOUT_MS | Number | 5000 | Server-side query timeout in milliseconds. Prevents queries from running forever. Critical for preventing system hangs |
| LEGACY_PSQL_IDLE_IN_TRANSACTION_TIMEOUT_MS | Number | 10000 | Server-side timeout in milliseconds for idle transactions. Cleans up stuck transactions that hold locks |
| WEBHOOK_TARGET_SERVERS | String | undefined | Target servers for webhooks |
| WEBHOOK_INDEX_FILTER | String | {"never": true} | Only emit webhooks for transactions and data items compliant with this filter |
| WEBHOOK_BLOCK_FILTER | String | {"never": true} | Only emit webhooks for blocks compliant with this filter |
| CONTIGUOUS_DATA_CACHE_CLEANUP_THRESHOLD | Number | undefined | Sets the age threshold in seconds; files older than this are candidates for contiguous data cache cleanup |
| ENABLE_MEMPOOL_WATCHER | Boolean | false | If true, the node will start indexing pending transactions from the mempool |
| MEMPOOL_POLLING_INTERVAL_MS | Number | 30000 | Sets the mempool polling interval in milliseconds |
| TAG_SELECTIVITY | String | Refer to config.ts | A JSON map of tag names to selectivity weights used to order SQLite tag joins |
| MAX_EXPECTED_DATA_ITEM_INDEXING_INTERVAL_SECONDS | Number | undefined | Sets the max expected data item indexing interval in seconds |
| AR_IO_SQLITE_BACKUP_S3_BUCKET_NAME | String | "" | S3-compatible bucket name, used by the Litestream backup service |
| AR_IO_SQLITE_BACKUP_S3_BUCKET_REGION | String | "" | S3-compatible bucket region, used by the Litestream backup service |
| AR_IO_SQLITE_BACKUP_S3_BUCKET_ACCESS_KEY | String | "" | S3-compatible bucket access_key credential, used by Litestream backup service, omit if using resource-based IAM role |
| AR_IO_SQLITE_BACKUP_S3_BUCKET_SECRET_KEY | String | "" | S3-compatible bucket access_secret_key credential, used by Litestream backup service, omit if using resource-based IAM role |
| AR_IO_SQLITE_BACKUP_S3_BUCKET_PREFIX | String | "" | Prefix prepended to object keys in the S3 bucket where SQLite backups are stored. |
| ARNS_MAX_CONCURRENT_RESOLUTIONS | Number | Number of ArNS resolvers | Maximum number of concurrent ArNS name resolutions allowed. |
| AWS_ACCESS_KEY_ID | String | undefined | AWS access key ID for accessing AWS services |
| AWS_SECRET_ACCESS_KEY | String | undefined | AWS secret access key for accessing AWS services |
| AWS_REGION | String | undefined | AWS region where the resources are located |
| AWS_ENDPOINT | String | undefined | Custom endpoint for AWS services |
| AWS_S3_CONTIGUOUS_DATA_BUCKET | String | undefined | AWS S3 bucket name used for storing data |
| AWS_S3_CONTIGUOUS_DATA_PREFIX | String | undefined | Prefix for the S3 bucket to organize data |
| CHUNK_POST_MIN_SUCCESS_COUNT | String | "3" | Minimum count of 200 responses for a given chunk to be considered properly seeded |
| CHUNK_POST_MIN_PREFERRED_SUCCESS_COUNT | String | "2" | Minimum count of 200 responses from preferred (tip) nodes for a chunk to be considered properly seeded. Set to 0 to disable preferred node requirement |
| CHUNK_POST_MAX_CONSECUTIVE_FAILURES | String | "5" | Maximum consecutive 4xx responses before stopping chunk broadcast. Only applies when no peers have accepted the chunk. Set to 0 to disable early termination |
| ARWEAVE_POST_DRY_RUN | Boolean | false | If true, simulates transaction header and chunk submission without posting to Arweave. POST /tx and POST /chunk return 200 OK as if successful; only the final network broadcast is skipped. Works on both port 3000 (Envoy) and port 4000 (direct). By default, transaction signatures and chunk merkle proofs are still validated before success. When disabled, Envoy routes these requests to trusted Arweave nodes instead; GET /tx is always proxied to the trusted node regardless of this setting. |
| ARWEAVE_POST_DRY_RUN_SKIP_VALIDATION | Boolean | false | If true (and ARWEAVE_POST_DRY_RUN is enabled), skips transaction signature verification and chunk merkle proof validation for faster testing. |
| ARWEAVE_PEER_DNS_RECORDS | String | "peers.arweave.xyz" | Comma-separated DNS hostnames to resolve for Arweave peer discovery. Set to empty string to disable and fall back to static TRUSTED_NODE_HOST/FALLBACK_NODE_HOST |
| ARWEAVE_PEER_DNS_PORT | Number | 1984 | Port to use when connecting to discovered Arweave peers |
| ARWEAVE_NODE_MAX_HEIGHT_LAG | Number | 5 | Maximum number of blocks a peer can be behind the reference height before being excluded |
| ARWEAVE_NODE_MAX_HEIGHT_LEAD | Number | 5 | Maximum number of blocks a peer can be ahead of the reference height before being excluded |
| ARWEAVE_HEIGHT_MIN_CONSENSUS_COUNT | Number | 2 | Minimum number of peers that must agree on a height (within MAX_HEIGHT_LAG) for consensus |
| ARWEAVE_NODE_FULL_SYNC_THRESHOLD | Number | 100 | Maximum gap between blocks and height+1 for a peer to be classified as a full node |
| ARWEAVE_PEER_HEALTH_CHECK_INTERVAL_MS | Number | 30000 | Interval in milliseconds between peer health check cycles |
| ENABLE_ARWEAVE_PEER_EDS | Boolean | true | Enable Envoy EDS-based dynamic peer routing (Docker/Envoy only). Set to false to use static trusted node clusters |
| CHUNK_GET_BASE64_SIZE_BYTES | Number | 368640 | Assumed size in bytes for base64-encoded chunk responses, used for x402 payment and rate limiting calculations (default: 360 KiB) |
| CHUNK_REQUEST_CONCURRENCY | Number | 50 | Maximum number of concurrent chunk fetch requests across all HTTP requests. Limits load on chunk backends under high concurrency |
| CHUNK_FIRST_DATA_TIMEOUT_MS | Number | 10000 | Timeout in milliseconds for receiving the first chunk when assembling data from chunks. If exceeded, the request fails and falls through to alternative data sources. 0 disables |
| CHUNK_REBROADCAST_SOURCES | String | "" | Comma-separated list of sources that trigger chunk rebroadcasting. Valid: legacy-s3, ar-io-network, arweave-network. Empty disables rebroadcasting |
| CHUNK_REBROADCAST_RATE_LIMIT_TOKENS | Number | 60 | Rate limit tokens per interval for chunk rebroadcasting |
| CHUNK_REBROADCAST_RATE_LIMIT_INTERVAL | String | "minute" | Rate limit interval for chunk rebroadcasting: second, minute, hour, day |
| CHUNK_REBROADCAST_MAX_CONCURRENT | Number | 5 | Maximum concurrent chunk rebroadcast operations |
| CHUNK_REBROADCAST_DEDUP_TTL_SECONDS | Number | 3600 | Deduplication cache TTL in seconds (don't rebroadcast same chunk within this period) |
| CHUNK_REBROADCAST_MIN_SUCCESS_COUNT | Number | 1 | Minimum broadcast success count for chunk to be added to dedup cache |
| BUNDLE_REPAIR_RETRY_INTERVAL_SECONDS | String | "300" | Interval in seconds for retrying bundles |
| BUNDLE_REPAIR_RETRY_BATCH_SIZE | String | "1000" | Batch size for retrying bundles |
| APEX_TX_ID | String | undefined | If set, serves this transaction ID's data at the root path (/) |
| APEX_ARNS_NAME | String | undefined | If set, resolves and serves this ArNS name's content at the root path (/). Supports comma-separated values positionally mapped to ARNS_ROOT_HOST entries (e.g., turbo,ar-io for two hosts). A single value applies to all hosts. |
| ENABLE_RATE_LIMITER | Boolean | false | If true, enables rate limiting enforcement (returns 429 when limits exceeded). When false, limits are tracked but not enforced |
| RATE_LIMITER_TYPE | String | "redis" (docker), "memory" (standalone) | Sets the rate limiter implementation type. Use "memory" for local development/single-node, "redis" for production multi-node deployments |
| RATE_LIMITER_REDIS_ENDPOINT | String | "redis://redis:6379" | Redis endpoint URL for rate limiter (only used when RATE_LIMITER_TYPE is "redis") |
| RATE_LIMITER_REDIS_USE_TLS | Boolean | false | Whether to use TLS when connecting to Redis for rate limiting |
| RATE_LIMITER_REDIS_USE_CLUSTER | Boolean | false | Whether to use Redis cluster mode for rate limiting |
| RATE_LIMITER_RESOURCE_TOKENS_PER_BUCKET | Number | 1000000 | Maximum tokens in the resource bucket (1 token = 1 KiB, where 1 KiB = 1,024 bytes). Default allows ~1 GiB per resource |
| RATE_LIMITER_RESOURCE_REFILL_PER_SEC | Number | 100 | Tokens to refill per second for resource bucket. Default allows ~100 KiB/s sustained throughput per resource |
| RATE_LIMITER_IP_TOKENS_PER_BUCKET | Number | 100000 | Maximum tokens in the IP bucket (1 token = 1 KiB, where 1 KiB = 1,024 bytes). Default allows ~100 MiB per IP |
| RATE_LIMITER_IP_REFILL_PER_SEC | Number | 20 | Tokens to refill per second for IP bucket. Default allows ~20 KiB/s sustained throughput per IP |
| RATE_LIMITER_IPS_AND_CIDRS_ALLOWLIST | String | "" | Comma-separated list of IPs and CIDR ranges to exempt from rate limiting (e.g., "192.168.1.0/24,10.0.0.1") |
| RATE_LIMITER_ARNS_ALLOWLIST | String | "" | Comma-separated list of ArNS names to exempt from rate limiting and payment verification (e.g., "my-free-app,public-docs") |
| CACHE_PRIVATE_SIZE_THRESHOLD | Number | 104857600 (100 MB) | Response size threshold in bytes above which Cache-Control uses 'private' directive. Helps CDNs respect rate limiting and x402 payment requirements for large responses |
| CACHE_PRIVATE_CONTENT_TYPES | String | "" | Comma-separated list of content types that should use 'private' Cache-Control directive. Supports wildcards (e.g., "image/,video/"). Helps CDNs respect rate limiting and x402 payments for specific content types |
| CACHE_DEFAULT_MAX_AGE | Number | 30 | Default Cache-Control max-age (seconds) applied by middleware when no handler sets its own header |
| CACHE_STABLE_MAX_AGE | Number | 2592000 (30 days) | Cache-Control max-age (seconds) for stable (deeply confirmed) data |
| CACHE_UNSTABLE_TRUSTED_MAX_AGE | Number | 43200 (12 hours) | Cache-Control max-age (seconds) for unstable data from a trusted source |
| CACHE_UNSTABLE_MAX_AGE | Number | 7200 (2 hours) | Cache-Control max-age (seconds) for unstable data from an untrusted source |
| CACHE_NOT_FOUND_MAX_AGE | Number | 60 (1 minute) | Cache-Control max-age (seconds) for not-found responses |
| ENABLE_X_402_USDC_DATA_EGRESS | Boolean | false | If true, enables x402 USDC payment verification and settlement for data egress |
| X_402_USDC_NETWORK | String | "base-sepolia" | USDC network to use ("base" for mainnet, "base-sepolia" for testnet) |
| X_402_USDC_WALLET_ADDRESS | String | undefined | Ethereum wallet address (0x...) to receive USDC payments |
| X_402_USDC_FACILITATOR_URL | String | "https://x402.org/facilitator" | x402 facilitator endpoint URL. Default is Coinbase's testnet facilitator. Note: When CDP API keys are provided (CDP_API_KEY_ID and CDP_API_KEY_SECRET), the Coinbase facilitator is automatically used, overriding this setting |
| X_402_USDC_DATA_EGRESS_MIN_PRICE | Number | 0.001 | Minimum price in USDC for data egress (used when content length is unknown) |
| X_402_USDC_DATA_EGRESS_MAX_PRICE | Number | 1.00 | Maximum price in USDC for data egress (caps per-request cost) |
| X_402_USDC_PER_BYTE_PRICE | Number | 0.0000000001 | Price in USDC per byte of data egress (default: $0.10 per GB) |
| X_402_RATE_LIMIT_CAPACITY_MULTIPLIER | Number | 10 | Capacity multiplier for paid tier rate limits (e.g., 10x = paid users get 10x bucket capacity) |
| X_402_USDC_SETTLE_TIMEOUT_MS | Number | 5000 | Timeout in milliseconds for payment settlement operations |
| X_402_CDP_CLIENT_KEY | String | undefined | Coinbase Developer Platform client API key (public, safe to expose in browser paywall) |
| X_402_APP_NAME | String | "AR.IO Gateway" | Application name displayed in payment UI |
| X_402_APP_LOGO | String | undefined | URL to application logo displayed in payment UI |
| X_402_SESSION_TOKEN_ENDPOINT | String | undefined | Custom session token endpoint URL for payment authentication |
| CDP_API_KEY_ID | String | undefined | SENSITIVE SECRET: Coinbase Developer Platform secret API key ID (for Onramp session token generation and Coinbase facilitator integration). When provided with CDP_API_KEY_SECRET, automatically enables Coinbase facilitator. Must not be logged or exposed. Store in secure secrets manager and restrict access (least privilege). |
| CDP_API_KEY_SECRET | String | undefined | SENSITIVE SECRET: Coinbase Developer Platform secret API key secret (for Onramp session token generation and Coinbase facilitator integration). When provided with CDP_API_KEY_ID, automatically enables Coinbase facilitator. Must not be logged or exposed. Store in secure secrets manager and restrict access (least privilege). |
| CDP_API_KEY_SECRET_FILE | String | undefined | SENSITIVE SECRET: File path containing CDP secret API key secret. Takes precedence over CDP_API_KEY_SECRET if defined. Must not be logged or exposed. |
| OTEL_SERVICE_NAME | String | "ar-io-node" | Service name for OpenTelemetry traces. Used to identify this service in telemetry backends |
| OTEL_TRACING_SAMPLING_RATE_DENOMINATOR | Number | 1 | Head-based sampling rate denominator (1/N spans sampled). Note: In docker-compose, tail sampling via OTEL Collector is used instead for intelligent sampling |
| OTEL_EXPORTER_OTLP_ENDPOINT | String | "http://otel-collector:4318" (docker) | OTLP endpoint for traces/logs. In docker-compose, defaults to collector for tail sampling. For non-docker deployments, set to your telemetry backend URL |
| OTEL_EXPORTER_OTLP_HEADERS | String | undefined | Authentication headers for OTLP exporter (e.g., "x-honeycomb-team=your-api-key"). For non-docker deployments only. Use OTEL_COLLECTOR_DESTINATION_HEADERS in docker |
| OTEL_EXPORTER_OTLP_HEADERS_FILE | String | undefined | File path containing OTLP exporter headers. For non-docker deployments only. Use OTEL_COLLECTOR_DESTINATION_HEADERS_FILE in docker |
| OTEL_FILE_EXPORT_ENABLED | Boolean | false (true with yarn service:start) | Enable file-based export of OTEL spans for development/debugging. Spans written to OTEL_FILE_EXPORT_PATH |
| OTEL_FILE_EXPORT_PATH | String | "logs/otel-spans.jsonl" | Path for file-based OTEL span export (JSONL format). Only used when OTEL_FILE_EXPORT_ENABLED is true |
| OTEL_BATCH_LOG_PROCESSOR_SCHEDULED_DELAY_MS | Number | 2000 | Delay in milliseconds before batch log export |
| OTEL_BATCH_LOG_PROCESSOR_MAX_EXPORT_BATCH_SIZE | Number | 10000 | Maximum number of log records to export in a single batch |
| OTEL_COLLECTOR_IMAGE_TAG | String | "0.119.0" | Docker image tag for OpenTelemetry Collector. Only applies to docker-compose deployments |
| OTEL_COLLECTOR_DESTINATION_ENDPOINT | String | undefined | Final telemetry destination URL where OTEL Collector forwards sampled traces. Examples: Honeycomb (https://api.honeycomb.io), Grafana Cloud, Datadog, New Relic, Elastic APM |
| OTEL_COLLECTOR_HONEYCOMB_API_KEY | String | undefined | Honeycomb API key for authentication (x-honeycomb-team header). Configure ONE backend API key |
| OTEL_COLLECTOR_GRAFANA_CLOUD_API_KEY | String | undefined | Grafana Cloud API key for authentication (Authorization: Basic header). Base64 encoded instance_id:api_key. Configure ONE backend API key |
| OTEL_COLLECTOR_DATADOG_API_KEY | String | undefined | Datadog API key for authentication (DD-API-KEY header). Configure ONE backend API key |
| OTEL_COLLECTOR_NEW_RELIC_API_KEY | String | undefined | New Relic license key for authentication (api-key header). Configure ONE backend API key |
| OTEL_COLLECTOR_ELASTIC_API_KEY | String | undefined | Elastic APM secret token for authentication (Authorization: Bearer header). Configure ONE backend API key |
| OTEL_TAIL_SAMPLING_SUCCESS_RATE | Number | 1 | Percentage (1-100) of successful/fast/unpaid traces to sample. Default: 1% provides baseline metrics with 80-95% cost reduction |
| OTEL_TAIL_SAMPLING_SLOW_THRESHOLD_MS | Number | 2000 | Latency threshold in milliseconds. Requests exceeding this duration are eligible for sampling (rate controlled by OTEL_TAIL_SAMPLING_SLOW_RATE) |
| OTEL_TAIL_SAMPLING_ERROR_RATE | Number | 100 | Percentage (1-100) of error traces (5xx status codes) to sample. Default: 100% captures all errors for debugging. Lower values reduce costs but may miss issues |
| OTEL_TAIL_SAMPLING_SLOW_RATE | Number | 100 | Percentage (1-100) of slow request traces (exceeding OTEL_TAIL_SAMPLING_SLOW_THRESHOLD_MS) to sample. Default: 100% captures all slow requests for performance analysis |
| OTEL_TAIL_SAMPLING_PAID_TRAFFIC_RATE | Number | 100 | Percentage (1-100) of paid traffic traces (x402 verified payments) to sample. Default: 100% for billing/compliance. Reduce only if paid traffic is high volume |
| OTEL_TAIL_SAMPLING_PAID_TOKENS_RATE | Number | 100 | Percentage (1-100) of paid rate limit token usage traces to sample. Default: 100% for billing/compliance. Reduce only if paid token usage is high volume |
| OTEL_TAIL_SAMPLING_NESTED_BUNDLE_RATE | Number | 5 | Percentage (1-100) of nested bundle data item retrieval traces to sample. Captures TurboDynamoDB requests involving parent offsets for visibility into nested bundle paths |
| OTEL_TAIL_SAMPLING_OFFSET_OVERWRITE_RATE | Number | 10 | Percentage (1-100) of offset overwrite risk traces to sample. Captures traces where both DynamoDB offsets AND raw data paths executed, the scenario that triggered the Release 59 offset bug |
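The three x402 pricing variables combine into a simple per-request quote: content length times X_402_USDC_PER_BYTE_PRICE, clamped between the min and max bounds, with the minimum used when the content length is unknown. A minimal sketch of that calculation (the function name `quote_egress_price` is illustrative, not from the codebase):

```python
from typing import Optional

def quote_egress_price(
    content_length: Optional[int],
    per_byte: float = 0.0000000001,  # X_402_USDC_PER_BYTE_PRICE
    min_price: float = 0.001,        # X_402_USDC_DATA_EGRESS_MIN_PRICE
    max_price: float = 1.00,         # X_402_USDC_DATA_EGRESS_MAX_PRICE
) -> float:
    """Quote a USDC price for serving `content_length` bytes."""
    if content_length is None:
        return min_price  # unknown size: fall back to the minimum price
    return min(max(content_length * per_byte, min_price), max_price)

# 1 GB at the default rate works out to roughly $0.10.
print(round(quote_egress_price(1_000_000_000), 6))  # 0.1
```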
Security Note: Variables marked as SENSITIVE SECRET (such as CDP_API_KEY_ID, CDP_API_KEY_SECRET, and CDP_API_KEY_SECRET_FILE) contain confidential credentials that must never be printed to logs, exposed in error messages, or included in any diagnostic output. Always mask or omit these values in logs, store them in a secure secrets manager, and restrict access using the principle of least privilege.
The following environment variables configure the Observer service for network gateway observations.
These settings control which gateways are used as references for ArNS resolution and chunk verification checks.
| ENV_NAME | TYPE | DEFAULT_VALUE | DESCRIPTION |
|---|---|---|---|
| REFERENCE_GATEWAY_HOSTS | String | "turbo-gateway.com" | Comma-separated list of reference gateway hosts for ArNS and chunk checks. These gateways are tried in order before network fallback |
| REFERENCE_GATEWAY_NETWORK_ONLY | Boolean | false | If true, uses only network gateways for reference checks (no explicit hosts). Enables fully decentralized observation |
| REFERENCE_GATEWAY_NETWORK_FALLBACK | Boolean | true | If true, falls back to network consensus when explicit reference hosts fail or disagree |
| REFERENCE_GATEWAY_CONSENSUS_SIZE | Number | 3 | Number of network gateways to query when building consensus for ArNS resolution |
| REFERENCE_GATEWAY_CONSENSUS_THRESHOLD | Number | 2 | Minimum number of agreeing gateways required to establish consensus |
| REFERENCE_GATEWAY_MIN_PASS_RATE | Number | 0.8 | Minimum historical pass rate (0.0-1.0) for a network gateway to be eligible for consensus queries |
| REFERENCE_GATEWAY_MIN_CONSECUTIVE_PASSES | Number | 2 | Minimum consecutive passing epochs required for network gateway eligibility |
| REFERENCE_GATEWAY_MIN_EPOCH_COUNT | Number | 5 | Minimum total epochs observed for a network gateway to be considered for eligibility |
| REFERENCE_GATEWAY_MAX_NETWORK_POOL | Number | 10 | Maximum number of eligible network gateways to keep in the pool for random selection |
| REFERENCE_GATEWAY_NETWORK_CACHE_TTL_SECONDS | Number | 300 | TTL in seconds for caching the list of eligible network gateways from the AR.IO contract |
| REFERENCE_GATEWAY_CONSENSUS_MAX_ATTEMPTS | Number | 5 | Maximum number of gateway queries to attempt when trying to reach consensus threshold |
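One way the threshold and max-attempts settings interact, sketched as a "query until enough gateways agree" loop (the function and its `query` callback are illustrative, not the Observer's actual code):

```python
from collections import Counter
from typing import Callable, List, Optional

def resolve_with_consensus(
    gateways: List[str],
    query: Callable[[str], str],
    threshold: int = 2,     # REFERENCE_GATEWAY_CONSENSUS_THRESHOLD
    max_attempts: int = 5,  # REFERENCE_GATEWAY_CONSENSUS_MAX_ATTEMPTS
) -> Optional[str]:
    """Query gateways until `threshold` of them return the same answer."""
    votes: Counter = Counter()
    for gateway in gateways[:max_attempts]:
        answer = query(gateway)
        votes[answer] += 1
        if votes[answer] >= threshold:
            return answer  # consensus reached
    return None  # attempts exhausted without threshold agreement
```

With the defaults, two agreeing gateways out of at most five queried establish the reference answer.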
These settings control the continuous observation mode, which spreads observations across the epoch window and uses majority voting for pass/fail determination.
| ENV_NAME | TYPE | DEFAULT_VALUE | DESCRIPTION |
|---|---|---|---|
| OBSERVATIONS_PER_GATEWAY | Number | 3 | Number of times each gateway is observed during an epoch. Results are aggregated using majority voting |
| OBSERVATION_WINDOW_FRACTION | Number | 0.5 | Fraction of the epoch window (0.1-0.9) during which observations are spread. Higher values spread observations over more time |
| OBSERVATION_CYCLE_INTERVAL_MS | Number | 60000 | Interval in milliseconds between observation cycles. Each cycle processes scheduled observations |
| MAJORITY_VOTE_THRESHOLD | Number | 2 | Number of passing observations needed for a gateway to pass overall. Should be ≤ OBSERVATIONS_PER_GATEWAY |
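The majority-voting aggregation reduces to a simple count (a sketch; the function name is illustrative):

```python
from typing import List

def gateway_passes(results: List[bool], threshold: int = 2) -> bool:
    """Aggregate an epoch's observations via majority voting.

    `threshold` mirrors MAJORITY_VOTE_THRESHOLD and should not exceed
    OBSERVATIONS_PER_GATEWAY (the length of `results`).
    """
    return sum(results) >= threshold

# Two of three passing observations meet the default threshold.
print(gateway_passes([True, False, True]))  # True
```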
These settings control chunk offset verification observations, which validate that gateways correctly return chunk data with proper offset information.
| ENV_NAME | TYPE | DEFAULT_VALUE | DESCRIPTION |
|---|---|---|---|
| OFFSET_OBSERVATION_SAMPLE_RATE | Number | 0.10 | Sample rate (0.0-1.0) for offset observations. Higher values test more chunks but increase load on observed gateways |
| OFFSET_OBSERVATION_ENABLED | Boolean | true | If true, enables offset observation checks. When false, offset observations are skipped entirely |
| OFFSET_OBSERVATION_ENFORCEMENT_ENABLED | Boolean | false | If true, offset observation failures affect gateway pass/fail status. When false, failures are logged but don't impact scoring |
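The interaction of the three offset-observation flags can be sketched as two small decisions: whether to sample a given chunk request, and whether a failure counts against the gateway (names illustrative):

```python
import random

def should_observe(sample_rate: float = 0.10, enabled: bool = True) -> bool:
    """Decide whether this chunk request triggers an offset observation."""
    return enabled and random.random() < sample_rate

def scored_result(passed: bool, enforce: bool = False) -> bool:
    """Map a raw offset-check result to the gateway's scored result.

    With enforcement off (the default), a failure is logged but the
    gateway still passes; with enforcement on, failures count.
    """
    return passed or not enforce
```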
These settings control whether Arweave transaction/data item tags and verification metadata are included as HTTP response headers when serving data. Note: for L2 data item signature and owner key headers, WRITE_ANS104_DATA_ITEM_DB_SIGNATURES must also be true.
| ENV_NAME | TYPE | DEFAULT_VALUE | DESCRIPTION |
|---|---|---|---|
| ARWEAVE_TAG_RESPONSE_HEADERS_ENABLED | Boolean | true | If true, includes transaction/data item tags as X-Arweave-Tag-* and verification headers (X-Arweave-Signature, X-Arweave-Owner, etc.) on /raw/:id and /:id responses |
| ARWEAVE_TAG_RESPONSE_HEADERS_MAX | Number | 100 | Maximum number of tag headers to include per response. If a transaction has more tags, an X-Arweave-Tags-Truncated: true header is added |
| ARWEAVE_TAG_RESPONSE_HEADERS_MAX_BYTES | Number | 8192 | Maximum total bytes for all emitted tag and verification headers. Prevents exceeding intermediary header size limits (nginx default 8KB). Verification headers are prioritized over tags |
| TX_METADATA_RESOLVE_CONCURRENCY | Number | 1 | Maximum number of concurrent background data item metadata resolutions. Limits resource pressure from remote fetches and DB writes when many uncached items are requested simultaneously |
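The count and byte budgets can be sketched as follows. This is illustrative only: it omits the verification headers that the real implementation prioritizes over tags, and the exact header naming and encoding may differ.

```python
from typing import Dict, List, Tuple

def build_tag_headers(
    tags: List[Tuple[str, str]],
    max_headers: int = 100,  # ARWEAVE_TAG_RESPONSE_HEADERS_MAX
    max_bytes: int = 8192,   # ARWEAVE_TAG_RESPONSE_HEADERS_MAX_BYTES
) -> Dict[str, str]:
    """Emit X-Arweave-Tag-* headers within count and byte budgets."""
    headers: Dict[str, str] = {}
    used = 0
    truncated = False
    for i, (name, value) in enumerate(tags):
        if i >= max_headers:
            truncated = True
            break
        header = f"X-Arweave-Tag-{name}"
        cost = len(header) + len(value)
        if used + cost > max_bytes:
            truncated = True  # byte budget exhausted (nginx default: 8 KB)
            break
        headers[header] = value
        used += cost
    if truncated:
        headers["X-Arweave-Tags-Truncated"] = "true"
    return headers
```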
When enabled, the transaction(id) GraphQL query can resolve unindexed data items on demand by extracting metadata directly from ANS-104 bundle binaries. Only applies to single-ID lookups; the plural transactions(...) query is unaffected.
| ENV_NAME | TYPE | DEFAULT_VALUE | DESCRIPTION |
|---|---|---|---|
| GRAPHQL_ON_DEMAND_RESOLUTION_ENABLED | Boolean | true | If true, enables on-demand data item resolution as a fallback when transaction(id) returns null from the local database |
| GRAPHQL_ON_DEMAND_RESOLUTION_TIMEOUT_MS | Number | 5000 | Maximum time in milliseconds to wait for on-demand resolution before returning null. Background resolution continues and persists for future queries |
| GRAPHQL_ON_DEMAND_RESOLUTION_MAX_CONCURRENT | Number | 1 | Maximum number of concurrent on-demand resolution operations. When at capacity, requests return null immediately |
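A simplified asyncio sketch of the timeout and concurrency behavior (names illustrative; unlike the real node, this sketch releases its concurrency slot as soon as the caller times out, even though the background task keeps running):

```python
import asyncio

# GRAPHQL_ON_DEMAND_RESOLUTION_MAX_CONCURRENT
_slots = asyncio.Semaphore(1)

async def resolve_on_demand(resolve, timeout_ms: int = 5000):
    """Try on-demand resolution; return None at capacity or on timeout."""
    if _slots.locked():
        return None  # at capacity: return null immediately
    async with _slots:
        task = asyncio.ensure_future(resolve())
        try:
            # Shield the task so a timeout does not cancel it: resolution
            # continues in the background and can persist for future queries.
            return await asyncio.wait_for(asyncio.shield(task), timeout_ms / 1000)
        except asyncio.TimeoutError:
            return None  # caller gets null; `task` keeps running
```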
Signs gateway responses with an Ed25519 key per RFC 9421 (HTTP Message Signatures). Every qualifying response gets Signature and Signature-Input headers that cryptographically bind the gateway's trust claims (verification status, ArNS resolution, data item tags) to a staked on-chain identity.
When OBSERVER_WALLET is set, the gateway also creates an RSA attestation linking the Ed25519 signing key to the observer wallet (which is registered to the gateway on-chain), and uploads it permanently to Arweave.
| ENV_NAME | TYPE | DEFAULT_VALUE | DESCRIPTION |
|---|---|---|---|
| HTTPSIG_ENABLED | Boolean | false | Enable RFC 9421 HTTP Message Signature signing on gateway responses |
| HTTPSIG_KEY_FILE | String | data/keys/httpsig.pem | Path to Ed25519 private key PEM file. Auto-generated on first startup if missing |
| KEYS_DATA_PATH | String | ./data/keys | Host path for the keys volume mount (docker-compose only). Maps to /app/data/keys inside the container |
| HTTPSIG_BIND_REQUEST | Boolean | true | Include @method;req and @path;req in signatures, binding each response to the specific request that triggered it |
| HTTPSIG_UPLOAD_ATTESTATION | Boolean | true | Upload the attestation to Arweave at startup (requires OBSERVER_WALLET). Set to false to skip upload |
| OBSERVER_WALLET | String | - | Arweave wallet address for attestation signing. Key file must exist at <WALLETS_PATH>/<OBSERVER_WALLET>.json |
| WALLETS_PATH | String | wallets | Directory containing wallet JWK files |
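The effect of HTTPSIG_BIND_REQUEST on the Signature-Input header can be sketched as below. The `"@status"` and `"content-digest"` components are illustrative assumptions; only the `;req`-bound request components come from the description above, and the gateway's actual covered-component list may differ.

```python
import time

def signature_input_header(key_id: str, bind_request: bool = True) -> str:
    """Build an RFC 9421 Signature-Input value for the covered components."""
    components = ['"@status"', '"content-digest"']
    if bind_request:
        # HTTPSIG_BIND_REQUEST: bind the response to the request's
        # method and path via ;req-tagged components.
        components += ['"@method";req', '"@path";req']
    params = f'created={int(time.time())};keyid="{key_id}";alg="ed25519"'
    return f'sig=({" ".join(components)});{params}'
```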
Operator-defined TTL rules for data in ClickHouse. These are reloaded from disk by clickhouse-auto-import at the top of every import cycle. See the Parquet and ClickHouse usage documentation for the rules-file format and behavior.
| ENV_NAME | TYPE | DEFAULT_VALUE | DESCRIPTION |
|---|---|---|---|
| CLICKHOUSE_TTL_RULES_PATH | String | ./config/clickhouse-ttl-rules.yaml | Path to the YAML file of tag- and owner-based TTL rules loaded into ClickHouse before each import cycle |
When ClickHouse is configured alongside SQLite, GraphQL transaction queries hit both stores and merge the results. These settings let the composite backend skip SQLite for heights already covered by ClickHouse, avoiding duplicate scans. The buffer reserves recent heights (where ClickHouse ingestion may be partial) for SQLite.
| ENV_NAME | TYPE | DEFAULT_VALUE | DESCRIPTION |
|---|---|---|---|
| CLICKHOUSE_SQLITE_MIN_HEIGHT_ENABLED | Boolean | false | When true, restrict the SQLite fallback to heights above (ClickHouse max height - buffer) |
| CLICKHOUSE_SQLITE_MIN_HEIGHT_BUFFER | Number | 10 | Heights reserved for SQLite near the ClickHouse tip, to guard against partially ingested recent blocks |
| CLICKHOUSE_MAX_HEIGHT_CACHE_TTL_SECONDS | Number | 60 | TTL for the cached ClickHouse max-height lookup used by the boundary optimization |
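The boundary the buffer establishes can be sketched as a single computation (function name illustrative):

```python
def sqlite_min_height(
    clickhouse_max_height: int,
    buffer: int = 10,       # CLICKHOUSE_SQLITE_MIN_HEIGHT_BUFFER
    enabled: bool = False,  # CLICKHOUSE_SQLITE_MIN_HEIGHT_ENABLED
) -> int:
    """Lowest height the SQLite fallback still scans.

    Heights at or below the boundary are served by ClickHouse alone;
    the buffer keeps recent, possibly partially ingested heights in SQLite.
    """
    if not enabled:
        return 0  # no restriction: SQLite scans all heights
    return max(clickhouse_max_height - buffer, 0)

# With ClickHouse at height 1,500,000 and the default buffer,
# SQLite only handles heights above 1,499,990.
print(sqlite_min_height(1_500_000, enabled=True))  # 1499990
```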