🚨 CRITICAL: Use these remote clusters for distributed testing across environments.
mump2p supports connecting to remote P2P clusters for distributed testing and production use:
# Subscribe to a topic on a remote node
./grpc_p2p_client/p2p-client -mode=subscribe -topic=mytopic --addr=34.124.246.10:33212
# Publish messages to a remote node
./grpc_p2p_client/p2p-client -mode=publish -topic=mytopic -msg="Hello World" --addr=34.40.169.185:33212

Key Points:
- Remote nodes use the standard sidecar port 33212
- Ensure you have the correct CLUSTER_ID for the target cluster
- Messages will propagate across the entire distributed mesh network
Complete guide for setting up and using the mump2p development environment.
- Prerequisites
- Getting Started
- Build and Development Commands
- Configuration
- Two Ways to Connect
- Setup and Installation
- Monitoring
- Troubleshooting
- API Reference
- Client Tools
- Monitoring and Telemetry
- Troubleshooting
- Developer Tools
Install these tools before starting:
- Docker and Docker Compose
- Node.js and npm (for WebSocket testing)
- Golang (required for key generation script)
- wscat (WebSocket client for testing)
To install wscat:
npm install -g wscat

Note: For key management, check out the key ring in the keygen/ directory.
./script/generate-identity.sh

This creates the bootstrap peer identity needed for P2P node discovery.
Create .env file with your assigned credentials:
BOOTSTRAP_PEER_ID=<your-generated-peer-id>
CLUSTER_ID=<your-assigned-cluster-id>

Note: Each participant will generate their own unique bootstrap identity and receive their assigned cluster ID. Do not copy from examples - use your specific values.
docker-compose -f docker-compose-optimum.yml up --build

./test_suite.sh

The project includes a Makefile with convenient shortcuts for common development tasks:
# Show all available commands and usage examples
make help
# Build all client binaries
make build
# Generate P2P identity (if missing)
make generate-identity
# Subscribe to a topic
make subscribe 127.0.0.1:33221 testtopic
# Publish random messages
make publish 127.0.0.1:33221 testtopic random
make publish 127.0.0.1:33221 testtopic random 10 1s
# Clean build artifacts
make clean

After building with make build, you can use the binaries directly:
Single-Node Client (p2p-client):
# P2P Client Help
./grpc_p2p_client/p2p-client --help
# Subscribe to a topic
./grpc_p2p_client/p2p-client -mode=subscribe -topic=testtopic --addr=127.0.0.1:33221
# Publish messages
./grpc_p2p_client/p2p-client -mode=publish -topic=testtopic -msg=HelloWorld --addr=127.0.0.1:33222
# Publish multiple messages with delay
./grpc_p2p_client/p2p-client -mode=publish -topic=testtopic -msg="Random Message" --addr=127.0.0.1:33222 -count=5 -sleep=1s

Multi-Node Clients:
# Multi-node publisher (publish to multiple nodes)
./grpc_p2p_client/p2p-multi-publish -topic=testtopic -ipfile=ips.txt -count=10 -sleep=500ms
# Multi-node subscriber (subscribe to multiple nodes)
./grpc_p2p_client/p2p-multi-subscribe -topic=testtopic -ipfile=ips.txt -output-data=data.tsv -output-trace=trace.tsv

See the Multi-Node Client Tools section for detailed usage.
Example Output:
# Subscribe output:
Connecting to node at: 127.0.0.1:33221…
Trying to subscribe to topic testtopic…
Subscribed to topic "testtopic", waiting for messages…
Recv message: [1] [1757588485854443000 75] [1757588485852133000 50] HelloWorld
# Publish output:
Connecting to node at: 127.0.0.1:33222…
Published "[1757588485852133000 50] HelloWorld" to "testtopic" (took 840.875µs)
Default values are provided, but it's important to understand what each variable does.
Copy the example environment file:
cp .env.example .env

Important: After copying, replace the BOOTSTRAP_PEER_ID in your .env file with the peer ID generated by make generate-identity.
Workflow:
- Run make generate-identity - this creates a unique peer ID
- Copy the generated peer ID from the output
- Edit your .env file and replace the example BOOTSTRAP_PEER_ID with your generated one
The .env.example file contains:
BOOTSTRAP_PEER_ID=12D3KooWD5RtEPmMR9Yb2ku5VuxqK7Yj1Y5Gv8DmffJ6Ei8maU44
CLUSTER_ID=docker-dev-cluster
PROXY_VERSION=v0.0.1-rc16
P2P_NODE_VERSION=v0.0.1-rc16

Variables explained:

- BOOTSTRAP_PEER_ID: P2P node identity for network discovery
- CLUSTER_ID: Logical cluster identifier
- PROXY_VERSION: Docker image version for proxy services
- P2P_NODE_VERSION: Docker image version for P2P node services
The docker-compose files use these version variables:
- ${PROXY_VERSION-latest} - uses PROXY_VERSION if set, otherwise latest
- ${P2P_NODE_VERSION-latest} - uses P2P_NODE_VERSION if set, otherwise latest
- CLUSTER_ID: Logical ID of the node
- NODE_MODE: optimum or gossipsub mode (should be optimum)
- SIDECAR_PORT: gRPC bidirectional port to which proxies connect (default: 33212)
- API_PORT: HTTP port exposing internal node APIs (default: 9090)
- IDENTITY_DIR: Directory containing node identity (p2p.key) (needed only for bootstrap node; each node generates its own on start)
- BOOTSTRAP_PEERS: Comma-separated list of peer multiaddrs with /p2p/ ID for initial connection
- OPTIMUM_PORT: Port used by mump2p (default: 7070)
- OPTIMUM_MAX_MSG_SIZE: Max message size in bytes (default: 1048576, i.e. 1MB)
- OPTIMUM_MESH_MIN: Min number of mesh-connected peers (default: 3)
- OPTIMUM_MESH_MAX: Max number of mesh-connected peers (default: 12)
- OPTIMUM_MESH_TARGET: Target number of peers to connect to (default: 6)
- OPTIMUM_SHARD_FACTOR: Number of shards per message (default: 4)
- OPTIMUM_SHARD_MULT: Shard size multiplier (default: 1.5)
- OPTIMUM_THRESHOLD: Minimum % of shard redundancy before forwarding a message (e.g., 0.75 = 75%)
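To get a feel for how OPTIMUM_SHARD_FACTOR and OPTIMUM_THRESHOLD interact, here is a back-of-envelope sketch. It interprets the threshold as a fraction of the shard count; treat the node's own implementation as authoritative:

```python
import math

def shards_needed(shard_factor: int, threshold: float) -> int:
    # Number of shards a node must observe before forwarding,
    # assuming OPTIMUM_THRESHOLD is applied as a fraction of
    # OPTIMUM_SHARD_FACTOR (an interpretation, not the node's code).
    return math.ceil(shard_factor * threshold)

# With the defaults above (4 shards per message, 0.75 threshold),
# 3 shards must arrive before a node forwards the message.
print(shards_needed(4, 0.75))  # 3
```

Raising OPTIMUM_SHARD_FACTOR with the same threshold increases both redundancy and the number of shards each node waits for.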
If you want to learn about mesh networking, see how Eth2 uses gossipsub.
You can also generate the identity with a single command:
curl -sSL https://raw.githubusercontent.com/getoptimum/optimum-dev-setup-guide/main/script/generate-identity.sh | bash

This downloads and runs the same identity generation script, creating the bootstrap peer identity and setting the environment variable.
- Via Proxy (recommended): Connect to proxies for managed access with authentication and rate limiting
- Direct P2P: Connect directly to P2P nodes for low-level integration
Generate the P2P bootstrap identity for node discovery:
./script/generate-identity.sh

Output:
[SUCCESS] Generated P2P identity successfully!
[SUCCESS] Identity saved to: ./identity/p2p.key
[SUCCESS] Peer ID: 12D3KooWLsSmLLoE2T7JJ3ZyPqoXEusnBhsBA1ynJETsziCKGsBw
# Start all services in detached mode
docker-compose -f docker-compose-optimum.yml up --build -d
# Check service status
docker-compose -f docker-compose-optimum.yml ps

Expected Services:
- proxy-1: HTTP :8081, gRPC :50051
- proxy-2: HTTP :8082, gRPC :50052
- p2pnode-1: API :9091, Sidecar :33221
- p2pnode-2: API :9092, Sidecar :33222
- p2pnode-3: API :9093, Sidecar :33223
- p2pnode-4: API :9094, Sidecar :33224
Run the comprehensive test suite:
./test_suite.sh

The setup includes Grafana dashboards for visualizing P2P node and proxy metrics.
Access Grafana:
- URL: http://localhost:3000
- Credentials: admin / admin
Available Dashboards:
- Optimum Proxy Dashboard: Proxy uptime, cluster health, CPU/memory usage, goroutines
- mump2p Nodes Dashboard: P2P node system state, CPU usage, RAM utilization
Prometheus:
- URL: http://localhost:9090
- Scrapes metrics from all P2P nodes (port 9090) and proxies (port 8080)
Use the geolocation-aware endpoints to understand how your mesh is distributed:
Proxy view of node geolocation:
curl http://localhost:8081/api/v1/node-countries | jq

Health and country info of the node:

curl http://localhost:9091/api/v1/health | jq

If you encounter issues during setup, here are common problems and solutions:
"node not found" errors:
- Ensure all P2P nodes have access to the identity file (volume mounts are configured correctly)
- Verify the .env file contains the correct BOOTSTRAP_PEER_ID
- Check that the identity file was generated using the correct script
"checksum mismatch" errors:
- Delete the identity/ directory and regenerate using ./script/generate-identity.sh
- The identity file must have the proper checksum format expected by mump2p nodes
Nodes not connecting to bootstrap:
- Verify all nodes are configured with the correct CLUSTER_ID
- Check that the bootstrap peer ID in BOOTSTRAP_PEERS matches the generated identity
- Ensure the network topology allows proper communication between nodes
Proxy connection issues:
- Verify all P2P nodes are healthy and running
- Check that the proxy can reach all P2P node sidecar ports (33212)
- Ensure the P2P_NODES environment variable contains correct node addresses
Follow this workflow when subscriptions are not receiving messages:
- Confirm topic subscription

  curl "http://localhost:9091/api/v1/topics?topic=<topic>&nodeinfo=true"

  Ensure the expected peers appear with nodeinfo=true.

- Inspect mesh propagation

  curl http://localhost:9091/api/v1/p2p-snapshot | jq

  Check that the message hash appears under seen_message_hashes (and eventually fully_decoded_message_hashes) and that connected_peers shows the nodes forwarding shards. If the arrays remain empty, publish test traffic (make publish 127.0.0.1:33221 <topic> random 5) and re-check.

- Validate proxy routing

  curl http://localhost:8081/api/v1/node-countries | jq

  Ensure the proxy lists every node you expect it to route through. Missing entries indicate connectivity or configuration issues.

- Review the client

  Confirm the subscriber process is still connected (WebSocket or gRPC) and using the same client_id that was supplied when calling /api/v1/subscribe.
Proxies provide the user-facing interface to OptimumP2P.
curl -X POST http://localhost:8081/api/v1/subscribe \
-H "Content-Type: application/json" \
-d '{
"client_id": "your-client-id",
"topic": "example-topic",
"threshold": 0.7
}'

- client_id: Used to identify the client across WebSocket sessions. Auth0 user_id recommended if JWT is used.
- threshold: Forward the message to this client only if X% of nodes have received it.
Here, client_id is your WebSocket connection identifier. Usually, we use Auth0 user_id to have a controlled environment, but here you can use any random string. Make sure to use the same string when making the WebSocket connection to receive the message.
wscat -c "ws://localhost:8081/api/v1/ws?client_id=your-client-id"

This is how clients receive messages published to their subscribed topics.
Important: WebSocket has limitations, and you may experience unreliable delivery when publishing message bursts. A gRPC connection will be provided in a future update.
curl -X POST http://localhost:8081/api/v1/publish \
-H "Content-Type: application/json" \
-d '{
"client_id": "your-client-id",
"topic": "example-topic",
"message": "Hello, world!"
}'

Important: The client_id field is required for all publish requests. This should be the same ID used when subscribing to topics. If you're using WebSocket connections, use the same client_id for consistency.
Clients can use gRPC to stream messages from the Proxy.
Protobuf: proxy_stream.proto
service ProxyStream {
rpc ClientStream (stream ProxyMessage) returns (stream ProxyMessage);
}
message ProxyMessage {
string client_id = 1;
bytes message = 2;
string topic = 3;
string message_id = 4;
string type = 5;
}

- Bidirectional streaming.
- First message must include only client_id.
- All subsequent messages are sent by Proxy on subscribed topics.
sh ./script/proxy_client.sh subscribe mytopic 0.7
sh ./script/proxy_client.sh publish mytopic 0.6 10

The Compose file runs two proxy instances on the host at http://localhost:8081 and http://localhost:8082. Examples below use proxy-1 (localhost:8081). If authentication is enabled, add the header Authorization: Bearer <token> to every request.
curl http://localhost:8081/api/v1/health

Example Response:
{
"country": "India",
"country_iso": "IN",
"cpu_used": "2.15",
"disk_used": "75.09",
"memory_used": "74.56",
"status": "ok"
}

Returns proxy health along with geolocation for its connected P2P nodes.
curl http://localhost:8081/api/v1/version

Response:
{
"version": "v0.0.1-rc16",
"commit_hash": "8f3057d"
}

curl -X POST http://localhost:8081/api/v1/subscribe \
-H "Content-Type: application/json" \
-d '{
"client_id": "your-client-id",
"topic": "example-topic",
"threshold": 0.7
}'

Response:
{
"status": "subscribed",
"client": "your-client-id"
}

Parameters:

- client_id: Required when auth is disabled; with JWT auth the proxy uses the token subject.
- topic: Topic name to subscribe to.
- threshold: Delivery threshold from 0.1 to 1.0 (default 0.1).
curl -X POST http://localhost:8081/api/v1/publish \
-H "Content-Type: application/json" \
-d '{
"client_id": "your-client-id",
"topic": "example-topic",
"message": "Hello, world!"
}'

Response:
{
"status": "published",
"topic": "example-topic",
"message_id": "630bae9baff93fd17a4e71611b2ca7da950860c4f90bcd420f7528571615e7df"
}

Use the same client_id used during subscription so the proxy can correlate publish and receive flows.
wscat -c "ws://localhost:8081/api/v1/ws?client_id=your-client-id" \
-H "Authorization: Bearer <token-if-auth-enabled>"

The WebSocket endpoint streams deliveries for all active subscriptions belonging to the supplied client_id.
curl http://localhost:8081/api/v1/node-countries

Returns geolocation information (country and ISO code) for every connected P2P node. Use this to verify regional coverage or feed dashboards that show geographic distribution.
Example Response:
{
"count": 4,
"countries": {
"p2pnode-1:33212": "India",
"p2pnode-2:33212": "India",
"p2pnode-3:33212": "India",
"p2pnode-4:33212": "India"
},
"country_isos": {
"p2pnode-1:33212": "IN",
"p2pnode-2:33212": "IN",
"p2pnode-3:33212": "IN",
"p2pnode-4:33212": "IN"
}
}

curl http://localhost:8081/metrics

Prometheus-formatted metrics are enabled by default.
Each node exposes an HTTP API on the host ports 9091-9094 (mapping to container port 9090). These endpoints surface health, topology, and protocol diagnostics.
curl http://localhost:9091/api/v1/health

Response:
{
"country": "India",
"country_iso": "IN",
"cpu_used": "2.78",
"disk_used": "75.09",
"memory_used": "74.72",
"mode": "optimum",
"status": "ok"
}

Provides node health, resource utilisation, and geolocation metadata.
curl http://localhost:9091/api/v1/node-state

Response:
{
"pub_key": "12D3KooWMwzQYKhRvLRsiKignZ2vAtkr1nPYiDWA7yeZvRqC9ST9",
"country": "United States",
"country_iso": "US",
"addresses": ["/ip4/172.28.0.12/tcp/7070"],
"peers": [
"12D3KooWDLm7bSFnoqP4mhoJminiCixbR2Lwyqu9sU5EDKVvXM5j",
"12D3KooWJrPmTdXj9hirigHs88BHe6DApLpdXiKrwF1V8tNq9KP7"
],
"topics": ["demo", "example-topic"]
}

Useful for confirming peer connectivity, advertised addresses, and current topic assignments.
curl http://localhost:9091/api/v1/config

Returns the active configuration for the node (cluster ID, protocol mode, mesh parameters, etc.) with sensitive values removed. Use this to verify that environment variables are applied as expected.
curl http://localhost:9091/api/v1/p2p-snapshot

Produces a detailed snapshot that includes message hashes (seen vs decoded), shard statistics, peer scores, and per-connection metadata.
Example:
{
"seen_message_hashes": [],
"fully_decoded_message_hashes": [],
"undecoded_shard_buffer_sizes": {},
"total_peer_count": 0,
"full_message_node_count": 0,
"partial_message_node_count": 0,
"connected_peers": []
}

Run a publish workflow (for example make publish 127.0.0.1:33221 testtopic random 5) and re-query to see hashes move from seen_message_hashes to fully_decoded_message_hashes, plus per-peer shard stats in connected_peers.
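When watching a publish run, it can help to condense successive snapshots into a few numbers. A small sketch follows; the field names are taken from the example payload above, and the schema may differ between versions:

```python
import json

def summarize_snapshot(raw: str) -> dict:
    # Condense a p2p-snapshot payload into a few health numbers.
    snap = json.loads(raw)
    seen = len(snap.get("seen_message_hashes", []))
    decoded = len(snap.get("fully_decoded_message_hashes", []))
    return {
        "seen": seen,
        "decoded": decoded,
        "pending": seen - decoded,  # seen but not yet fully decoded
        "peers": snap.get("total_peer_count", 0),
    }

example = ('{"seen_message_hashes": ["a", "b"], '
           '"fully_decoded_message_hashes": ["a"], '
           '"total_peer_count": 3}')
print(summarize_snapshot(example))
```

A pending count that stays above zero across several polls suggests shards are arriving but the threshold for full decoding is not being met.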
List all topics with peer counts:
curl http://localhost:9091/api/v1/topics

Inspect a single topic:

curl "http://localhost:9091/api/v1/topics?topic=example-topic"

Include detailed peer information:

curl "http://localhost:9091/api/v1/topics?topic=example-topic&nodeinfo=true"

The topics endpoint lets you audit subscription state across the mesh. Set nodeinfo=true to obtain peer IDs, addresses, and connection status for every subscriber on the requested topic.
The gRPC API provides high-performance streaming capabilities.
service ProxyStream {
rpc ClientStream (stream ProxyMessage) returns (stream ProxyMessage);
rpc Publish (PublishRequest) returns (PublishResponse);
rpc Subscribe (SubscribeRequest) returns (SubscribeResponse);
}

Add JWT token to gRPC metadata:
ctx = metadata.AppendToOutgoingContext(ctx, "authorization", "Bearer YOUR_JWT_TOKEN")

Note: The provided client code in grpc_proxy_client/proxy_client.go is a SAMPLE implementation intended for demonstration and testing purposes only. It is not production-ready and should not be used as-is in production environments. Review, adapt, and harden the code according to your security, reliability, and operational requirements before any production use.
A new Go-based gRPC client implementation is available in grpc_proxy_client/proxy_client.go that provides:
- Bidirectional gRPC Streaming: Establishes persistent connection with the proxy
- REST API Integration: Uses REST for subscription and publishing
- Automatic Client ID Generation: Generates unique client identifiers
- Optimized gRPC Connection: Efficient bidirectional streaming
- Message Publishing Loop: Automated message publishing with configurable delays
- Signal Handling: Graceful shutdown on interrupt
# Build the client
cd grpc_proxy_client
go build -o proxy_client proxy_client.go
# Subscribe only (receive messages)
./proxy_client -subscribeOnly -topic=test -threshold=0.7
# Subscribe and publish messages
./proxy_client -topic=test -threshold=0.7 -count=10 -delay=2s
# Custom connection settings
./proxy_client -topic=test -threshold=0.7 -count=10

Flags:

- -topic: Topic name to subscribe/publish (default: "demo")
- -threshold: Delivery threshold 0.0 to 1.0 (default: 0.1)
- -subscribeOnly: Only subscribe and receive messages
- -count: Number of messages to publish (default: 5)
- -delay: Delay between message publishing (default: 2s)
- -proxy: Proxy server address (default: "localhost:33211")
- -rest: REST API base URL (default: "http://localhost:8081")
- Subscription: Client subscribes to topic via REST API
- gRPC Connection: Establishes bidirectional stream with proxy
- Client ID Registration: Sends client_id as first message
- Message Reception: Receives messages on subscribed topics
- Message Publishing: Publishes messages via REST API (optional)
The gRPC clients use auto-generated protobuf files:
Proxy Client:
- grpc_proxy_client/grpc/proxy_stream.pb.go: Message type definitions
- grpc_proxy_client/grpc/proxy_stream_grpc.pb.go: gRPC service definitions
P2P Client:
- grpc_p2p_client/grpc/p2p_stream.pb.go: Message type definitions
- grpc_p2p_client/grpc/p2p_stream_grpc.pb.go: gRPC service definitions
To regenerate these files:
Proxy Client:
cd grpc_proxy_client
protoc --go_out=. --go_opt=paths=source_relative \
--go-grpc_out=. --go-grpc_opt=paths=source_relative \
proto/proxy_stream.proto

P2P Client:
cd grpc_p2p_client
protoc --go_out=. --go_opt=paths=source_relative \
--go-grpc_out=. --go-grpc_opt=paths=source_relative \
proto/p2p_stream.proto

If you prefer to interact directly with the P2P mesh, bypassing the proxy, you can use a minimal client script to publish and subscribe directly over the gRPC sidecar interface of the nodes.
This is useful for:
- Low-level integration
- Bypassing HTTP/WebSocket stack
- Simulating internal services or embedded clients
Local Docker Development:
./grpc_p2p_client/p2p-client -mode=subscribe -topic=mytopic --addr=localhost:33221

Note: Here, localhost:33221 is the mapped port for p2pnode-1 (33221:33212) in docker-compose.
External/Remote P2P Nodes:
./grpc_p2p_client/p2p-client -mode=subscribe -topic=mytopic --addr=34.124.246.10:33212

Note: External nodes use the standard sidecar port 33212 directly.
Example response:
Subscribed to topic "mytopic", waiting for messages…
Received message: "random"
Received message: "random1"
Received message: "random2"

When subscribing to topics, you'll see detailed message information in this format:
Recv message: [1] [1757579641382484000 126] [1757579641203739000 100] bqhn4Yhab4KorTqcHmViooGF3gPmjSwAZon8kjMUGJY8aRoH/ogmuTZ+IHS/xwa1
meOKYWvJ37ossi5bbMGAg5TgsB0aP61x/Oi

Message Format Breakdown:
- [1] - Message Sequence Number: incremental counter of received messages (shows this is the 1st, 2nd, 3rd... message received)
- [1757579641382484000 126] - Receive Timestamp & Size: Unix timestamp in nanoseconds when the message was received, and total message size in bytes (including prefix)
- [1757579641203739000 100] - Original Publish Timestamp & Content Size: Unix timestamp in nanoseconds when the message was originally published, and original content size in bytes (before the prefix was added)
- bqhn4Yhab4KorTqcHmViooGF3gPmjSwAZon8kjMUGJY8aRoH/ogmuTZ+IHS/xwa1... - Message Content: the actual message data as originally published (base64-encoded random content in this example)
Key Insights:
- Network Latency: ~179ms for this message (difference between the publish and receive timestamps above)
- Message Integrity: Content size matches original (100 bytes)
- Real-time Delivery: Messages arrive with precise timing information
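The breakdown above can be checked mechanically. The following sketch parses a received line and computes publish-to-receive latency; the line format is inferred from the example output, not from a formal spec:

```python
import re

# Matches: Recv message: [seq] [recv_ts size] [pub_ts size] payload
LINE_RE = re.compile(
    r"Recv message: \[(\d+)\] \[(\d+) (\d+)\] \[(\d+) (\d+)\] (.*)"
)

def parse_recv_line(line: str) -> dict:
    m = LINE_RE.match(line)
    if not m:
        raise ValueError("unrecognised line format")
    seq, recv_ts, total, pub_ts, content_size, payload = m.groups()
    return {
        "seq": int(seq),
        "latency_ms": (int(recv_ts) - int(pub_ts)) / 1e6,  # ns -> ms
        "prefix_bytes": int(total) - int(content_size),
        "payload": payload,
    }

line = ("Recv message: [1] [1757579641382484000 126] "
        "[1757579641203739000 100] bqhn4Yhab4...")
info = parse_recv_line(line)
# For the sample line: latency 178.745 ms, prefix 26 B
print(f"latency {info['latency_ms']:.3f} ms, prefix {info['prefix_bytes']} B")
```

The 26-byte difference between total and content size is the "[timestamp size] " prefix the publisher prepends.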
Local Docker Development:
./grpc_p2p_client/p2p-client -mode=publish -topic=mytopic -msg="Hello World" --addr=localhost:33222

Note: Here, localhost:33222 is the mapped port for p2pnode-2 (33222:33212) in docker-compose.
External/Remote P2P Nodes:
./grpc_p2p_client/p2p-client -mode=publish -topic=mytopic -msg="Hello World" --addr=35.197.161.77:33212

Note: External nodes use the standard sidecar port 33212 directly.
Example response:
Published "[1757588485852133000 26] random" to "mytopic" (took 72.042µs)

- --addr refers to the sidecar gRPC port exposed by the P2P node (e.g., 33221, 33222, etc.)
- Messages published here will still follow RLNC encoding, mesh forwarding, and threshold policies
- Proxy(s) will pick these up only if enough nodes receive the shards (threshold logic)
The P2P client supports various publishing options for testing:
# Publish a single message
./grpc_p2p_client/p2p-client -mode=publish -topic=my-topic -msg="Hello World" --addr=127.0.0.1:33221
# Publish multiple messages with delay
./grpc_p2p_client/p2p-client -mode=publish -topic=my-topic -msg="Random Message" --addr=127.0.0.1:33221 -count=10 -sleep=1s

For high-volume testing with random messages:
# Publish 50 random messages with 2-second delays
for i in `seq 1 50`; do
string=$(openssl rand -base64 768 | head -c 100)
echo "Publishing message $i: $string"
./grpc_p2p_client/p2p-client -mode=publish -topic=mytopic --addr=34.40.4.192:33212 -msg="$string"
sleep 2
done

Features:
- Random Content: Each message contains 100 random characters
- High Volume: Publishes 50 messages in sequence
- Real-time Feedback: Shows message number and content being published
- Configurable Delay: 2-second intervals between messages
- Remote Testing: Uses remote P2P node for distributed testing
- -count: Number of messages to publish (default: 1)
- -sleep: Delay between publishes (e.g., 100ms, 1s)
For testing and stress testing across multiple P2P nodes simultaneously, the project provides specialized multi-node clients:
- p2p-client (single-node client) - from cmd/single/
  - Connects to one P2P node at a time
  - Best for: Simple testing, development, single-node operations
  - Supports both subscribe and publish modes
- p2p-multi-publish - from cmd/multi-publish/
  - Publishes messages to multiple nodes concurrently
  - Best for: Stress testing, load generation, distributed publishing
  - Reads IP addresses from a file
- p2p-multi-subscribe - from cmd/multi-subscribe/
  - Subscribes to multiple nodes concurrently
  - Best for: Monitoring multiple nodes, collecting trace data, distributed subscription testing
  - Supports output files for data and trace collection
Publish messages to multiple nodes simultaneously:
Create a text file (ips.txt) with IP addresses, one per line. For the Docker Compose stack, host ports are 33221–33224:
localhost:33221
localhost:33222
localhost:33223
localhost:33224
Then publish to all nodes in the file:
./grpc_p2p_client/p2p-multi-publish -topic=test-topic -ipfile=ips.txt -count=10 -sleep=500ms

Flags:

- -topic: Topic name to publish to (required)
- -ipfile: File containing IP addresses, one per line (required)
- -start-index: Starting index in IP file for selecting a subset of IPs (default: 0)
- -end-index: Ending index in IP file (exclusive, default: 10000)
- -count: Number of messages to publish per node (default: 1, must be >= 1)
- -datasize: Size in bytes of the random message payload (default: 100, must be >= 1)
- -sleep: Delay between messages (e.g., 500ms, 1s)
Index Range Selection (-start-index and -end-index):
These flags allow you to select a specific range of IP addresses from your IP file, which is useful when:
- Testing with a subset of nodes from a large IP file
- Running parallel tests across different IP ranges
How it works:
- Uses zero-based indexing (first IP is at index 0)
- -end-index is exclusive (like Python slicing: ips[start:end])
- If your IP file has 10 IPs and you use -start-index=0 -end-index=5, it will use IPs at indices 0, 1, 2, 3, 4 (5 IPs total)
- If -end-index exceeds the number of IPs in the file, it automatically caps to the file length
- Both flags must be valid: start-index >= 0, start-index < end-index, and start-index < total IPs
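These rules match ordinary slice semantics. A quick sketch of the selection logic (illustrative only, not the client's actual code):

```python
def select_ips(ips: list[str], start: int = 0, end: int = 10000) -> list[str]:
    # Pick the IP subset the multi-node clients would use:
    # end is exclusive and capped to the file length.
    if start < 0 or start >= min(end, len(ips)):
        raise ValueError("invalid index range")
    return ips[start:min(end, len(ips))]

ips = [f"localhost:{33221 + i}" for i in range(4)]
print(select_ips(ips, 0, 2))  # ['localhost:33221', 'localhost:33222']
```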
Examples:
# Publish to first 5 IPs (indices 0-4)
./grpc_p2p_client/p2p-multi-publish \
-topic=test-topic \
-ipfile=ips.txt \
-start-index=0 \
-end-index=5 \
-count=10
# Publish to IPs 10-19 (indices 10-19)
./grpc_p2p_client/p2p-multi-publish \
-topic=test-topic \
-ipfile=ips.txt \
-start-index=10 \
-end-index=20 \
-count=10
# If not specified, defaults to start-index=0, end-index=10000
# (or file length if less than 10000 IPs)
./grpc_p2p_client/p2p-multi-publish -topic=test-topic -ipfile=ips.txt

Example Output:
Found 4 IPs: [localhost:33221 localhost:33222 localhost:33223 localhost:33224]
Connected to node at: localhost:33221…
Connected to node at: localhost:33222…
Connected to node at: localhost:33223…
Connected to node at: localhost:33224…
Published data size 116
Published to "test-topic" (took 263.709µs)
...
Subscribe to multiple nodes and collect messages:
# Subscribe to all nodes in the file (Docker: use ips with ports 33221–33224, start-index=0, end-index=4)
./grpc_p2p_client/p2p-multi-subscribe -topic=test-topic -ipfile=ips.txt
# With output files for data analysis
./grpc_p2p_client/p2p-multi-subscribe \
-topic=test-topic \
-ipfile=ips.txt \
-output-data=received_data.tsv \
-output-trace=trace_data.tsv

Flags:

- -topic: Topic name to subscribe to (required)
- -ipfile: File containing IP addresses, one per line (required)
- -start-index: Starting index in IP file for selecting a subset of IPs (default: 0)
- -end-index: Ending index in IP file (exclusive, default: 10000)
- -output-data: Output file for message data (TSV format: receiver, sender, size, sha256)
- -output-trace: Output file for trace events (TSV format: type, peerID, receivedFrom, messageID, topic, timestamp)
Note: The -start-index and -end-index flags work the same way as described in the Multi-Node Publisher Usage section above.
Example Output:
numip 4 index 10000
Found 4 IPs: [localhost:33221 localhost:33222 localhost:33223 localhost:33224]
IP - localhost:33221
IP - localhost:33222
Connected to node at: localhost:33221…
Trying to subscribe to topic test-topic…
Subscribed to topic "test-topic", waiting for messages…
Connected to node at: localhost:33222…
Trying to subscribe to topic test-topic…
Subscribed to topic "test-topic", waiting for messages…
...
Output File Formats:
Data Output (-output-data):
receiver sender size sha256(msg)
localhost:33221 node1 116 abc123def456...
localhost:33222 node1 116 abc123def456...
Trace Output (-output-trace):
PUBLISH_MESSAGE 12D3KooW... z4aVidFb... test-topic 1764229885872324880
DELIVER_MESSAGE 12D3KooW... 12D3KooW... z4aVidFb... test-topic 1764229885872574964
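Trace files like the one above can be post-processed to measure propagation delay. This sketch pairs PUBLISH_MESSAGE and DELIVER_MESSAGE rows by message ID; the column layout is taken from the flag description above, so adjust it if your build writes different fields:

```python
def delivery_latency_ms(rows: list[list[str]]) -> dict[str, float]:
    # Map message ID -> publish-to-deliver latency in milliseconds.
    published, latencies = {}, {}
    for row in rows:
        if row[0] == "PUBLISH_MESSAGE":
            # columns: type, peerID, messageID, topic, timestamp(ns)
            published[row[2]] = int(row[4])
        elif row[0] == "DELIVER_MESSAGE":
            # columns: type, peerID, receivedFrom, messageID, topic, timestamp(ns)
            msg_id, ts = row[3], int(row[5])
            if msg_id in published:
                latencies[msg_id] = (ts - published[msg_id]) / 1e6
    return latencies

rows = [
    ["PUBLISH_MESSAGE", "12D3KooW...", "z4aVidFb...", "test-topic", "1764229885872324880"],
    ["DELIVER_MESSAGE", "12D3KooW...", "12D3KooW...", "z4aVidFb...", "test-topic", "1764229885872574964"],
]
print(delivery_latency_ms(rows))  # ~0.25 ms for the sample rows
```

Feeding it a real trace TSV (split each line on tabs) gives a per-message latency distribution across the subscribed nodes.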
Use p2p-client (single-node) when:
- Testing basic functionality
- Developing or debugging
- Simple publish/subscribe operations
- Working with a single node
Use p2p-multi-publish when:
- Stress testing the network
- Generating load across multiple nodes
- Testing message propagation
- Distributed publishing scenarios
Use p2p-multi-subscribe when:
- Monitoring multiple nodes simultaneously
- Collecting trace data for analysis
- Testing message delivery across nodes
- Performance benchmarking
curl http://localhost:9091/api/v1/health

Response:
{
"status": "ok",
"mode": "optimum",
"cpu_used": "0.29",
"memory_used": "6.70",
"disk_used": "84.05",
"country": "United States",
"country_iso": "US"
}

curl http://localhost:9091/api/v1/node-state

Response:
{
"pub_key": "12D3KooWMwzQYKhRvLRsiKignZ2vAtkr1nPYiDWA7yeZvRqC9ST9",
"country": "United States",
"country_iso": "US",
"peers": [
"12D3KooWDLm7bSFnoqP4mhoJminiCixbR2Lwyqu9sU5EDKVvXM5j",
"12D3KooWJrPmTdXj9hirigHs88BHe6DApLpdXiKrwF1V8tNq9KP7",
"12D3KooWAykKBmimGzgFCC6EL3He3tTqcy2nbVLGCa1ENrG2x5QP"
],
"addresses": ["/ip4/172.28.0.12/tcp/7070"],
"topics": []
}

curl http://localhost:9091/api/v1/version

Response:
{
"version": "v0.0.1-rc16",
"commit_hash": "8f3057d"
}

curl http://localhost:8081/api/v1/node-countries

Returns the same geolocation metadata but aggregated across all nodes that the proxy currently manages.
The gRPC P2P client includes built-in trace collection functionality that automatically parses and displays trace events from both GossipSub and mump2p protocols. This helps monitor message delivery performance and understand RLNC-enhanced shard behavior.
When you subscribe to a topic, the client automatically receives and parses trace events:
- GossipSub traces: Traditional pubsub delivery events with structured JSON output
- mump2p traces: RLNC-enhanced shard delivery events with detailed shard information
# Subscribe to a topic and collect trace data
./grpc_p2p_client/p2p-client -mode=subscribe -topic=your-topic --addr=127.0.0.1:33221

You'll see structured trace output like:
Subscribed to topic "your-topic", waiting for messages…
[TRACE] mump2p type=JOIN ts=2025-09-11T15:58:04.746971127+05:30 size=66B
[TRACE] mump2p JSON (136B): {"type":9,"peerID":"ACQIARIgJUuLFt9bycr0mdXiMdJ1bQ8Nuxs2Y8NtQwPrXEVCuKM=","timestamp":1757586484746971127,"join":{"topic":"trace-test"}}
[TRACE] mump2p type=SEND_RPC ts=2025-09-11T15:58:04.73762546+05:30 size=114B
[TRACE] mump2p JSON (260B): {"type":7,"peerID":"ACQIARIgJUuLFt9bycr0mdXiMdJ1bQ8Nuxs2Y8NtQwPrXEVCuKM=","timestamp":1757586484746035127,"sendRPC":{"sendTo":"ACQIARIg46ViPpa30cOyFCgRdiW+TS/qpMkuXQsKK0w+5svzqk8=","meta":{"subscription":[{"subscribe":true,"topic":"trace-test"}]},"length":16}}
[TRACE] mump2p type=GRAFT ts=2025-09-11T15:58:28.517443638+05:30 size=106B
[TRACE] mump2p JSON (202B): {"type":11,"peerID":"ACQIARIg46ViPpa30cOyFCgRdiW+TS/qpMkuXQsKK0w+5svzqk8=","timestamp":1757586508517443638,"graft":{"peerID":"ACQIARIgJUuLFt9bycr0mdXiMdJ1bQ8Nuxs2Y8NtQwPrXEVCuKM=","topic":"trace-test"}}
[1] Received message: "Hello World"
Note: Trace events are primarily available when connecting to local Docker P2P nodes. Initial connection generates JOIN, SEND_RPC, and GRAFT events. During message flow, you'll see rich RLNC shard events (NEW_SHARD, RECV_RPC, UNNECESSARY_SHARD) that show the protocol's coding behavior. Remote nodes may not generate trace events.
The client recognizes these mump2p trace events (observed in practice):
Common Events:
- JOIN: Node joins a topic (type=9)
- SEND_RPC: Sends RPC messages to peers (type=7)
- GRAFT: Establishes mesh connections for topic (type=11)
Shard Events (when RLNC is active):
- NEW_SHARD: New RLNC shard created with message ID and coefficients (type=16)
- DUPLICATE_SHARD: Duplicate shard detected (type=13)
- UNHELPFUL_SHARD: Shard that doesn't help decode (type=14)
- UNNECESSARY_SHARD: Shard that's not needed for decoding (type=15)
Other Events:
- PUBLISH_MESSAGE: Message published to topic (type=0)
- DELIVER_MESSAGE: Message delivered to subscriber (type=3)
- ADD_PEER/REMOVE_PEER: Peer connection events (type=4/5)
- RECV_RPC: Receives RPC messages from peers (type=6)
- LEAVE: Node leaves a topic (type=10)
- PRUNE: Removes mesh connections (type=12)
The trace parsing is implemented in `grpc_p2p_client/shared/utils.go`:

```go
func HandleGossipSubTrace(data []byte, writeTrace bool, traceCh chan<- string) {
	evt := &pubsubpb.TraceEvent{}
	if err := proto.Unmarshal(data, evt); err != nil {
		fmt.Printf("[TRACE] GossipSub decode error: %v raw=%dB head=%s\n",
			err, len(data), HeadHex(data, 64))
		return
	}
	// Parses and formats trace events, optionally writing to the traceCh channel
	// ...
}

func HandleOptimumP2PTrace(data []byte, writeTrace bool, traceCh chan<- string) {
	evt := &optsub.TraceEvent{}
	if err := proto.Unmarshal(data, evt); err != nil {
		fmt.Printf("[TRACE] mump2p decode error: %v\n", err)
		return
	}
	// Parses mump2p trace events, including shard information
	// Supports writing to the traceCh channel for file output
	// ...
}
```

These functions are used by the multi-subscribe client to parse and optionally write trace events to output files. The trace data includes message delivery events, shard information, and peer connection details.
For development, authentication is disabled by default. To enable Auth0 JWT authentication, set the environment variable:

```yaml
# docker-compose-optimum.yml
environment:
  ENABLE_AUTH: "true"   # defaults to "false" for development
```

Configure per-client rate limits via JWT claims:
```json
{
  "max_publish_per_hour": 1000,
  "max_publish_per_sec": 8,
  "max_message_size": 4194304,
  "daily_quota": 5368709120
}
```

Key environment variables for P2P nodes:
```yaml
environment:
  NODE_MODE: "optimum"              # Protocol mode
  OPTIMUM_SHARD_FACTOR: "4"         # Shards per message
  OPTIMUM_THRESHOLD: "0.75"         # Shard threshold (75%)
  OPTIMUM_MESH_TARGET: "6"          # Target peer connections
  OPTIMUM_MAX_MSG_SIZE: "1048576"   # Max message size (1 MB)
```

Key environment variables for proxies:
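To make the shard settings concrete: assuming `OPTIMUM_THRESHOLD` is the fraction of a message's coded shards that must arrive before decoding (an interpretation based on the config comments, not the node implementation), the defaults above mean 3 of 4 shards per message.

```go
package main

import (
	"fmt"
	"math"
)

// shardsNeeded returns how many of shardFactor coded shards must arrive
// before a message can be decoded, given a fractional threshold.
// Interpreting OPTIMUM_THRESHOLD this way is an assumption for
// illustration, not a statement about the node's internals.
func shardsNeeded(shardFactor int, threshold float64) int {
	return int(math.Ceil(float64(shardFactor) * threshold))
}

func main() {
	// Defaults above: OPTIMUM_SHARD_FACTOR=4, OPTIMUM_THRESHOLD=0.75
	fmt.Println(shardsNeeded(4, 0.75)) // 3
}
```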
```yaml
environment:
  P2P_NODES: "p2pnode-1:33212,p2pnode-2:33212,p2pnode-3:33212,p2pnode-4:33212"
  SUBSCRIBER_THRESHOLD: "0.1"   # Default threshold
  LOG_LEVEL: "info"             # Logging level
```

Access metrics at the `/metrics` endpoint:
```shell
curl http://localhost:8081/metrics
```

The P2P client includes built-in trace collection for performance analysis:
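The endpoint serves Prometheus text format, so individual samples can be extracted programmatically. Below is a minimal sketch; the metric name `proxy_messages_total` is hypothetical, used only to show the parsing pattern.

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// metricValue scans Prometheus text-format output for the first sample
// of the named metric and returns its value. Comment lines (starting
// with '#') are skipped; the value is the last whitespace-separated field.
func metricValue(body, name string) (float64, bool) {
	sc := bufio.NewScanner(strings.NewReader(body))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "#") || !strings.HasPrefix(line, name) {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue
		}
		if v, err := strconv.ParseFloat(fields[len(fields)-1], 64); err == nil {
			return v, true
		}
	}
	return 0, false
}

func main() {
	// Hypothetical excerpt of what the /metrics endpoint might return.
	body := "# HELP proxy_messages_total Messages handled\nproxy_messages_total 42\n"
	if v, ok := metricValue(body, "proxy_messages_total"); ok {
		fmt.Println(v) // 42
	}
}
```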
```shell
./grpc_p2p_client/p2p-client -mode=subscribe -topic=your-topic --addr=127.0.0.1:33221
```

Output includes:
```
[TRACE] mump2p type=JOIN ts=2025-09-11T15:58:04.746971127+05:30 size=66B
[TRACE] mump2p type=SEND_RPC ts=2025-09-11T15:58:04.73762546+05:30 size=114B
[TRACE] mump2p type=GRAFT ts=2025-09-11T15:58:28.517443638+05:30 size=106B
Recv message: [1] [1757579641382484000 126] [1757579641203739000 100] Hello World
```
Note: Trace events appear during initial connection setup (JOIN, SEND_RPC, GRAFT) and continue during message flow with rich RLNC shard events (NEW_SHARD, RECV_RPC, UNNECESSARY_SHARD).
**Problem:** `docker-compose -f docker-compose-optimum.yml up` fails with identity errors

**Solution:**

```shell
# Regenerate identity
rm -rf identity/
./script/generate-identity.sh

# Set environment variable
export BOOTSTRAP_PEER_ID=<generated-peer-id>
docker-compose -f docker-compose-optimum.yml up --build -d
```

**Problem:** `/api/v1/subscribe` returns "Cannot POST"
**Solution:**

```shell
# Check if services are using the latest images
docker-compose -f docker-compose-optimum.yml down
docker system prune -f
docker-compose -f docker-compose-optimum.yml up --build -d
```

**Problem:** Nodes show empty peer lists
**Solution:**

```shell
# Verify bootstrap configuration
curl http://localhost:9091/api/v1/node-state

# Check logs
docker logs optimum-dev-setup-guide-p2pnode-1-1
```

**Problem:** JWT token rejection
**Solution:**

```shell
# Verify Auth0 configuration
# Check token format and claims
# Ensure Auth0 domain/audience match configuration
```

Performance tuning tips:

- Use gRPC clients instead of REST
- Increase the shard factor for better redundancy
- Tune mesh parameters for network size
- Reduce threshold values (0.1-0.3)
- Use direct P2P connections
- Optimize network topology
Check service logs for detailed debugging:

```shell
# Proxy logs
docker logs optimum-dev-setup-guide-proxy-1-1

# P2P node logs
docker logs optimum-dev-setup-guide-p2pnode-1-1

# All services
docker-compose -f docker-compose-optimum.yml logs -f
```

For production-ready CLI interaction, see:
- `mump2p-cli` - Full-featured CLI client with JWT authentication and rate limiting
Client implementations available in this repository:
- `grpc_proxy_client/` - Go gRPC proxy client (`proxy_client.go`)
- `grpc_p2p_client/` - Go P2P direct clients:
  - `cmd/single/` - Single-node client (`p2p-client`)
  - `cmd/multi-publish/` - Multi-node publisher (`p2p-multi-publish`)
  - `cmd/multi-subscribe/` - Multi-node subscriber (`p2p-multi-subscribe`)
  - `shared/` - Shared types and utilities
- `scripts/` - Shell script wrappers
This development setup provides a complete mump2p environment for testing, integration, and development.