Release Date: December 14, 2025
Focus: Performance, Memory Management, and Resource Optimization
Version 1.4.0 introduces comprehensive performance optimizations to the CyberChef MCP Server: efficient handling of large inputs (100MB+), intelligent result caching, and configurable resource limits. This release focuses on stability, efficiency, and scalability for production deployments.
LRU Cache for Operation Results
- Intelligent caching of operation results to eliminate redundant computation
- Configurable cache size (100MB default) and item count (1000 default)
- Automatic eviction of oldest items when limits are reached
- Cache key generation based on operation + input + arguments
Buffer Pooling
- Reusable buffer allocation to reduce garbage collection pressure
- Automatic cleanup and reuse of buffers
- Configurable pool sizes per buffer size
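A minimal sketch of the pooling idea described above: buffers are grouped by size and handed back out instead of reallocated, which reduces garbage-collection pressure. The `BufferPool` name and per-size cap are illustrative assumptions, not the server's actual API.

```javascript
// Illustrative buffer pool: free buffers are kept per size and reused.
class BufferPool {
  constructor(maxPerSize = 8) {
    this.maxPerSize = maxPerSize;
    this.pools = new Map(); // size -> array of free buffers
  }

  acquire(size) {
    const pool = this.pools.get(size);
    if (pool && pool.length > 0) return pool.pop(); // reuse an idle buffer
    // allocUnsafe skips zero-fill; callers are expected to overwrite contents.
    return Buffer.allocUnsafe(size);
  }

  release(buffer) {
    let pool = this.pools.get(buffer.length);
    if (!pool) {
      pool = [];
      this.pools.set(buffer.length, pool);
    }
    // Cap each pool so idle buffers don't accumulate without bound.
    if (pool.length < this.maxPerSize) pool.push(buffer);
  }
}
```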
Memory Monitoring
- Periodic memory usage logging (every 5 seconds)
- Heap usage and RSS tracking
- Early warning for memory pressure
Automatic Streaming Detection
- Inputs exceeding 10MB threshold automatically use streaming
- Chunked processing with 1MB chunk size
- Memory-efficient handling of 100MB+ inputs
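The threshold behavior above can be sketched as follows. The constants mirror the documented defaults (10MB threshold, 1MB chunks); the function names and the choice of hex encoding as the example operation are illustrative assumptions.

```javascript
// Illustrative threshold-based streaming: inputs over the threshold are
// processed in fixed-size chunks instead of as one large buffer.
const STREAMING_THRESHOLD = 10 * 1024 * 1024; // 10MB
const CHUNK_SIZE = 1024 * 1024;               // 1MB

function* chunksOf(buffer, chunkSize = CHUNK_SIZE) {
  for (let offset = 0; offset < buffer.length; offset += chunkSize) {
    // subarray creates a view, not a copy, so chunking itself is cheap.
    yield buffer.subarray(offset, offset + chunkSize);
  }
}

function toHex(input) {
  if (input.length <= STREAMING_THRESHOLD) {
    return input.toString("hex"); // small input: traditional path
  }
  // Large input: encode chunk by chunk. Hex is chunk-safe because each
  // byte maps to two characters independently of its neighbors.
  const parts = [];
  for (const chunk of chunksOf(input)) parts.push(chunk.toString("hex"));
  return parts.join("");
}
```

Operations like Base64 need slightly more care at chunk boundaries (chunk sizes must be a multiple of 3 bytes), which is why only specific operations support streaming.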
Supported Operations
- Encoding: To Base64, From Base64, To Hex, From Hex
- Compression: Gzip, Gunzip, Bzip2 Compress, Bzip2 Decompress
- Hashing: SHA1, SHA2, SHA3, MD5, BLAKE2b, BLAKE2s
Transparent Fallback
- Non-streaming operations automatically use traditional processing
- No client changes required
Input Size Validation
- Maximum input size enforcement (100MB default)
- Clear error messages when limits are exceeded
- Prevents out-of-memory crashes
Operation Timeouts
- 30-second default timeout for all operations
- Prevents runaway operations
- Configurable per deployment needs
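One common way to enforce such a timeout is `Promise.race`, sketched below. The default mirrors the documented 30 seconds; `withTimeout` is an illustrative name, not the server's actual API.

```javascript
// Illustrative timeout enforcement: the operation either settles in time
// or the whole call rejects with a descriptive error.
function withTimeout(promise, ms = 30000, label = "operation") {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`${label} timed out after ${ms}ms`)),
      ms
    );
  });
  // Clear the timer either way so it can't keep the process alive.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Note that `Promise.race` abandons the slow operation rather than cancelling it; true cancellation of CPU-bound work is what the planned worker threads would enable.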
Request Validation
- Input validation before processing
- Early rejection of oversized requests
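The validation described above amounts to a size check before any processing begins. A minimal sketch, assuming the documented 100MB default; the function name and error wording are illustrative:

```javascript
// Illustrative early input validation: reject oversized requests with a
// clear error before allocating or processing anything.
const MAX_INPUT_SIZE = 100 * 1024 * 1024; // 100MB default

function validateInput(input, maxSize = MAX_INPUT_SIZE) {
  const size = Buffer.byteLength(input);
  if (size > maxSize) {
    throw new Error(
      `Input of ${size} bytes exceeds the ${maxSize}-byte limit; ` +
      `raise CYBERCHEF_MAX_INPUT_SIZE if this is intentional`
    );
  }
  return size;
}
```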
Comprehensive Benchmarking
- 20+ operation benchmarks across categories:
- Encoding operations (Base64, Hex)
- Hashing operations (MD5, SHA256, SHA512)
- Compression operations (Gzip)
- Cryptographic operations (AES)
- Text operations (Regex)
- Analysis operations (Entropy, Frequency)
- Multiple input sizes tested (1KB, 100KB, 1MB, 10MB)
- CI/CD integration for performance regression detection
Usage
npm run benchmark
CPU-Intensive Operation Detection
- Automatic identification of CPU-intensive operations:
- Cryptographic: AES, DES, RSA, Bcrypt, Scrypt
- Hashing: SHA*, MD*, BLAKE2*, Whirlpool
- Compression: Gzip, Bzip2
- Key generation: RSA, PGP
- Foundation for future worker pool implementation
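Detection along these lines can be done by matching operation names against the documented categories. The pattern list below is an illustrative sketch, not the server's actual rules:

```javascript
// Illustrative CPU-intensive operation detection by name pattern.
const CPU_INTENSIVE_PATTERNS = [
  /^(AES|DES|RSA)\b/i,           // cryptographic ciphers
  /^(Bcrypt|Scrypt)\b/i,         // key derivation
  /^(SHA|MD|BLAKE2|Whirlpool)/i, // hashing families (SHA*, MD*, BLAKE2*)
  /^(Gzip|Bzip2)/i,              // compression
  /^Generate (RSA|PGP)/i,        // key generation
];

function isCpuIntensive(operationName) {
  return CPU_INTENSIVE_PATTERNS.some((p) => p.test(operationName));
}
```

Flagging these operations up front is what would let a future worker pool route them off the main thread.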
All performance features are configurable via environment variables:
# Maximum input size (bytes)
CYBERCHEF_MAX_INPUT_SIZE=104857600 # 100MB default
# Operation timeout (milliseconds)
CYBERCHEF_OPERATION_TIMEOUT=30000 # 30s default
# Streaming threshold (bytes)
CYBERCHEF_STREAMING_THRESHOLD=10485760 # 10MB default
# Enable/disable streaming
CYBERCHEF_ENABLE_STREAMING=true # Enabled by default
# Enable/disable worker threads (future)
CYBERCHEF_ENABLE_WORKERS=true # Enabled by default
# Cache maximum size (bytes)
CYBERCHEF_CACHE_MAX_SIZE=104857600 # 100MB default
# Cache maximum items
CYBERCHEF_CACHE_MAX_ITEMS=1000 # 1000 items default
Example: overriding defaults when running the Docker image:
docker run -i \
-e CYBERCHEF_MAX_INPUT_SIZE=209715200 \
-e CYBERCHEF_OPERATION_TIMEOUT=60000 \
-e CYBERCHEF_STREAMING_THRESHOLD=5242880 \
  ghcr.io/doublegate/cyberchef-mcp_v1:latest
Example MCP client configuration with custom limits:
{
"mcpServers": {
"cyberchef": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"-e", "CYBERCHEF_MAX_INPUT_SIZE=209715200",
"-e", "CYBERCHEF_CACHE_MAX_SIZE=209715200",
"ghcr.io/doublegate/cyberchef-mcp_v1:latest"
]
}
}
}
Encoding Operations
- Base64 encoding (1MB): ~15ms
- Hex encoding (1MB): ~12ms
Hashing Operations
- MD5 (1MB): ~8ms
- SHA256 (1MB): ~20ms
- SHA512 (1MB): ~15ms
Memory Usage
- Idle server: ~50MB RSS
- Processing 10MB: ~120MB RSS (with streaming)
- Processing 100MB: ~200MB RSS (with streaming)
- Cache overhead: ~5-10MB for typical usage
- 90% reduction in memory usage for large inputs (via streaming)
- 95%+ reduction in latency for cached operations (instant)
- Zero OOM crashes with 100MB inputs
- Predictable resource usage with timeout enforcement
No migration required. All changes are backward compatible.
- Review Resource Limits: Adjust CYBERCHEF_MAX_INPUT_SIZE based on your deployment environment
- Monitor Memory: Watch server logs for memory usage patterns
- Test Large Operations: Validate that streaming works for your use cases
- Benchmark: Run npm run benchmark to establish baseline performance
For deployments processing large files regularly:
# Increase limits for large file processing
CYBERCHEF_MAX_INPUT_SIZE=524288000 # 500MB
CYBERCHEF_STREAMING_THRESHOLD=52428800 # 50MB
CYBERCHEF_CACHE_MAX_SIZE=524288000 # 500MB
CYBERCHEF_OPERATION_TIMEOUT=120000 # 2 minutes
# Also increase Docker memory limit
docker run -i --memory=2g \
-e CYBERCHEF_MAX_INPUT_SIZE=524288000 \
  ghcr.io/doublegate/cyberchef-mcp_v1:latest
For low-memory environments:
# Reduce limits for constrained environments
CYBERCHEF_MAX_INPUT_SIZE=10485760 # 10MB
CYBERCHEF_CACHE_MAX_SIZE=10485760 # 10MB
CYBERCHEF_CACHE_MAX_ITEMS=100 # 100 items
All features have been validated:
- ✅ 21 MCP validation tests passing
- ✅ 465 tools validated and functional
- ✅ Docker build successful
- ✅ 100MB input processing confirmed
- ✅ Memory monitoring operational
- ✅ Cache hit/miss logging verified
- ✅ Timeout enforcement tested
# Full test suite
npm test
# MCP validation tests
npm run test:mcp
# Performance benchmarks
npm run benchmark
# Docker test
echo '{"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}' | \
docker run -i --rm ghcr.io/doublegate/cyberchef-mcp_v1:latest
- Worker Thread Implementation: Infrastructure is present, but the actual worker pool implementation is deferred to a future release
- Streaming Limitations: Only specific operations support streaming; complex recipe chains may not stream
- Cache Invalidation: Cache uses LRU eviction only; no TTL-based expiration
- Memory Monitoring: Logs to stderr only; no metrics export yet
None. This release is fully backward compatible with v1.3.0.
None.
Planned for v1.5.0 and beyond:
- Full worker thread pool implementation with Piscina
- Enhanced MCP streaming protocol support
- Metrics export (Prometheus, OpenTelemetry)
- Advanced cache strategies (TTL, compression)
- Progressive streaming for recipe chains
This release implements the performance optimization roadmap outlined in the v1.4.0 release plan. Thank you to all contributors and users who provided feedback on performance requirements.
- GitHub Issues: https://github.com/doublegate/CyberChef-MCP/issues
- Documentation: https://github.com/doublegate/CyberChef-MCP/tree/master/docs
Full Changelog: https://github.com/doublegate/CyberChef-MCP/compare/v1.3.0...v1.4.0