Version: 3.0
Last Updated: 2025-01-25
Current Status: v1.0.0 Production Release - 2,557 tests, 51.40% coverage
- Overview
- Testing Philosophy
- Test Levels
- Test Infrastructure
- Test Coverage
- Continuous Integration
- Testing Checklist
Comprehensive testing is critical for ProRT-IP WarScan due to:
- Security implications: Bugs could enable network attacks or scanner exploitation
- Cross-platform complexity: Must work correctly on Linux, Windows, macOS
- Performance requirements: Must maintain 1M+ pps without degradation
- Protocol correctness: Malformed packets lead to inaccurate results
The test suite targets five goals:
- Correctness: All scanning modes produce accurate, repeatable results
- Safety: No memory leaks, data races, or undefined behavior
- Performance: Maintain throughput and latency targets under load
- Reliability: Graceful handling of network errors and edge cases
- Security: Resist malformed packets and DoS attacks
| Component | Target Coverage | Current Coverage (v1.0.0) |
|---|---|---|
| Core Engine | >90% | Achieved |
| Network Protocol | >85% | Achieved |
| Scanning Modules | >80% | Achieved |
| Detection Systems | >75% | Achieved |
| CLI/UI | >60% | Achieved |
| Overall | >50% | 51.40% ✅ |
Current Metrics (v1.0.0):
- Total Tests: 2,557 (100% passing)
- Line Coverage: 51.40%
- Integration Tests: 175+ tests
- Fuzz Testing: 230M+ executions, 0 crashes (5 targets)
- Test Growth: +2,342 tests from Phase 1 (+1,089% growth)
- Coverage Tool: cargo-tarpaulin with ptrace engine
- Benchmarks: Criterion.rs + hyperfine with comprehensive baselines
For critical components (packet crafting, state machines, detection engines), write tests before implementation:
```rust
// Step 1: Write failing test
#[test]
fn test_tcp_syn_packet_crafting() {
    let packet = TcpPacketBuilder::new()
        .source(Ipv4Addr::new(10, 0, 0, 1), 12345)
        .destination(Ipv4Addr::new(10, 0, 0, 2), 80)
        .flags(TcpFlags::SYN)
        .build()
        .expect("packet building failed");
    assert_eq!(packet.get_flags(), TcpFlags::SYN);
    assert!(verify_tcp_checksum(&packet));
}

// Step 2: Implement feature to make test pass
// Step 3: Refactor while keeping test green
```

Use proptest or quickcheck to generate random inputs and verify invariants:
```rust
use proptest::prelude::*;

proptest! {
    #[test]
    fn tcp_checksum_always_valid(
        src_ip: u32,
        dst_ip: u32,
        src_port: u16,
        dst_port: u16,
        seq: u32,
    ) {
        let packet = build_tcp_packet(src_ip, dst_ip, src_port, dst_port, seq);
        prop_assert!(verify_tcp_checksum(&packet));
    }
}
```

Every bug fix must include a test that would have caught the bug:
```rust
// Regression test for issue #42: SYN+ACK responses with window=0 incorrectly marked closed
#[test]
fn test_issue_42_zero_window_syn_ack() {
    let response = create_syn_ack_response(0); // window_size = 0
    let state = determine_port_state(&response);
    assert_eq!(state, PortState::Open); // Was incorrectly Closed before fix
}
```

Periodically run mutation testing to verify test quality:
```bash
# Install cargo-mutants
cargo install cargo-mutants

# Run mutation tests
cargo mutants

# Should achieve >90% mutation score on core modules
```

Scope: Individual functions and structs in isolation
Location: Inline with source code in #[cfg(test)] modules
Examples:
```rust
// src/net/tcp.rs
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_tcp_flags_parsing() {
        let flags = TcpFlags::from_bits(0x02).unwrap();
        assert_eq!(flags, TcpFlags::SYN);
    }

    #[test]
    fn test_sequence_number_wrapping() {
        let seq = SequenceNumber::new(0xFFFF_FFFE);
        let next = seq.wrapping_add(5);
        assert_eq!(next.value(), 3); // Wraps around at u32::MAX
    }

    #[test]
    fn test_tcp_option_serialization() {
        let opt = TcpOption::Mss(1460);
        let bytes = opt.to_bytes();
        assert_eq!(bytes, vec![2, 4, 0x05, 0xB4]);
    }
}
```

Run Commands:
```bash
# All unit tests
cargo test --lib

# Specific module
cargo test tcp::tests

# With output
cargo test -- --nocapture
```

Scope: Multiple components working together
Location: tests/ directory (separate from source)
Examples:
```rust
// tests/integration_syn_scan.rs
use prtip::scanner::{Scanner, ScanConfig, ScanType};
use prtip::target::Target;

#[tokio::test]
async fn test_syn_scan_local_host() {
    // Setup: Start local test server on port 8080
    let server = spawn_test_server(8080).await;

    // Execute scan
    let config = ScanConfig {
        scan_type: ScanType::Syn,
        targets: vec![Target::single("127.0.0.1", 8080)],
        timeout: Duration::from_secs(5),
        ..Default::default()
    };
    let scanner = Scanner::new(config).unwrap();
    let results = scanner.execute().await.unwrap();

    // Verify
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].state, PortState::Open);
    assert_eq!(results[0].port, 8080);

    // Cleanup
    server.shutdown().await;
}

#[tokio::test]
async fn test_syn_scan_filtered_port() {
    // Port 9999 should be filtered (no response, no RST)
    let config = ScanConfig {
        scan_type: ScanType::Syn,
        targets: vec![Target::single("127.0.0.1", 9999)],
        timeout: Duration::from_millis(100),
        max_retries: 1,
        ..Default::default()
    };
    let scanner = Scanner::new(config).unwrap();
    let results = scanner.execute().await.unwrap();
    assert_eq!(results[0].state, PortState::Filtered);
}
```

Run Commands:
```bash
# All integration tests
cargo test --test '*'

# Specific test file
cargo test --test integration_syn_scan

# Single test
cargo test --test integration_syn_scan test_syn_scan_local_host
```

Scope: End-to-end scenarios mimicking real-world usage
Location: tests/system/ with helper scripts
Examples:
```bash
#!/bin/bash
# tests/system/test_full_network_scan.sh
set -e

# Setup test network (requires Docker)
docker-compose -f tests/fixtures/docker-compose.yml up -d

# Wait for services to start
sleep 5

# Run full scan
cargo run --release -- \
  -sS -sV -O \
  -p- \
  --output json \
  --output-file /tmp/scan_results.json \
  172.20.0.0/24

# Verify expected services found
python3 tests/system/verify_results.py /tmp/scan_results.json

# Cleanup
docker-compose -f tests/fixtures/docker-compose.yml down

echo "✓ System test passed"
```

Run Commands:
```bash
bash tests/system/test_full_network_scan.sh
```

Scope: Throughput, latency, resource usage benchmarks
Location: benches/ directory using Criterion.rs
Examples:
```rust
// benches/packet_crafting.rs
use std::net::Ipv4Addr;

use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};
use prtip::net::TcpPacketBuilder;

fn bench_tcp_packet_building(c: &mut Criterion) {
    c.bench_function("tcp_syn_packet", |b| {
        b.iter(|| {
            TcpPacketBuilder::new()
                .source(black_box(Ipv4Addr::new(10, 0, 0, 1)), black_box(12345))
                .destination(black_box(Ipv4Addr::new(10, 0, 0, 2)), black_box(80))
                .flags(TcpFlags::SYN)
                .build()
                .unwrap()
        });
    });
}

fn bench_throughput(c: &mut Criterion) {
    let mut group = c.benchmark_group("throughput");
    for rate in [10_000, 100_000, 1_000_000] {
        group.bench_with_input(BenchmarkId::new("packets_per_sec", rate), &rate, |b, &rate| {
            b.iter(|| {
                // Simulate sending 'rate' packets
                simulate_packet_transmission(black_box(rate))
            });
        });
    }
    group.finish();
}

criterion_group!(benches, bench_tcp_packet_building, bench_throughput);
criterion_main!(benches);
```

Run Commands:
```bash
# Run all benchmarks
cargo bench

# Specific benchmark
cargo bench tcp_packet

# With profiling (Linux)
cargo bench --bench packet_crafting -- --profile-time=5
```

Scope: Malformed input handling and crash resistance
Location: fuzz/ directory using cargo-fuzz
Setup:
```bash
cargo install cargo-fuzz
cargo fuzz init
```

Examples:
```rust
// fuzz/fuzz_targets/tcp_parser.rs
#![no_main]
use libfuzzer_sys::fuzz_target;
use prtip::net::parse_tcp_packet;

fuzz_target!(|data: &[u8]| {
    // Should never panic, even with arbitrary input
    let _ = parse_tcp_packet(data);
});
```

Run Commands:
```bash
# Fuzz TCP parser (runs indefinitely until crash)
cargo fuzz run tcp_parser

# Run for specific duration
cargo fuzz run tcp_parser -- -max_total_time=300  # 5 minutes

# Run with corpus
cargo fuzz run tcp_parser fuzz/corpus/tcp_parser/
```

Docker test environment (tests/fixtures/docker-compose.yml):
```yaml
# tests/fixtures/docker-compose.yml
version: '3.8'
services:
  web-server:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    networks:
      testnet:
        ipv4_address: 172.20.0.10
  ssh-server:
    image: linuxserver/openssh-server
    environment:
      - PUID=1000
      - PGID=1000
      - PASSWORD_ACCESS=true
      - USER_PASSWORD=testpass
    ports:
      - "2222:2222"
    networks:
      testnet:
        ipv4_address: 172.20.0.11
  ftp-server:
    image: delfer/alpine-ftp-server
    environment:
      - USERS=testuser|testpass
    ports:
      - "21:21"
    networks:
      testnet:
        ipv4_address: 172.20.0.12
  database:
    image: postgres:15-alpine
    environment:
      - POSTGRES_PASSWORD=testpass
    ports:
      - "5432:5432"
    networks:
      testnet:
        ipv4_address: 172.20.0.13
networks:
  testnet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
```

Usage:
```bash
# Start test environment
docker-compose -f tests/fixtures/docker-compose.yml up -d

# Run tests
cargo test --test integration_service_detection

# Cleanup
docker-compose -f tests/fixtures/docker-compose.yml down
```

Mock TCP server helper (tests/helpers/mock_server.rs):
```rust
// tests/helpers/mock_server.rs
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpListener;
use tokio::task::JoinHandle;

/// Spawn a TCP server that responds with custom behavior
pub async fn spawn_mock_tcp_server(
    port: u16,
    response_handler: impl Fn(&[u8]) -> Vec<u8> + Send + 'static,
) -> MockServer {
    let listener = TcpListener::bind(format!("127.0.0.1:{}", port))
        .await
        .unwrap();
    let handle = tokio::spawn(async move {
        while let Ok((mut socket, _)) = listener.accept().await {
            let mut buf = vec![0u8; 1024];
            if let Ok(n) = socket.read(&mut buf).await {
                let response = response_handler(&buf[..n]);
                socket.write_all(&response).await.ok();
            }
        }
    });
    MockServer { handle, port }
}

pub struct MockServer {
    handle: JoinHandle<()>,
    port: u16,
}

impl MockServer {
    pub async fn shutdown(self) {
        self.handle.abort();
    }
}
```
```rust
// tests/fixtures/mod.rs
pub mod pcap_samples {
    /// Load PCAP file for replay testing
    pub fn load_syn_scan_capture() -> Vec<u8> {
        include_bytes!("pcaps/syn_scan.pcap").to_vec()
    }

    pub fn load_os_fingerprint_capture() -> Vec<u8> {
        include_bytes!("pcaps/os_fingerprint.pcap").to_vec()
    }
}

pub mod fingerprints {
    /// Sample OS fingerprint database for testing
    pub fn test_fingerprints() -> Vec<OsFingerprint> {
        vec![
            OsFingerprint {
                name: "Linux 5.x",
                signature: "...",
                // ...
            },
            OsFingerprint {
                name: "Windows 10",
                signature: "...",
                // ...
            },
        ]
    }
}
```
```bash
# Install tarpaulin (Linux only)
cargo install cargo-tarpaulin

# Generate HTML coverage report
cargo tarpaulin --out Html --output-dir coverage

# View report
firefox coverage/index.html

# CI mode (exit with error if below threshold)
cargo tarpaulin --fail-under 80
```
Coverage priorities:

- Core modules (>90% coverage): packet crafting, checksum calculation, state machines
- Medium-priority modules (>80% coverage): scanning algorithms, rate limiting, result aggregation
- Lower-priority modules (>60% coverage): CLI parsing, output formatters, TUI components

CI workflow (.github/workflows/ci.yml):
```yaml
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
        rust: [stable, beta]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v3
      - name: Install Rust
        uses: actions-rust-lang/setup-rust-toolchain@v1
        with:
          toolchain: ${{ matrix.rust }}
      - name: Install dependencies (Linux)
        if: runner.os == 'Linux'
        run: |
          sudo apt-get update
          sudo apt-get install -y libpcap-dev libssl-dev
      - name: Install dependencies (macOS)
        if: runner.os == 'macOS'
        run: brew install libpcap openssl@3
      - name: Install dependencies (Windows)
        if: runner.os == 'Windows'
        run: |
          choco install npcap
          # Download Npcap SDK
      - name: Check formatting
        run: cargo fmt --check
      - name: Lint
        run: cargo clippy -- -D warnings
      - name: Build
        run: cargo build --verbose
      - name: Run tests
        run: cargo test --verbose
      - name: Run integration tests
        run: cargo test --test '*' --verbose
  coverage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions-rust-lang/setup-rust-toolchain@v1
      - name: Install dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y libpcap-dev libssl-dev
          cargo install cargo-tarpaulin
      - name: Generate coverage
        run: cargo tarpaulin --out Xml
      - name: Upload to codecov
        uses: codecov/codecov-action@v3
        with:
          files: ./cobertura.xml
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Security audit
        run: |
          cargo install cargo-audit
          cargo audit
```

Testing checklist:

- Code passes `cargo fmt --check`
- Code passes `cargo clippy -- -D warnings`
- All unit tests pass (`cargo test --lib`)
- New code has accompanying tests
- Coverage hasn't decreased
- All tests pass on all platforms (CI green)
- Integration tests pass (`cargo test --test '*'`)
- Benchmarks show no regression (`cargo bench`)
- Documentation updated for new features
- Changelog updated
- Full system tests pass
- Security audit clean (`cargo audit`)
- Fuzz testing run for 24+ hours without crashes
- Performance benchmarks meet targets
- Cross-platform testing complete
- Memory leak testing clean (valgrind)
- Release notes written
```rust
// BAD: Race condition in test
#[tokio::test]
async fn flaky_test() {
    spawn_server().await;
    // No wait for server to be ready!
    let client = connect().await.unwrap(); // May fail randomly
}

// GOOD: Deterministic test
#[tokio::test]
async fn reliable_test() {
    let server = spawn_server().await;
    server.wait_until_ready().await;
    let client = connect().await.unwrap();
}
```

```rust
// BAD: Depends on external file
#[test]
fn test_config_loading() {
    let config = load_config("/etc/prtip/config.toml"); // Fails in CI
}

// GOOD: Use fixtures
#[test]
fn test_config_loading() {
    let config = load_config("tests/fixtures/test_config.toml");
}
```

```rust
// BAD: No verification
#[test]
fn test_scan() {
    let scanner = Scanner::new();
    scanner.scan("192.168.1.1").unwrap();
    // Test passes even if scan did nothing!
}

// GOOD: Verify behavior
#[test]
fn test_scan() {
    let scanner = Scanner::new();
    let results = scanner.scan("192.168.1.1").unwrap();
    assert!(!results.is_empty());
    assert_eq!(results[0].ip, "192.168.1.1");
}
```

Added 122 tests validating error handling infrastructure across 6 categories:
| Category | Tests | Coverage | Key Features |
|---|---|---|---|
| Error Injection | 22 | Framework | 11 failure modes, deterministic simulation |
| Circuit Breaker | 18 | 90%+ | State transitions, per-target isolation |
| Retry Logic | 14 | 85%+ | Exponential backoff, transient detection |
| Resource Monitor | 15 | 80%+ | Memory/CPU thresholds, adaptive degradation |
| Error Messages | 20 | User-facing | Clarity, recovery suggestions, context |
| Integration | 15 | End-to-end | CLI scenarios, exit codes, input validation |
| Edge Cases | 18 | Boundaries | Port/CIDR/timeout/parallelism limits |
Error Injection Framework (crates/prtip-scanner/tests/common/error_injection.rs):
- 11 failure modes (ConnectionRefused, Timeout, NetworkUnreachable, etc.)
- Deterministic simulation with attempt tracking
- Retriability classification (transient vs permanent)
- Helper methods for scanner error conversion
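The transient-vs-permanent split can be sketched as a match over `std::io::ErrorKind`. This is an illustrative classification only; the framework's actual mapping (and its `ScannerError` types) may differ:

```rust
use std::io::ErrorKind;

/// Whether a connection error is worth retrying.
/// Illustrative only; the real classification table may differ.
fn is_retriable(kind: ErrorKind) -> bool {
    match kind {
        // Transient: the target may respond on a later attempt
        ErrorKind::TimedOut
        | ErrorKind::WouldBlock
        | ErrorKind::Interrupted
        | ErrorKind::ConnectionReset
        | ErrorKind::ConnectionAborted => true,
        // Permanent: retrying the identical probe will not help
        // (e.g. refused = port is definitively closed)
        ErrorKind::ConnectionRefused | ErrorKind::InvalidData => false,
        // Conservative default for unknown kinds
        _ => false,
    }
}

fn main() {
    assert!(is_retriable(ErrorKind::TimedOut));
    assert!(is_retriable(ErrorKind::ConnectionReset));
    assert!(!is_retriable(ErrorKind::ConnectionRefused));
}
```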
Usage Example:
```rust
use common::error_injection::{ErrorInjector, FailureMode};

let injector = ErrorInjector::new(
    target_addr,
    FailureMode::Timeout(Duration::from_millis(100)),
);

// Simulate failure
let result = injector.inject();
assert!(matches!(result, Err(ScannerError::Timeout(_))));
```

Run Commands:

```bash
# All error handling tests
cargo test --workspace | grep -E "(error|circuit|retry|resource)"

# Specific categories
cargo test -p prtip-scanner test_error_injection
cargo test -p prtip-core test_circuit_breaker
cargo test -p prtip-core test_retry
cargo test -p prtip-core test_resource_monitor
cargo test -p prtip-cli test_error_messages
cargo test -p prtip-cli test_error_integration
cargo test -p prtip-cli test_edge_cases
```

Error handling infrastructure overhead: <5% (acceptable)
- Retry logic: ~1-2% overhead (exponential backoff calculation)
- Circuit breaker: ~1% overhead (HashMap lookup + atomic operations)
- Resource monitor: ~1-2% overhead (periodic system checks)
- Total: 3-5% combined overhead
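The backoff arithmetic behind the retry overhead is cheap: one multiply and one comparison per attempt. A minimal sketch of capped exponential backoff (the function name and cap values here are illustrative, not the actual prtip-core API):

```rust
use std::time::Duration;

/// Capped exponential backoff: base * 2^attempt, bounded by `max`.
/// Illustrative sketch only; the real retry policy may add jitter.
fn backoff_delay(base: Duration, attempt: u32, max: Duration) -> Duration {
    // Saturating ops avoid overflow for large attempt counts
    let exp = base.saturating_mul(2u32.saturating_pow(attempt));
    exp.min(max)
}

fn main() {
    let base = Duration::from_millis(100);
    let max = Duration::from_secs(2);
    // Delays grow 100ms, 200ms, 400ms, ... until capped at 2s
    for attempt in 0..5 {
        println!("{} ms", backoff_delay(base, attempt, max).as_millis());
    }
}
```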
- Total tests: 1,338
- Error handling tests: 122
- Percentage: 9.1% of test suite
```text
crates/
├── prtip-scanner/tests/
│   ├── common/
│   │   ├── mod.rs # Module declarations
│   │   └── error_injection.rs # 270 lines, 11 failure modes
│   └── test_error_injection.rs # 120 lines, 11 integration tests
├── prtip-core/tests/
│   ├── test_circuit_breaker.rs # 520 lines, 18 tests
│   ├── test_retry.rs # 440 lines, 14 tests
│   └── test_resource_monitor.rs # 290 lines, 15 tests
└── prtip-cli/tests/
    ├── test_error_messages.rs # 520 lines, 20 tests
    ├── test_error_integration.rs # 385 lines, 15 tests
    └── test_edge_cases.rs # 370 lines, 18 tests
```
Sprint 5.1 added comprehensive IPv6 test coverage (40 new tests across Phases 4.1-4.2).
IPv6 CLI Flags: crates/prtip-cli/tests/test_ipv6_cli_flags.rs (452 lines, 29 tests)
Tests all IPv6 protocol preference and enforcement flags:
- `-6`/`--ipv6` flag parsing and behavior
- `-4`/`--ipv4` flag parsing and behavior
- `--prefer-ipv6`/`--prefer-ipv4` with fallback
- `--ipv6-only`/`--ipv4-only` strict modes
- Flag conflict detection (e.g., `-6` + `-4` together)
- Hostname resolution with protocol preference
- Error handling for protocol mismatches
Cross-Scanner IPv6: crates/prtip-scanner/tests/test_cross_scanner_ipv6.rs (309 lines, 11 tests)
Tests all 6 scanner types against IPv6 loopback (::1):
- TCP Connect Scanner (IPv6 port states)
- SYN Scanner (IPv6 SYN/ACK handling)
- UDP Scanner (IPv6 + ICMPv6 Port Unreachable)
- Stealth Scanners (FIN/NULL/Xmas/ACK on IPv6)
- Discovery Engine (ICMPv6 Echo + NDP)
- Decoy Scanner (IPv6 /64 IID generation)
```bash
# All IPv6 tests
cargo test ipv6

# CLI flag tests
cargo test -p prtip-cli test_ipv6_cli_flags

# Cross-scanner tests
cargo test -p prtip-scanner test_cross_scanner_ipv6

# Specific scanner test
cargo test -p prtip-scanner test_tcp_connect_ipv6_loopback
```

| Component | Tests | Lines | Coverage |
|---|---|---|---|
| IPv6 CLI Flags | 29 | 452 | 75%+ |
| Cross-Scanner IPv6 | 11 | 309 | 85%+ |
| TCP Connect IPv6 | Integrated | - | 90%+ |
| SYN Scanner IPv6 | Integrated | - | 85%+ |
| UDP Scanner IPv6 | Integrated | - | 80%+ |
| Stealth Scanners IPv6 | Integrated | - | 80%+ |
| Discovery Engine IPv6 | 7 | 158 | 85%+ |
| Decoy Scanner IPv6 | 7 | 144 | 80%+ |
```rust
// Cross-scanner IPv6 consistency test
#[tokio::test]
async fn test_all_scanners_support_ipv6_loopback() {
    let loopback_v6 = "::1".parse::<IpAddr>().unwrap();
    let ports = vec![22, 80, 443, 3306, 5432];

    // Test all 6 scanner types
    let scanners = vec![
        ScanType::TcpConnect,
        ScanType::Syn,
        ScanType::Udp,
        ScanType::Fin,
        ScanType::Discovery,
        ScanType::Decoy,
    ];

    for scan_type in scanners {
        let config = ScanConfig {
            scan_type,
            targets: vec![Target::Single(SocketAddr::new(loopback_v6, 0))],
            ports: ports.clone(),
            timeout: Duration::from_secs(5),
            ..Default::default()
        };
        let results = run_scan(config).await.unwrap();

        // Verify scan completed without errors
        assert!(!results.is_empty(), "Scanner {:?} failed on IPv6", scan_type);

        // Verify protocol consistency
        for result in results {
            assert_eq!(result.target.ip(), loopback_v6);
        }
    }
}
```

IPv6 loopback performance (6 ports):
- TCP Connect: ~5ms (parity with IPv4)
- SYN: ~10ms (slightly slower due to larger packets)
- UDP: ~50ms (timeout-dependent)
- Stealth: ~10-15ms each
- Discovery (ICMPv6+NDP): ~50ms
- Decoy: ~20ms (5 decoys)
All scanners complete <100ms on loopback, validating zero regressions.
Sprint 4.22 Phase 7 added comprehensive error handling test coverage (122 tests).
Located: crates/prtip-scanner/tests/common/error_injection.rs
Provides deterministic failure simulation for testing error paths:
- Failure Modes: ConnectionRefused, Timeout, NetworkUnreachable, HostUnreachable, ConnectionReset, ConnectionAborted, WouldBlock, Interrupted, TooManyOpenFiles, MalformedResponse, InvalidEncoding, plus the SuccessAfter and Probabilistic meta-modes
- Retriability Classification: Automatic detection of transient vs permanent errors
- Attempt Tracking: Reset-able counters for retry testing
Example usage:
```rust
use common::error_injection::{ErrorInjector, FailureMode};

let target = "127.0.0.1:80".parse().unwrap();
let injector = ErrorInjector::new(target, FailureMode::Timeout(Duration::from_secs(5)));
let result = injector.inject_connection_error();
assert!(result.is_err());
```

Circuit Breaker (18 tests): State transitions, threshold detection, cooldown timing, per-target isolation
Retry Logic (14 tests): Max attempts, exponential backoff, transient error detection, permanent error handling
Resource Monitor (15 tests): Memory thresholds, FD limits, graceful degradation, alert generation
Error Messages (20 tests): User-facing clarity, recovery suggestions, context completeness, no stack traces
Integration (15 tests): End-to-end CLI scenarios, exit codes, input validation, permission handling
Edge Cases (18 tests): Boundary conditions (port 0/65535/65536), CIDR extremes (/0, /31, /32), resource limits
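The circuit-breaker tests above exercise a Closed → Open → HalfOpen state machine. A deliberately simplified sketch of those transitions (type and method names here are illustrative, not the prtip-core API, and the real implementation adds per-target isolation and atomic counters):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum CircuitState {
    Closed,   // traffic flows; failures are counted
    Open,     // traffic blocked until cooldown elapses
    HalfOpen, // one probe allowed; success closes, failure reopens
}

struct CircuitBreaker {
    state: CircuitState,
    consecutive_failures: u32,
    failure_threshold: u32,
}

impl CircuitBreaker {
    fn new(failure_threshold: u32) -> Self {
        Self {
            state: CircuitState::Closed,
            consecutive_failures: 0,
            failure_threshold,
        }
    }

    fn record_failure(&mut self) {
        self.consecutive_failures += 1;
        if self.consecutive_failures >= self.failure_threshold {
            self.state = CircuitState::Open; // trip the breaker
        }
    }

    fn record_success(&mut self) {
        self.consecutive_failures = 0;
        self.state = CircuitState::Closed;
    }

    /// Called once the cooldown period has elapsed.
    fn cooldown_elapsed(&mut self) {
        if self.state == CircuitState::Open {
            self.state = CircuitState::HalfOpen; // allow one probe
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(3);
    cb.record_failure();
    cb.record_failure();
    assert_eq!(cb.state, CircuitState::Closed); // below threshold
    cb.record_failure();
    assert_eq!(cb.state, CircuitState::Open); // threshold reached
    cb.cooldown_elapsed();
    assert_eq!(cb.state, CircuitState::HalfOpen);
    cb.record_success();
    assert_eq!(cb.state, CircuitState::Closed); // probe succeeded
}
```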
- Error modules: 85%+ (crates/*/src/error.rs)
- Circuit breaker: 90%+ (crates/prtip-core/src/circuit_breaker.rs)
- Retry logic: 85%+ (crates/prtip-core/src/retry.rs)
- Resource monitor: 80%+ (crates/prtip-core/src/resource_monitor.rs)
- Overall: 61.92%+ maintained
- Total tests: 1,216 → 1,338 (+122 = +10%)
- Success rate: 100% (all passing, zero regressions)
- Performance: < 5% overhead
- Zero clippy warnings
- Review Performance Baselines for benchmark targets
- Consult Security Guide for security testing requirements
- See Architecture for component testing boundaries