JustInTheCode/LangPerformanceEval

A baseline performance comparison of C#, Go, and Rust covering startup time, execution time, memory usage, CPU time, and binary size.

Context

I built this to support a programming language evaluation at work. We were deciding which language to use for a new platform running on resource-constrained hardware, and one of the key concerns was whether .NET's baseline memory footprint would be too high when splitting our application into many smaller containerized services.

This benchmark was never meant to be an exhaustive or definitive performance comparison. It served a specific purpose: get a reasonable baseline to compare the three languages under straightforward, equivalent workloads with no tuning or optimization for any of them. The goal was relative insight, not absolute precision.

Performance was one aspect of a broader evaluation. Building these benchmarks also gave me hands-on time with Go and Rust (which I had no prior experience with), and helped evaluate their IDEs, build systems, package managers, and ecosystems more generally.

This project was originally put together in early 2025.

Important caveats:

  • This is a deliberately narrow benchmark. It only tests CPU-bound work and memory allocation to get a feel for baseline performance. It says nothing about I/O, networking, concurrency, etc., where the results could look completely different.
  • No performance tuning was applied. Collections aren't pre-sized, Rust doesn't use arena allocation, and .NET's GC isn't configured. The point was to compare baseline, out-of-the-box behavior.
  • Results were gathered on my Windows machine (native) and in a Debian-based Docker container (Linux) on the same machine. They are not meant to be reproducible across different hardware.

What It Benchmarks

Two types of apps were created for each language:

Light apps do almost nothing (print process info, then sleep). They exist to measure the base memory footprint and startup overhead of each language runtime, which matters when you're thinking about running 10+ small services on constrained hardware.
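The shape of a light app can be sketched in a few lines of Rust. This is an illustration, not the repository's actual code; the sleep duration in particular is an assumption (the real apps idle long enough for the profiler to sample memory):

```rust
use std::{process, thread, time::Duration};

fn main() {
    // Print basic process info so the harness can identify the run.
    println!("pid: {}", process::id());
    // Idle so peak memory and startup overhead can be measured without
    // any workload noise. Duration here is illustrative only.
    thread::sleep(Duration::from_millis(100));
}
```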

Workload apps perform identical CPU- and memory-intensive work:

  • Compute sqrt() across all positive 32-bit integers (~2.1 billion iterations)
  • Create 1 million person objects (heap-allocated) as a list, dictionary, and JSON-serialized list
  • Repeat the above with value-type structs
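The sqrt pass can be sketched as follows. This is an assumption about the structure, not the repo's actual code; the upper bound is a parameter here so the sketch runs quickly, whereas the benchmark iterates the full positive 32-bit range (`1..=i32::MAX`):

```rust
// Sum sqrt() over 1..=upper. The benchmark's version would use
// upper = i32::MAX as u32 (~2.1 billion iterations).
fn sum_sqrts(upper: u32) -> f64 {
    let mut sum = 0.0;
    for i in 1..=upper {
        sum += (i as f64).sqrt();
    }
    sum
}

fn main() {
    // Small bound for illustration only.
    println!("{}", sum_sqrts(1_000_000));
}
```

Accumulating into a sum keeps the compiler from optimizing the loop away, which matters equally in all three languages.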

Since Go and Rust don't have classes, heap allocation is simulated using pointers (Go) and Box<T> (Rust).
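A minimal sketch of the Rust side of this, with an assumed `Person` shape (the real benchmark's fields and JSON step are not shown here):

```rust
// Hypothetical record type; field names are assumptions.
struct Person {
    id: u64,
    name: String,
}

fn build_people(count: u64) -> Vec<Box<Person>> {
    // Box<Person> forces each record onto the heap, mirroring a C# class
    // instance. Without Box, Vec<Person> stores the structs inline in the
    // vector's buffer, which corresponds to the value-type variant.
    (0..count)
        .map(|id| Box::new(Person { id, name: format!("Person {id}") }))
        .collect()
}

fn main() {
    let people = build_people(1_000);
    println!("built {} boxed people", people.len());
}
```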

How It Works

A separate PerformanceProfiler app (written in C#) acts as the benchmarking harness. It:

  1. Executes each target app a configurable number of times
  2. Measures startup time, execution time, peak memory, CPU time, and binary size
  3. Parses structured JSON output from each app to separate startup from execution time
  4. Outputs results to a CSV file
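Step 3 relies on each target app printing its own timing as JSON. The field name below is an assumption, not the harness's actual schema; the idea is that the app reports how long its work took, so the profiler can subtract that from total wall time to isolate startup overhead:

```rust
use std::time::Instant;

// Hypothetical output format; the real schema may differ.
fn timing_json(execution_ms: u128) -> String {
    format!("{{\"execution_ms\":{execution_ms}}}")
}

fn main() {
    let start = Instant::now();
    // ... benchmark work runs here ...
    // startup = (wall time measured by the harness) - execution_ms
    println!("{}", timing_json(start.elapsed().as_millis()));
}
```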

For .NET, both JIT and AOT compiled versions were tested.

Benchmarks were run 30 times each on:

  • Windows (host machine)
  • Linux (Debian-based Docker container)

Results (Averages)

Binary Size (KiB)

Light apps:

|         | .NET JIT | .NET AOT | Go    | Rust |
|---------|---------:|---------:|------:|-----:|
| Windows | 65,953   | 2,624    | 2,741 | 162  |
| Linux   | 65,281   | 2,939    | 2,635 | 458  |

Workload apps:

|         | .NET JIT | .NET AOT | Go    | Rust |
|---------|---------:|---------:|------:|-----:|
| Windows | 65,965   | 2,757    | 3,216 | 234  |
| Linux   | 65,293   | 3,096    | 3,125 | 548  |

Peak Memory (MiB)

Light apps:

|         | .NET JIT | .NET AOT | Go | Rust |
|---------|---------:|---------:|---:|-----:|
| Windows | 23       | 9        | 6  | 4    |
| Linux   | 33       | 9        | 3  | 1    |

Workload apps:

|         | .NET JIT | .NET AOT | Go  | Rust |
|---------|---------:|---------:|----:|-----:|
| Windows | 543      | 488      | 554 | 381  |
| Linux   | 507      | 472      | 528 | 410  |

Startup Time (ms)

Light apps:

|         | .NET JIT | .NET AOT | Go | Rust |
|---------|---------:|---------:|---:|-----:|
| Windows | 39       | 13       | 10 | 9    |
| Linux   | 72       | 1        | 1  | <1   |

Workload apps:

|         | .NET JIT | .NET AOT | Go | Rust |
|---------|---------:|---------:|---:|-----:|
| Windows | 29       | 14       | 13 | 11   |
| Linux   | 67       | 2        | 2  | 1    |

Execution Time (ms)

Light apps:

|         | .NET JIT | .NET AOT | Go | Rust |
|---------|---------:|---------:|---:|-----:|
| Windows | 4        | <1       | <1 | <1   |
| Linux   | 10       | 1        | <1 | <1   |

Workload apps:

|         | .NET JIT | .NET AOT | Go    | Rust  |
|---------|---------:|---------:|------:|------:|
| Windows | 5,191    | 5,325    | 6,529 | 5,080 |
| Linux   | 7,870    | 8,357    | 9,233 | 7,532 |

Total Time (ms)

Light apps:

|         | .NET JIT | .NET AOT | Go | Rust |
|---------|---------:|---------:|---:|-----:|
| Windows | 42       | 13       | 10 | 9    |
| Linux   | 82       | 2        | 1  | <1   |

Workload apps:

|         | .NET JIT | .NET AOT | Go    | Rust  |
|---------|---------:|---------:|------:|------:|
| Windows | 5,220    | 5,339    | 6,542 | 5,091 |
| Linux   | 7,937    | 8,359    | 9,235 | 7,533 |

CPU Time (ms)

Light apps:

|         | .NET JIT | .NET AOT | Go | Rust |
|---------|---------:|---------:|---:|-----:|
| Windows | 33       | 6        | 3  | 2    |
| Linux   | 21       | <1       | <1 | <1   |

Workload apps:

|         | .NET JIT | .NET AOT | Go     | Rust  |
|---------|---------:|---------:|-------:|------:|
| Windows | 5,092    | 5,174    | 7,114  | 4,910 |
| Linux   | 8,150    | 8,596    | 10,216 | 7,523 |

CPU Load (%) - Workload Apps

|         | .NET JIT | .NET AOT | Go   | Rust |
|---------|---------:|---------:|-----:|-----:|
| Windows | 98%      | 97%      | 109% | 96%  |
| Linux   | 103%     | 103%     | 111% | 100% |

Full results (mean, min, max) are in benchmark_results_windows.csv and benchmark_results_linux.csv.

Key Takeaways

  • Rust was the clear winner across all metrics: smallest binaries, lowest memory, fastest execution.
  • .NET AOT performed surprisingly well and outperformed Go in execution time and peak memory under load. Its main downside is a higher baseline memory footprint compared to Go.
  • .NET JIT has the highest startup time (due to compiling IL to native code at runtime) and the largest binaries (the full runtime and IL are bundled, without the aggressive trimming that AOT applies). This makes it less suitable for heavily modularized deployments on constrained hardware.
  • Go had the simplest learning curve but was the slowest in execution and used the most CPU time. Its advantage is low baseline memory, sitting between .NET AOT and Rust.

Interesting Findings

Liveness analysis changes when memory is freed. In .NET and Go, the compiler tracks which variables are still in use and marks them as dead once they're no longer read, even if they haven't gone out of scope yet. The GC can then collect them early. Rust has no equivalent — memory is freed when the owning variable goes out of scope, no sooner. I initially wrote the benchmarks returning all results back to main, which didn't affect .NET or Go (the GC cleaned up early anyway), but in Rust it kept everything allocated simultaneously. This made Rust appear to use 3x more memory until I restructured the code to drop allocations within each function instead.
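The restructuring can be sketched in Rust. These are illustrative functions under assumed workload shapes, not the benchmark's actual code:

```rust
// "Keep everything" shape: returning all results to main means both
// vectors stay allocated until main ends, since Rust frees memory
// strictly at end of scope, with no liveness-based early collection.
fn run_all_and_keep() -> (Vec<u64>, Vec<u64>) {
    let a: Vec<u64> = (0..1_000_000).collect();
    let b: Vec<u64> = (0..1_000_000).collect();
    (a, b) // both allocations live simultaneously in the caller
}

// Restructured shape: each workload drops its allocation before the
// next one starts, so peak memory is one workload's worth.
fn run_one() -> u64 {
    let data: Vec<u64> = (0..1_000_000).collect();
    data.iter().sum()
    // `data` is dropped here, before the caller allocates again
}

fn main() {
    let (a, b) = run_all_and_keep();
    println!("kept: {} + {} elements at once", a.len(), b.len());

    println!("sum: {}", run_one() + run_one());
}
```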

OS allocators matter. Rust's memory usage on Linux was unexpectedly high because glibc's malloc holds onto freed memory for potential reuse. On Windows, the heap API returns it immediately. Adding explicit malloc_trim() calls on Linux brought the numbers in line. In .NET and Go you never have to think about this since the runtime handles it.
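A sketch of the Linux-side workaround, assuming a glibc target (`malloc_trim` is a glibc extension, so the declaration is gated to Linux; on other platforms the call is a no-op):

```rust
#[cfg(target_os = "linux")]
extern "C" {
    // glibc extension: asks the allocator to return freed heap memory
    // to the OS instead of caching it for reuse. Not portable.
    fn malloc_trim(pad: usize) -> i32;
}

fn release_freed_memory() {
    #[cfg(target_os = "linux")]
    unsafe {
        malloc_trim(0);
    }
}

fn main() {
    let big: Vec<u8> = vec![0; 64 * 1024 * 1024];
    drop(big); // freed, but glibc may keep the pages mapped
    release_freed_memory(); // now the pages go back to the OS
    println!("trim requested");
}
```

Calling this after each workload phase is what brought the Linux peak-memory numbers in line with Windows.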

AOT has a hidden memory advantage over JIT. With AOT, the compiled code lives in the binary on disk, so the OS treats those memory pages as clean (file-backed) and can evict them under memory pressure and reload them later. With JIT, the compiled code is generated at runtime into anonymous memory pages (dirty memory) that the kernel cannot simply discard; without swap, they stay resident.

Conclusion

From a pure performance perspective, switching from .NET to Go didn't make sense. .NET AOT was nearly identical to Go across most metrics and even outperformed it in execution time and memory under load. .NET also offers the flexibility of choosing between AOT and JIT, since JIT with PGO can outperform AOT in execution time depending on the workload. Combined with existing team experience in .NET, the case for Go was weak.

Rust was the clear performance winner, but its steep learning curve and the additional complexity it demands from developers made it hard to justify without knowing exactly how constrained the target environment would be.

Project Structure

DotNetLight/          # .NET 8 minimal app
DotNetWorkload/       # .NET 8 workload app
GolangLight/          # Go minimal app
GolangWorkload/       # Go workload app
RustLight/            # Rust minimal app
RustWorkload/         # Rust workload app
PerformanceProfiler/  # C# benchmarking harness
Publish.ps1           # Windows build script
publish.sh            # Linux build script
Dockerfile            # Multi-stage build for Linux

Building

Windows:

.\Publish.ps1

Linux / Docker:

docker build -t lang-perf-eval .

The build scripts compile all apps (.NET JIT, .NET AOT, Go, and Rust) and output the binaries to the project root.

License

MIT
