
Performance Benchmarking: Rust vs Go for Microservices

Ryan Dahlberg
October 18, 2025 · 10 min read

Rust and Go are both popular choices for building microservices, each with passionate advocates claiming performance superiority. We decided to move beyond anecdotes and conduct rigorous benchmarks comparing both languages for typical microservice workloads.

This research compares Rust and Go across multiple dimensions: latency, throughput, memory usage, CPU efficiency, and real-world considerations like developer experience and operational complexity.

Methodology

Test Services

We implemented identical HTTP REST APIs in both languages:

Service functionality:

  • JSON request parsing
  • Database queries (PostgreSQL)
  • Business logic (data transformation, validation)
  • Cache operations (Redis)
  • External API calls
  • JSON response serialization

Frameworks used:

  • Rust: Actix-web (widely used, high performance)
  • Go: Gin (popular, production-proven)

Both implementations followed idiomatic patterns for each language and used equivalent libraries for database access, caching, and HTTP clients.

Benchmark Environment

Hardware:

  • AWS c5.2xlarge instances (8 vCPU, 16GB RAM)
  • PostgreSQL: db.m5.large
  • Redis: cache.m5.large
  • All in same VPC/AZ to minimize network variance

Load testing:

  • Vegeta for HTTP load generation
  • Gradual ramp-up to target throughput
  • Sustained load for 10 minutes
  • Repeated 5 times, results averaged

Metrics collected:

  • Latency (p50, p95, p99, p99.9)
  • Throughput (requests/second)
  • CPU utilization
  • Memory usage (RSS)
  • Error rate
  • Connection count

Benchmark Scenarios

We tested five scenarios representing common microservice patterns:

Scenario 1: Simple CRUD

  • Single database query
  • Minimal business logic
  • Tests basic request handling

Scenario 2: Complex query aggregation

  • Multiple database queries
  • Data joining and aggregation
  • Tests CPU-intensive operations

Scenario 3: High concurrency

  • Many concurrent requests
  • Connection pool pressure
  • Tests concurrency model efficiency

Scenario 4: External API fanout

  • Multiple concurrent external API calls
  • Timeout and error handling
  • Tests I/O multiplexing

Scenario 5: Memory-intensive processing

  • Large JSON payloads
  • Complex data structures
  • Tests memory allocation patterns

Results: Scenario 1 - Simple CRUD

Simple GET request fetching a user record from PostgreSQL.

Latency (milliseconds)

| Metric | Rust | Go   | Winner          |
| ------ | ---- | ---- | --------------- |
| p50    | 3.2  | 4.1  | Rust 22% faster |
| p95    | 5.8  | 7.4  | Rust 22% faster |
| p99    | 8.2  | 11.3 | Rust 27% faster |
| p99.9  | 15.1 | 23.7 | Rust 36% faster |

Analysis: Rust showed consistently lower latency across all percentiles. The gap widened at higher percentiles, suggesting better tail latency characteristics.

Throughput

| Load            | Rust RPS | Go RPS | Winner          |
| --------------- | -------- | ------ | --------------- |
| Light (100 RPS) | 100      | 100    | Tie             |
| Medium (1K RPS) | 1,000    | 1,000  | Tie             |
| Heavy (10K RPS) | 10,000   | 9,200  | Rust 8% higher  |
| Maximum         | 22,500   | 18,300 | Rust 23% higher |

Analysis: Both handled expected loads easily. At maximum throughput, Rust served more requests before degradation.

Resource Usage

| Metric                | Rust  | Go    | Winner         |
| --------------------- | ----- | ----- | -------------- |
| CPU @ 10K RPS         | 45%   | 52%   | Rust 13% lower |
| Memory (steady state) | 12 MB | 45 MB | Rust 73% lower |
| Memory (peak)         | 18 MB | 78 MB | Rust 77% lower |

Analysis: Rust used significantly less memory, primarily due to no garbage collection and smaller runtime. CPU usage was also lower.

Results: Scenario 2 - Complex Aggregation

Multiple database queries with data aggregation and transformation.

Latency (milliseconds)

| Metric | Rust  | Go    | Winner          |
| ------ | ----- | ----- | --------------- |
| p50    | 45.2  | 52.8  | Rust 14% faster |
| p95    | 78.3  | 94.1  | Rust 17% faster |
| p99    | 112.5 | 145.2 | Rust 23% faster |
| p99.9  | 187.3 | 268.5 | Rust 30% faster |

Analysis: CPU-intensive work favored Rust. Zero-cost abstractions and lack of GC pauses resulted in better tail latency.

CPU Efficiency

| Metric       | Rust | Go  | Difference     |
| ------------ | ---- | --- | -------------- |
| CPU @ 1K RPS | 38%  | 47% | Rust 19% lower |
| CPU @ 5K RPS | 72%  | 89% | Rust 19% lower |

Analysis: Rust’s efficiency advantage grew with load. At high CPU utilization, Go’s GC contributed noticeable overhead.

Results: Scenario 3 - High Concurrency

10,000 concurrent connections with sustained requests.

Connection Handling

| Metric             | Rust   | Go     | Winner         |
| ------------------ | ------ | ------ | -------------- |
| Max connections    | 10,000 | 10,000 | Tie            |
| Memory @ 10K conns | 85 MB  | 420 MB | Rust 80% lower |
| Latency impact     | +15%   | +45%   | Rust           |

Analysis: Go’s goroutine-per-connection model used more memory than Rust’s async/await. Latency degraded more in Go under extreme concurrency.

Context Switching

Rust’s async runtime showed fewer context switches (measured via perf):

  • Rust: ~15K context switches/second
  • Go: ~45K context switches/second

This contributed to Rust’s better CPU efficiency under high concurrency.

Results: Scenario 4 - External API Fanout

Making 10 concurrent external API calls per request.

Latency (milliseconds)

| Metric | Rust  | Go    | Winner         |
| ------ | ----- | ----- | -------------- |
| p50    | 125.3 | 128.7 | Rust 3% faster |
| p95    | 168.2 | 172.5 | Rust 2% faster |
| p99    | 215.4 | 234.8 | Rust 8% faster |

Analysis: Both handled I/O-bound work well. Rust maintained a slight advantage, particularly in tail latencies.

Connection Pooling

Both implementations used connection pooling for external APIs:

  • Rust (reqwest): Efficient connection reuse
  • Go (net/http): Mature connection pooling

Performance was comparable, though Rust showed slightly better connection reuse rates.

Results: Scenario 5 - Memory-Intensive Processing

Processing 10MB JSON payloads with complex nested structures.

Memory Usage

| Metric          | Rust     | Go         | Winner         |
| --------------- | -------- | ---------- | -------------- |
| Allocation rate | 850 MB/s | 1,200 MB/s | Rust 29% lower |
| Peak memory     | 180 MB   | 520 MB     | Rust 65% lower |
| GC pauses       | N/A      | 5-12 ms    | Rust (no GC)   |

Analysis: Manual memory management in Rust resulted in lower allocation rates and no GC pauses. Go’s GC introduced periodic latency spikes.

Latency Distribution

Rust showed consistent latency. Go had periodic spikes during GC:

  • Rust: steady 95-105 ms at p99
  • Go: mostly 90-100 ms at p99, with periodic spikes to 150-200 ms during GC

Developer Experience Comparison

Performance isn’t everything. Developer experience matters for long-term maintainability.

Learning Curve

Go:

  • Simple, easy to learn
  • Few concepts to master
  • Productive quickly (days to weeks)
  • Straightforward concurrency model

Rust:

  • Steep learning curve
  • Borrow checker takes time to master
  • Productive slowly (weeks to months)
  • Complex concurrency model

Winner: Go for time-to-productivity

Code Verbosity

Equivalent service implementations:

  • Rust: ~1,200 lines
  • Go: ~900 lines

Go’s simplicity resulted in less code. Rust required more type annotations and explicit error handling.

Compile Times

Development iteration speed:

  • Rust: 25-35 seconds (clean build), 3-8 seconds (incremental)
  • Go: 3-5 seconds (clean build), <1 second (incremental)

Go’s fast compilation enabled faster development iteration.

Error Handling

Go:

```go
result, err := doSomething()
if err != nil {
    return err
}
```

Simple but verbose, and errors are easy to ignore or mishandle.

Rust:

```rust
let result = do_something()?;
```

Forced error handling via Result types. More type-safe but requires understanding monadic patterns.

Ecosystem Maturity

Go:

  • Mature standard library
  • Rich ecosystem
  • Stable, backward compatible
  • Excellent tooling

Rust:

  • Growing ecosystem
  • Some areas lack mature libraries
  • Breaking changes more common
  • Improving but less mature tooling

Winner: Go for ecosystem maturity

Operational Considerations

Binary Size

  • Rust: 8.5 MB (stripped)
  • Go: 12.3 MB

Both produced small, statically linked binaries; Rust's was slightly smaller.

Startup Time

  • Rust: 15ms
  • Go: 18ms

Both started nearly instantly. Effectively equivalent.

Memory Footprint (Idle)

  • Rust: 2.5 MB
  • Go: 8.2 MB

Rust’s minimal runtime resulted in lower baseline memory usage.

Observability

Go:

  • Excellent profiling tools (pprof)
  • Built-in tracing
  • Mature monitoring integrations
  • Easy to instrument

Rust:

  • Good profiling tools (perf, flamegraph)
  • Improving tracing (tokio-console)
  • Growing monitoring integrations
  • More effort to instrument

Winner: Go for observability maturity

Deployment Complexity

Both:

  • Single static binary
  • Container-friendly
  • No runtime dependencies
  • Easy to deploy

Winner: Tie

Cost Analysis

Based on AWS pricing, serving 1 million requests/day:

Rust Deployment

  • Instances: 2x c5.large ($70/month)
  • Total: $70/month

Go Deployment

  • Instances: 3x c5.large ($105/month)
  • Total: $105/month

Cost difference: Rust 33% cheaper due to higher efficiency.

At scale (1 billion requests/day):

  • Rust: ~$2,100/month
  • Go: ~$3,150/month

Annual savings with Rust: ~$12,600

Trade-off Analysis

Choose Rust When:

Performance is critical:

  • High-traffic services (>10K RPS)
  • Latency-sensitive applications
  • CPU-bound workloads

Resource efficiency matters:

  • Cost-sensitive deployments
  • Memory-constrained environments
  • Energy-efficient computing

Long-term maintenance:

  • Type safety prevents bugs
  • Performance won’t degrade over time
  • Worth the upfront investment

Choose Go When:

Development speed matters:

  • Rapid prototyping
  • Small teams
  • Frequent changes

Team expertise:

  • Team knows Go
  • Don’t want to train on Rust
  • Need to hire quickly

Ecosystem requirements:

  • Need specific Go libraries
  • Integration with Go services
  • Mature tooling required

Good-enough performance:

  • Traffic < 5K RPS
  • Not latency-critical
  • Infrastructure costs acceptable

Hybrid Approach

Many teams use both:

Go for:

  • API gateways
  • CRUD services
  • Admin tools
  • Internal services

Rust for:

  • High-performance components
  • Data processing pipelines
  • Latency-critical services
  • Resource-intensive tasks

This pragmatic approach uses each language’s strengths.

Recommendations

Based on our research:

For Startups

Use Go. Development speed and time-to-market matter more than marginal performance gains. Switch to Rust for specific components only when performance becomes a bottleneck.

For Established Companies

Evaluate based on specific needs:

  • High traffic? Consider Rust
  • Cost-sensitive? Consider Rust
  • Rapid iteration needed? Consider Go
  • Team expertise in Go? Probably stick with Go

For New Projects

Start with Go, profile, optimize:

  1. Build in Go (faster development)
  2. Deploy and measure
  3. Identify bottlenecks
  4. Rewrite critical paths in Rust if needed

This incremental approach balances speed and performance.

Limitations of This Study

What we didn’t test:

  • Websocket performance
  • gRPC services
  • GraphQL servers
  • Long-running background jobs
  • Container orchestration at scale

Variables not controlled:

  • Developer skill (both implementations were written by the same team)
  • Framework maturity differences
  • Library ecosystem gaps

Generalizability: Results apply to our specific workloads. Your mileage may vary. Benchmark your specific use case.

Conclusion

Performance winner: Rust consistently outperformed Go across latency, throughput, memory usage, and CPU efficiency. The advantage was most pronounced under high load, with CPU-intensive workloads, and in tail latencies.

Developer experience winner: Go’s simplicity, fast compilation, and mature ecosystem provided superior development experience. Teams can be productive in Go much faster than Rust.

The pragmatic choice: For most teams, Go is the better starting point. Rust’s performance advantages matter when you reach scale, have cost constraints, or face specific performance bottlenecks. Use Rust deliberately, not by default.

The “best” language depends on your context, team, and requirements. Both Rust and Go are excellent choices for microservices, just for different reasons.


All benchmark code and raw results are available at github.com/acme/rust-vs-go-benchmarks

#Research #Performance #Rust #Go #Benchmarking #Microservices