Benchmarks

Configuration

I would like to thank vincentbernat from exoscale.ch, who kindly provided the infrastructure needed for the benchmarks.

I used 4 VMs for the tests with the following configuration:

  • 32 GB RAM
  • 8 CPU Cores
  • 10 GB SSD
  • Ubuntu 14.04 LTS 64-bit

Setup

  • One VM used to launch the benchmarking tool wrk
  • One VM for traefik (v1.0.0-beta.416) / nginx (v1.4.6)
  • Two VMs for the two backend servers running whoami (a small web server written in Go)

Each VM has been tuned using the following limits:
```shell
sysctl -w fs.file-max="9999999"
sysctl -w fs.nr_open="9999999"
sysctl -w net.core.netdev_max_backlog="4096"
sysctl -w net.core.rmem_max="16777216"
sysctl -w net.core.somaxconn="65535"
sysctl -w net.core.wmem_max="16777216"
sysctl -w net.ipv4.ip_local_port_range="1025 65535"
sysctl -w net.ipv4.tcp_fin_timeout="30"
sysctl -w net.ipv4.tcp_keepalive_time="30"
sysctl -w net.ipv4.tcp_max_syn_backlog="20480"
sysctl -w net.ipv4.tcp_max_tw_buckets="400000"
sysctl -w net.ipv4.tcp_no_metrics_save="1"
sysctl -w net.ipv4.tcp_syn_retries="2"
sysctl -w net.ipv4.tcp_synack_retries="2"
sysctl -w net.ipv4.tcp_tw_recycle="1"
sysctl -w net.ipv4.tcp_tw_reuse="1"
sysctl -w vm.min_free_kbytes="65536"
sysctl -w vm.overcommit_memory="1"
ulimit -n 9999999
```
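Note that `sysctl -w` changes are lost on reboot. If you want the tuning to survive a reboot, the same keys can go into a drop-in file and be reloaded with `sysctl --system` — a sketch, with an illustrative file name (only a few keys shown; the rest follow the same `key = value` form):

```
# /etc/sysctl.d/99-benchmark.conf (illustrative name)
fs.file-max = 9999999
fs.nr_open = 9999999
net.core.somaxconn = 65535
net.ipv4.ip_local_port_range = 1025 65535
# ... remaining keys from the list above, in the same key = value form
```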

Nginx

Here is the Nginx configuration file used (/etc/nginx/nginx.conf):

```nginx
user www-data;
worker_processes auto;
worker_rlimit_nofile 200000;
pid /var/run/nginx.pid;

events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    keepalive_timeout 300;
    keepalive_requests 10000;

    types_hash_max_size 2048;

    open_file_cache max=200000 inactive=300s;
    open_file_cache_valid 300s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    server_tokens off;
    dav_methods off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    gzip off;
    gzip_vary off;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
```

Here is the Nginx vhost file used:

```nginx
upstream whoami {
    server IP-whoami1:80;
    server IP-whoami2:80;
    keepalive 300;
}

server {
    listen 8001;
    server_name test.traefik;
    access_log off;
    error_log /dev/null crit;

    if ($host != "test.traefik") {
        return 404;
    }

    location / {
        proxy_pass http://whoami;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header X-Forwarded-Host $host;
    }
}
```
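Before benchmarking, the Host-based routing above can be smoke-tested by hand. A sketch with curl — the `IP-nginx` placeholder stands for the proxy VM's address, and this check is an addition, not part of the original setup:

```shell
# Expected Host header: the if-block passes and the request is proxied
# to a whoami backend, so this should print 200.
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: test.traefik" http://IP-nginx:8001/

# Any other Host header: rejected by the vhost's if-block, so this
# should print 404.
curl -s -o /dev/null -w "%{http_code}\n" -H "Host: wrong.host" http://IP-nginx:8001/
```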

Traefik

Here is the traefik.toml file used:

```toml
MaxIdleConnsPerHost = 100000

defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":8000"

[file]

[backends]
  [backends.backend1]
    [backends.backend1.servers.server1]
    url = "http://IP-whoami1:80"
    weight = 1
    [backends.backend1.servers.server2]
    url = "http://IP-whoami2:80"
    weight = 1

[frontends]
  [frontends.frontend1]
  backend = "backend1"
    [frontends.frontend1.routes.test_1]
    rule = "Host: test.traefik"
```
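The exact invocation is not given in the original text; with a static file configuration like this one, traefik would presumably be started by pointing it at the file, e.g.:

```shell
# Start traefik with the static configuration file above.
./traefik --configFile=traefik.toml
```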

Results

whoami:

```shell
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-whoami:80/bench
Running 1m test @ http://IP-whoami:80/bench
  20 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    70.28ms  134.72ms   1.91s    89.94%
    Req/Sec     2.92k   742.42     8.78k    68.80%
  Latency Distribution
     50%   10.63ms
     75%   75.64ms
     90%  205.65ms
     99%  668.28ms
  3476705 requests in 1.00m, 384.61MB read
  Socket errors: connect 0, read 0, write 0, timeout 103
Requests/sec:  57894.35
Transfer/sec:      6.40MB
```

nginx:

```shell
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-nginx:8001/bench
Running 1m test @ http://IP-nginx:8001/bench
  20 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   101.25ms  180.09ms   1.99s    89.34%
    Req/Sec     1.69k   567.69     9.39k    72.62%
  Latency Distribution
     50%   15.46ms
     75%  129.11ms
     90%  302.44ms
     99%  846.59ms
  2018427 requests in 1.00m, 298.36MB read
  Socket errors: connect 0, read 0, write 0, timeout 90
Requests/sec:  33591.67
Transfer/sec:      4.97MB
```

traefik:

```shell
wrk -t20 -c1000 -d60s -H "Host: test.traefik" --latency http://IP-traefik:8000/bench
Running 1m test @ http://IP-traefik:8000/bench
  20 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    91.72ms  150.43ms   2.00s    90.50%
    Req/Sec     1.43k   266.37     2.97k    69.77%
  Latency Distribution
     50%   19.74ms
     75%  121.98ms
     90%  237.39ms
     99%  687.49ms
  1705073 requests in 1.00m, 188.63MB read
  Socket errors: connect 0, read 0, write 0, timeout 7
Requests/sec:  28392.44
Transfer/sec:      3.14MB
```

Conclusion

Traefik is obviously slower than Nginx, but not by much: Traefik can serve 28392 requests/sec and Nginx 33591 requests/sec, which gives a ratio of 85%. Not bad for a young project :) !
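The 85% figure follows directly from the two Requests/sec numbers above; a quick sanity check of the arithmetic:

```shell
# traefik throughput as a percentage of nginx throughput
awk 'BEGIN { printf "%.1f\n", 28392.44 / 33591.67 * 100 }'
# → 84.5
```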

Some areas of possible improvement:

  • Use GO_REUSEPORT listener
  • Run a separate server instance per CPU core with GOMAXPROCS=1 (the benchmarks showed far more context switches with traefik than with nginx)
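The second idea could be sketched as below, assuming the listener supports SO_REUSEPORT so that all instances can share port 8000 (otherwise each instance would need its own port behind a separate balancer). The 8-core count matches the VMs used here; the rest of the invocation is hypothetical:

```shell
# Hypothetical sketch: one single-threaded traefik instance pinned to each
# of the 8 cores, all started from the same static configuration.
for cpu in $(seq 0 7); do
  GOMAXPROCS=1 taskset -c "$cpu" ./traefik --configFile=traefik.toml &
done
wait
```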