Services

Configuring How to Reach the Services


The Services are responsible for configuring how to reach the actual services that will eventually handle the incoming requests.

Configuration Examples

Declaring an HTTP Service with Two Servers — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.my-service.loadBalancer]
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://<private-ip-server-1>:<private-port-server-1>/"
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://<private-ip-server-2>:<private-port-server-2>/"

YAML

  ## Dynamic configuration
  http:
    services:
      my-service:
        loadBalancer:
          servers:
            - url: "http://<private-ip-server-1>:<private-port-server-1>/"
            - url: "http://<private-ip-server-2>:<private-port-server-2>/"

Declaring a TCP Service with Two Servers — Using the File Provider

TOML

  ## Dynamic configuration
  [tcp.services]
    [tcp.services.my-service.loadBalancer]
      [[tcp.services.my-service.loadBalancer.servers]]
        address = "<private-ip-server-1>:<private-port-server-1>"
      [[tcp.services.my-service.loadBalancer.servers]]
        address = "<private-ip-server-2>:<private-port-server-2>"

YAML

  tcp:
    services:
      my-service:
        loadBalancer:
          servers:
            - address: "<private-ip-server-1>:<private-port-server-1>"
            - address: "<private-ip-server-2>:<private-port-server-2>"

Declaring a UDP Service with Two Servers — Using the File Provider

TOML

  ## Dynamic configuration
  [udp.services]
    [udp.services.my-service.loadBalancer]
      [[udp.services.my-service.loadBalancer.servers]]
        address = "<private-ip-server-1>:<private-port-server-1>"
      [[udp.services.my-service.loadBalancer.servers]]
        address = "<private-ip-server-2>:<private-port-server-2>"

YAML

  udp:
    services:
      my-service:
        loadBalancer:
          servers:
            - address: "<private-ip-server-1>:<private-port-server-1>"
            - address: "<private-ip-server-2>:<private-port-server-2>"

Configuring HTTP Services

Servers Load Balancer

The load balancers are able to load balance the requests between multiple instances of your programs.

Each service has a load-balancer, even if there is only one server to forward traffic to.

Declaring a Service with Two Servers (with Load Balancing) — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.my-service.loadBalancer]
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://private-ip-server-1/"
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://private-ip-server-2/"

YAML

  http:
    services:
      my-service:
        loadBalancer:
          servers:
            - url: "http://private-ip-server-1/"
            - url: "http://private-ip-server-2/"

Servers

Servers declare a single instance of your program. The url option points to a specific instance.

Paths in the servers’ url have no effect. If you want the requests to be sent to a specific path on your servers, configure your routers to use a corresponding middleware (e.g. the AddPrefix or ReplacePath middlewares), as in the sketch below.
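
As an illustration of that note, here is a minimal, hypothetical file-provider sketch (router, middleware, and service names are made up) where an addPrefix middleware attached to the router sends every request to a sub-path of the server:

  ## Dynamic configuration
  http:
    routers:
      my-router:
        rule: "Host(`example.com`)"
        middlewares:
          - add-api-prefix
        service: my-service

    middlewares:
      add-api-prefix:
        addPrefix:
          prefix: "/api"

    services:
      my-service:
        loadBalancer:
          servers:
            - url: "http://private-ip-server-1/"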

A Service with One Server — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.my-service.loadBalancer]
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://private-ip-server-1/"

YAML

  ## Dynamic configuration
  http:
    services:
      my-service:
        loadBalancer:
          servers:
            - url: "http://private-ip-server-1/"

Load-balancing

For now, only round robin load balancing is supported:

Load Balancing — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.my-service.loadBalancer]
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://private-ip-server-1/"
      [[http.services.my-service.loadBalancer.servers]]
        url = "http://private-ip-server-2/"

YAML

  ## Dynamic configuration
  http:
    services:
      my-service:
        loadBalancer:
          servers:
            - url: "http://private-ip-server-1/"
            - url: "http://private-ip-server-2/"

Sticky sessions

When sticky sessions are enabled, a cookie is set on the initial request and response to let the client know which server handles the first response. On subsequent requests, to keep the session alive with the same server, the client should resend the same cookie.
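
As a rough illustration of that exchange (the cookie name and server address below are made-up values; the actual cookie name is generated as described under "Cookie Name"), it could look like this with curl:

  # First response: Traefik chooses a server and sets the affinity cookie
  curl -sD - -o /dev/null http://localhost:8000
  # Set-Cookie: _1d52e=http://10.0.0.10:80; Path=/

  # Subsequent requests: resend the cookie to stay on the same server
  curl -b "_1d52e=http://10.0.0.10:80" http://localhost:8000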

Stickiness on multiple levels

When chaining or mixing load-balancers (e.g. a load-balancer of servers is one of the “children” of a load-balancer of services), for stickiness to work all the way, the option needs to be specified at all required levels. This means the client needs to send a cookie with as many key/value pairs as there are sticky levels.

Stickiness & Unhealthy Servers

If the server specified in the cookie becomes unhealthy, the request will be forwarded to a new server (and the cookie will keep track of the new server).

Cookie Name

The default cookie name is an abbreviation of a sha1 hash (e.g. _1d52e).

Secure & HTTPOnly & SameSite flags

By default, the affinity cookie is created without those flags. However, this can be changed through configuration.

SameSite can be none, lax, strict or empty.

Adding Stickiness — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.my-service]
      [http.services.my-service.loadBalancer.sticky.cookie]

YAML

  ## Dynamic configuration
  http:
    services:
      my-service:
        loadBalancer:
          sticky:
            cookie: {}

Adding Stickiness with custom Options — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.my-service]
      [http.services.my-service.loadBalancer.sticky.cookie]
        name = "my_sticky_cookie_name"
        secure = true
        httpOnly = true
        sameSite = "none"

YAML

  ## Dynamic configuration
  http:
    services:
      my-service:
        loadBalancer:
          sticky:
            cookie:
              name: my_sticky_cookie_name
              secure: true
              httpOnly: true
              sameSite: none

Setting Stickiness on all the required levels — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.wrr1]
      [http.services.wrr1.weighted.sticky.cookie]
        name = "lvl1"
      [[http.services.wrr1.weighted.services]]
        name = "whoami1"
        weight = 1
      [[http.services.wrr1.weighted.services]]
        name = "whoami2"
        weight = 1

    [http.services.whoami1]
      [http.services.whoami1.loadBalancer]
        [http.services.whoami1.loadBalancer.sticky.cookie]
          name = "lvl2"
        [[http.services.whoami1.loadBalancer.servers]]
          url = "http://127.0.0.1:8081"
        [[http.services.whoami1.loadBalancer.servers]]
          url = "http://127.0.0.1:8082"

    [http.services.whoami2]
      [http.services.whoami2.loadBalancer]
        [http.services.whoami2.loadBalancer.sticky.cookie]
          name = "lvl2"
        [[http.services.whoami2.loadBalancer.servers]]
          url = "http://127.0.0.1:8083"
        [[http.services.whoami2.loadBalancer.servers]]
          url = "http://127.0.0.1:8084"

YAML

  ## Dynamic configuration
  http:
    services:
      wrr1:
        weighted:
          sticky:
            cookie:
              name: lvl1
          services:
            - name: whoami1
              weight: 1
            - name: whoami2
              weight: 1

      whoami1:
        loadBalancer:
          sticky:
            cookie:
              name: lvl2
          servers:
            - url: http://127.0.0.1:8081
            - url: http://127.0.0.1:8082

      whoami2:
        loadBalancer:
          sticky:
            cookie:
              name: lvl2
          servers:
            - url: http://127.0.0.1:8083
            - url: http://127.0.0.1:8084

To keep a session open with the same server, the client would then need to specify the two levels within the cookie for each request, e.g. with curl:

  curl -b "lvl1=whoami1; lvl2=http://127.0.0.1:8081" http://localhost:8000

Health Check

Configure health check to remove unhealthy servers from the load balancing rotation. Traefik will consider your servers healthy as long as they return 2xx or 3xx status codes to the health check requests (carried out every interval).

Below are the available options for the health check mechanism:

  • path is appended to the server URL to set the health check endpoint.
  • scheme, if defined, replaces the server URL scheme for the health check endpoint.
  • hostname, if defined, sets the Host header of the health check request.
  • port, if defined, replaces the server URL port for the health check endpoint.
  • interval defines the frequency of the health check calls.
  • timeout defines the maximum duration Traefik will wait for a health check request before considering the server failed (unhealthy).
  • headers defines custom headers to be sent to the health check endpoint.
  • followRedirects defines whether redirects should be followed during the health check calls (default: true). A sketch combining the options that have no dedicated example follows this list.
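
Since hostname and followRedirects have no dedicated example below, here is a combined file-provider sketch (the service name and hostname are illustrative):

  ## Dynamic configuration
  http:
    services:
      Service-1:
        loadBalancer:
          healthCheck:
            path: /health
            hostname: example.com
            followRedirects: false
            interval: "10s"
            timeout: "3s"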

Interval & Timeout Format

Interval and timeout are to be given in a format understood by time.ParseDuration (e.g. "10s", "1m30s", "500ms"). The interval must be greater than the timeout. If the configuration doesn’t reflect this, the interval will be set to timeout + 1 second.

Recovering Servers

Traefik keeps monitoring the health of unhealthy servers. If a server has recovered (it is returning 2xx or 3xx responses again), it will be added back to the load balancer rotation pool.

Health check in Kubernetes

The Traefik health check is not available for the kubernetesCRD and kubernetesIngress providers because Kubernetes already has a health check mechanism. Unhealthy pods will be removed by Kubernetes (see the liveness documentation).
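
For context, that health checking is declared on the Kubernetes side; a minimal (hypothetical) liveness probe on a container spec could look like the following, with path and port being illustrative values:

  # Snippet of a container spec: Kubernetes performs the health check itself
  livenessProbe:
    httpGet:
      path: /health
      port: 8080
    periodSeconds: 10
    timeoutSeconds: 3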

Custom Interval & Timeout — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.Service-1]
      [http.services.Service-1.loadBalancer.healthCheck]
        path = "/health"
        interval = "10s"
        timeout = "3s"

YAML

  ## Dynamic configuration
  http:
    services:
      Service-1:
        loadBalancer:
          healthCheck:
            path: /health
            interval: "10s"
            timeout: "3s"

Custom Port — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.Service-1]
      [http.services.Service-1.loadBalancer.healthCheck]
        path = "/health"
        port = 8080

YAML

  ## Dynamic configuration
  http:
    services:
      Service-1:
        loadBalancer:
          healthCheck:
            path: /health
            port: 8080

Custom Scheme — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.Service-1]
      [http.services.Service-1.loadBalancer.healthCheck]
        path = "/health"
        scheme = "http"

YAML

  ## Dynamic configuration
  http:
    services:
      Service-1:
        loadBalancer:
          healthCheck:
            path: /health
            scheme: http

Additional HTTP Headers — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.Service-1]
      [http.services.Service-1.loadBalancer.healthCheck]
        path = "/health"
        [http.services.Service-1.loadBalancer.healthCheck.headers]
          My-Custom-Header = "foo"
          My-Header = "bar"

YAML

  ## Dynamic configuration
  http:
    services:
      Service-1:
        loadBalancer:
          healthCheck:
            path: /health
            headers:
              My-Custom-Header: foo
              My-Header: bar

Pass Host Header

The passHostHeader option allows forwarding the client Host header to the server.

By default, passHostHeader is true.

Don’t forward the host header — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.Service01]
      [http.services.Service01.loadBalancer]
        passHostHeader = false

YAML

  ## Dynamic configuration
  http:
    services:
      Service01:
        loadBalancer:
          passHostHeader: false

ServersTransport

The serversTransport option allows referencing a ServersTransport configuration for the communication between Traefik and your servers.

Specify a transport — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.Service01]
      [http.services.Service01.loadBalancer]
        serversTransport = "mytransport"

YAML

  ## Dynamic configuration
  http:
    services:
      Service01:
        loadBalancer:
          serversTransport: mytransport

Info

If no serversTransport is specified, the default@internal will be used. The default@internal serversTransport is created from the static configuration.
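
As a sketch of what that could look like (assuming the static-configuration serversTransport section mirrors the per-transport options documented below), the defaults used by default@internal could be adjusted like this:

  ## Static configuration (e.g. traefik.yml)
  serversTransport:
    insecureSkipVerify: false
    maxIdleConnsPerHost: 7
    forwardingTimeouts:
      dialTimeout: "30s"
      idleConnTimeout: "90s"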

Response Forwarding

This section is about configuring how Traefik forwards the response from the backend server to the client.

Below are the available options for the Response Forwarding mechanism:

  • FlushInterval specifies the interval in between flushes to the client while copying the response body. It is a duration in milliseconds, defaulting to 100. A negative value means to flush immediately after each write to the client. The FlushInterval is ignored when ReverseProxy recognizes a response as a streaming response; for such responses, writes are flushed to the client immediately.

Using a custom FlushInterval — Using the File Provider

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.Service-1]
      [http.services.Service-1.loadBalancer.responseForwarding]
        flushInterval = "1s"

YAML

  ## Dynamic configuration
  http:
    services:
      Service-1:
        loadBalancer:
          responseForwarding:
            flushInterval: 1s

ServersTransport

ServersTransport allows configuring the transport between Traefik and your servers.

ServerName

Optional

serverName configures the server name that will be used for SNI.

File (TOML)

  ## Dynamic configuration
  [http.serversTransports.mytransport]
    serverName = "myhost"

File (YAML)

  ## Dynamic configuration
  http:
    serversTransports:
      mytransport:
        serverName: "myhost"

Kubernetes

  apiVersion: traefik.containo.us/v1alpha1
  kind: ServersTransport
  metadata:
    name: mytransport
    namespace: default
  spec:
    serverName: "test"

Certificates

Optional

certificates is the list of certificates (as file paths, or data bytes) that will be set as client certificates for mTLS.

File (TOML)

  ## Dynamic configuration
  [[http.serversTransports.mytransport.certificates]]
    certFile = "foo.crt"
    keyFile = "bar.crt"

File (YAML)

  ## Dynamic configuration
  http:
    serversTransports:
      mytransport:
        certificates:
          - certFile: foo.crt
            keyFile: bar.crt

Kubernetes

  apiVersion: traefik.containo.us/v1alpha1
  kind: ServersTransport
  metadata:
    name: mytransport
    namespace: default
  spec:
    certificatesSecrets:
      - mycert

  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: mycert
  data:
    tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0=
    tls.key: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0=

insecureSkipVerify

Optional

insecureSkipVerify disables SSL certificate verification.

File (TOML)

  ## Dynamic configuration
  [http.serversTransports.mytransport]
    insecureSkipVerify = true

File (YAML)

  ## Dynamic configuration
  http:
    serversTransports:
      mytransport:
        insecureSkipVerify: true

Kubernetes

  apiVersion: traefik.containo.us/v1alpha1
  kind: ServersTransport
  metadata:
    name: mytransport
    namespace: default
  spec:
    insecureSkipVerify: true

rootCAs

Optional

rootCAs is the list of certificates (as file paths, or data bytes) that will be set as Root Certificate Authorities when using a self-signed TLS certificate.

File (TOML)

  ## Dynamic configuration
  [http.serversTransports.mytransport]
    rootCAs = ["foo.crt", "bar.crt"]

File (YAML)

  ## Dynamic configuration
  http:
    serversTransports:
      mytransport:
        rootCAs:
          - foo.crt
          - bar.crt

Kubernetes

  apiVersion: traefik.containo.us/v1alpha1
  kind: ServersTransport
  metadata:
    name: mytransport
    namespace: default
  spec:
    rootCAsSecrets:
      - myca

  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: myca
  data:
    tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0=

maxIdleConnsPerHost

Optional, Default=2

If non-zero, maxIdleConnsPerHost controls the maximum idle (keep-alive) connections to keep per-host.

File (TOML)

  ## Dynamic configuration
  [http.serversTransports.mytransport]
    maxIdleConnsPerHost = 7

File (YAML)

  ## Dynamic configuration
  http:
    serversTransports:
      mytransport:
        maxIdleConnsPerHost: 7

Kubernetes

  apiVersion: traefik.containo.us/v1alpha1
  kind: ServersTransport
  metadata:
    name: mytransport
    namespace: default
  spec:
    maxIdleConnsPerHost: 7

forwardingTimeouts

forwardingTimeouts covers a number of timeouts that are relevant when forwarding requests to the backend servers.

forwardingTimeouts.dialTimeout

Optional, Default=30s

dialTimeout is the maximum duration allowed for a connection to a backend server to be established. Zero means no timeout.

File (TOML)

  ## Dynamic configuration
  [http.serversTransports.mytransport.forwardingTimeouts]
    dialTimeout = "1s"

File (YAML)

  ## Dynamic configuration
  http:
    serversTransports:
      mytransport:
        forwardingTimeouts:
          dialTimeout: "1s"

Kubernetes

  apiVersion: traefik.containo.us/v1alpha1
  kind: ServersTransport
  metadata:
    name: mytransport
    namespace: default
  spec:
    forwardingTimeouts:
      dialTimeout: "1s"

forwardingTimeouts.responseHeaderTimeout

Optional, Default=0s

responseHeaderTimeout, if non-zero, specifies the amount of time to wait for a server’s response headers after fully writing the request (including its body, if any). This time does not include the time to read the response body. Zero means no timeout.

File (TOML)

  ## Dynamic configuration
  [http.serversTransports.mytransport.forwardingTimeouts]
    responseHeaderTimeout = "1s"

File (YAML)

  ## Dynamic configuration
  http:
    serversTransports:
      mytransport:
        forwardingTimeouts:
          responseHeaderTimeout: "1s"

Kubernetes

  apiVersion: traefik.containo.us/v1alpha1
  kind: ServersTransport
  metadata:
    name: mytransport
    namespace: default
  spec:
    forwardingTimeouts:
      responseHeaderTimeout: "1s"

forwardingTimeouts.idleConnTimeout

Optional, Default=90s

idleConnTimeout is the maximum amount of time an idle (keep-alive) connection will remain idle before closing itself. Zero means no limit.

File (TOML)

  ## Dynamic configuration
  [http.serversTransports.mytransport.forwardingTimeouts]
    idleConnTimeout = "1s"

File (YAML)

  ## Dynamic configuration
  http:
    serversTransports:
      mytransport:
        forwardingTimeouts:
          idleConnTimeout: "1s"

Kubernetes

  apiVersion: traefik.containo.us/v1alpha1
  kind: ServersTransport
  metadata:
    name: mytransport
    namespace: default
  spec:
    forwardingTimeouts:
      idleConnTimeout: "1s"

Weighted Round Robin (service)

The WRR is able to load balance the requests between multiple services based on weights.

This strategy is only available to load balance between services and not between servers.

Supported Providers

This strategy can be defined currently with the File or IngressRoute providers.

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.app]
      [[http.services.app.weighted.services]]
        name = "appv1"
        weight = 3
      [[http.services.app.weighted.services]]
        name = "appv2"
        weight = 1

    [http.services.appv1]
      [http.services.appv1.loadBalancer]
        [[http.services.appv1.loadBalancer.servers]]
          url = "http://private-ip-server-1/"

    [http.services.appv2]
      [http.services.appv2.loadBalancer]
        [[http.services.appv2.loadBalancer.servers]]
          url = "http://private-ip-server-2/"

YAML

  ## Dynamic configuration
  http:
    services:
      app:
        weighted:
          services:
            - name: appv1
              weight: 3
            - name: appv2
              weight: 1

      appv1:
        loadBalancer:
          servers:
            - url: "http://private-ip-server-1/"

      appv2:
        loadBalancer:
          servers:
            - url: "http://private-ip-server-2/"

Mirroring (service)

Mirroring is able to mirror requests sent to a service to other services. Please note that by default the whole request is buffered in memory while it is being mirrored. See the maxBodySize option in the example below for how to modify this behaviour.

Supported Providers

This strategy can be defined currently with the File or IngressRoute providers.

TOML

  ## Dynamic configuration
  [http.services]
    [http.services.mirrored-api]
      [http.services.mirrored-api.mirroring]
        service = "appv1"
        # maxBodySize is the maximum size in bytes allowed for the body of the request.
        # If the body is larger, the request is not mirrored.
        # Default value is -1, which means unlimited size.
        maxBodySize = 1024
      [[http.services.mirrored-api.mirroring.mirrors]]
        name = "appv2"
        percent = 10

    [http.services.appv1]
      [http.services.appv1.loadBalancer]
        [[http.services.appv1.loadBalancer.servers]]
          url = "http://private-ip-server-1/"

    [http.services.appv2]
      [http.services.appv2.loadBalancer]
        [[http.services.appv2.loadBalancer.servers]]
          url = "http://private-ip-server-2/"

YAML

  ## Dynamic configuration
  http:
    services:
      mirrored-api:
        mirroring:
          service: appv1
          # maxBodySize is the maximum size allowed for the body of the request.
          # If the body is larger, the request is not mirrored.
          # Default value is -1, which means unlimited size.
          maxBodySize: 1024
          mirrors:
            - name: appv2
              percent: 10

      appv1:
        loadBalancer:
          servers:
            - url: "http://private-ip-server-1/"

      appv2:
        loadBalancer:
          servers:
            - url: "http://private-ip-server-2/"

Configuring TCP Services

General

Each of the fields of the service section represents a kind of service. This means that, for each specified service, exactly one of the fields has to be enabled to define what kind of service is created. Currently, the two available kinds are LoadBalancer and Weighted.

Servers Load Balancer

The servers load balancer is in charge of balancing the requests between the servers of the same service.

Declaring a Service with Two Servers — Using the File Provider

TOML

  ## Dynamic configuration
  [tcp.services]
    [tcp.services.my-service.loadBalancer]
      [[tcp.services.my-service.loadBalancer.servers]]
        address = "xx.xx.xx.xx:xx"
      [[tcp.services.my-service.loadBalancer.servers]]
        address = "xx.xx.xx.xx:xx"

YAML

  ## Dynamic configuration
  tcp:
    services:
      my-service:
        loadBalancer:
          servers:
            - address: "xx.xx.xx.xx:xx"
            - address: "xx.xx.xx.xx:xx"

Servers

Servers declare a single instance of your program. The address option (IP:Port) points to a specific instance.

A Service with One Server — Using the File Provider

TOML

  ## Dynamic configuration
  [tcp.services]
    [tcp.services.my-service.loadBalancer]
      [[tcp.services.my-service.loadBalancer.servers]]
        address = "xx.xx.xx.xx:xx"

YAML

  ## Dynamic configuration
  tcp:
    services:
      my-service:
        loadBalancer:
          servers:
            - address: "xx.xx.xx.xx:xx"

PROXY Protocol

Traefik supports PROXY Protocol version 1 and 2 on TCP Services. It can be enabled by setting proxyProtocol on the load balancer.

Below are the available options for the PROXY protocol:

  • version specifies the version of the protocol to be used. Either 1 or 2.

Version

Specifying a version is optional. By default, version 2 is used.

A Service with Proxy Protocol v1 — Using the File Provider

TOML

  ## Dynamic configuration
  [tcp.services]
    [tcp.services.my-service.loadBalancer]
      [tcp.services.my-service.loadBalancer.proxyProtocol]
        version = 1

YAML

  ## Dynamic configuration
  tcp:
    services:
      my-service:
        loadBalancer:
          proxyProtocol:
            version: 1

Termination Delay

As a proxy between a client and a server, it can happen that either side (e.g. client side) decides to terminate its writing capability on the connection (i.e. issuance of a FIN packet). The proxy needs to propagate that intent to the other side, and so when that happens, it also does the same on its connection with the other side (e.g. backend side).

However, if for some reason (bad implementation, or malicious intent) the other side does not eventually do the same as well, the connection would stay half-open, which would lock resources for however long.

To that end, as soon as the proxy enters this termination sequence, it sets a deadline on fully terminating the connections on both sides.

The termination delay controls that deadline. It is a duration in milliseconds, defaulting to 100. A negative value means an infinite deadline (i.e. the connection is never fully terminated by the proxy itself).

A Service with a termination delay — Using the File Provider

TOML

  ## Dynamic configuration
  [tcp.services]
    [tcp.services.my-service.loadBalancer]
      terminationDelay = 200

YAML

  ## Dynamic configuration
  tcp:
    services:
      my-service:
        loadBalancer:
          terminationDelay: 200

Weighted Round Robin

The Weighted Round Robin (alias WRR) load-balancer of services is in charge of balancing the requests between multiple services based on provided weights.

This strategy is only available to load balance between services and not between servers.

Supported Providers

This strategy can be defined currently with the File or IngressRoute providers.

TOML

  ## Dynamic configuration
  [tcp.services]
    [tcp.services.app]
      [[tcp.services.app.weighted.services]]
        name = "appv1"
        weight = 3
      [[tcp.services.app.weighted.services]]
        name = "appv2"
        weight = 1

    [tcp.services.appv1]
      [tcp.services.appv1.loadBalancer]
        [[tcp.services.appv1.loadBalancer.servers]]
          address = "private-ip-server-1:8080"

    [tcp.services.appv2]
      [tcp.services.appv2.loadBalancer]
        [[tcp.services.appv2.loadBalancer.servers]]
          address = "private-ip-server-2:8080"

YAML

  ## Dynamic configuration
  tcp:
    services:
      app:
        weighted:
          services:
            - name: appv1
              weight: 3
            - name: appv2
              weight: 1

      appv1:
        loadBalancer:
          servers:
            - address: "xxx.xxx.xxx.xxx:8080"

      appv2:
        loadBalancer:
          servers:
            - address: "xxx.xxx.xxx.xxx:8080"

Configuring UDP Services

General

Each of the fields of the service section represents a kind of service. This means that, for each specified service, exactly one of the fields has to be enabled to define what kind of service is created. Currently, the two available kinds are LoadBalancer and Weighted.

Servers Load Balancer

The servers load balancer is in charge of balancing the requests between the servers of the same service.

Declaring a Service with Two Servers — Using the File Provider

TOML

  ## Dynamic configuration
  [udp.services]
    [udp.services.my-service.loadBalancer]
      [[udp.services.my-service.loadBalancer.servers]]
        address = "xx.xx.xx.xx:xx"
      [[udp.services.my-service.loadBalancer.servers]]
        address = "xx.xx.xx.xx:xx"

YAML

  ## Dynamic configuration
  udp:
    services:
      my-service:
        loadBalancer:
          servers:
            - address: "xx.xx.xx.xx:xx"
            - address: "xx.xx.xx.xx:xx"

Servers

The Servers field defines all the servers that are part of this load-balancing group, i.e. each address (IP:Port) on which an instance of the service’s program is deployed.

A Service with One Server — Using the File Provider

TOML

  ## Dynamic configuration
  [udp.services]
    [udp.services.my-service.loadBalancer]
      [[udp.services.my-service.loadBalancer.servers]]
        address = "xx.xx.xx.xx:xx"

YAML

  ## Dynamic configuration
  udp:
    services:
      my-service:
        loadBalancer:
          servers:
            - address: "xx.xx.xx.xx:xx"

Weighted Round Robin

The Weighted Round Robin (alias WRR) load-balancer of services is in charge of balancing the requests between multiple services based on provided weights.

This strategy is only available to load balance between services and not between servers.

This strategy can currently only be defined with the File provider.

TOML

  ## Dynamic configuration
  [udp.services]
    [udp.services.app]
      [[udp.services.app.weighted.services]]
        name = "appv1"
        weight = 3
      [[udp.services.app.weighted.services]]
        name = "appv2"
        weight = 1

    [udp.services.appv1]
      [udp.services.appv1.loadBalancer]
        [[udp.services.appv1.loadBalancer.servers]]
          address = "private-ip-server-1:8080"

    [udp.services.appv2]
      [udp.services.appv2.loadBalancer]
        [[udp.services.appv2.loadBalancer.servers]]
          address = "private-ip-server-2:8080"

YAML

  ## Dynamic configuration
  udp:
    services:
      app:
        weighted:
          services:
            - name: appv1
              weight: 3
            - name: appv2
              weight: 1

      appv1:
        loadBalancer:
          servers:
            - address: "xxx.xxx.xxx.xxx:8080"

      appv2:
        loadBalancer:
          servers:
            - address: "xxx.xxx.xxx.xxx:8080"