Node-local load balancing

Note: This feature is experimental! Expect instabilities and/or breaking changes.

For clusters that don’t have an externally managed load balancer for the k0s control plane, there is another option to get a highly available control plane, at least from within the cluster. K0s calls this “node-local load balancing”. In contrast to an externally managed load balancer, node-local load balancing takes place exclusively on the worker nodes. It does not contribute to making the control plane highly available to the outside world (e.g. humans interacting with the cluster using management tools such as Lens or kubectl), but rather makes the cluster itself internally resilient to controller node outages.

Technical functionality

The k0s worker process manages a load balancer on each worker node’s loopback interface and configures the relevant components to use that load balancer. This allows requests from worker components to the control plane to be distributed among all currently available controller nodes, rather than being directed to the controller node that was used to join a particular worker into the cluster. This improves the reliability and fault tolerance of the cluster in case a controller node becomes unhealthy.

Envoy is the only load balancer that is supported so far. Please note that Envoy is not available on ARMv7, so node-local load balancing is currently unavailable on that platform.
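
Conceptually, the load balancer on each worker acts as a plain TCP proxy that listens on the loopback interface and forwards connections to all known controllers. The following hand-written Envoy configuration is only a sketch to illustrate that idea: it is not the configuration that k0s actually generates, and the listener port (7443), the listener and cluster names, and the controller addresses (taken from the full example further below) are assumptions made for this example.

  # Illustrative sketch only: not the Envoy configuration generated by k0s.
  # Listener port, names, and controller addresses are assumptions for this example.
  static_resources:
    listeners:
      - name: kube_apiserver_loopback          # hypothetical listener name
        address:
          socket_address:
            address: 127.0.0.1                 # bound to the worker's loopback interface
            port_value: 7443                   # hypothetical local port
        filter_chains:
          - filters:
              - name: envoy.filters.network.tcp_proxy
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                  stat_prefix: kube_apiserver
                  cluster: kube_apiserver
    clusters:
      - name: kube_apiserver
        connect_timeout: 5s
        type: STATIC
        lb_policy: ROUND_ROBIN                 # spread connections across all controllers
        load_assignment:
          cluster_name: kube_apiserver
          endpoints:
            - lb_endpoints:
                - endpoint:
                    address:
                      socket_address: { address: 10.81.146.254, port_value: 6443 }
                - endpoint:
                    address:
                      socket_address: { address: 10.81.146.184, port_value: 6443 }
                - endpoint:
                    address:
                      socket_address: { address: 10.81.146.113, port_value: 6443 }

Worker components such as the kubelet are then pointed at the loopback address instead of a single controller’s address, so the failure of one controller only removes one upstream from the proxy instead of cutting off the worker entirely.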

Enabling in a cluster

In order to use node-local load balancing, the cluster needs to comply with the following:

  • The cluster doesn’t use an externally managed load balancer, i.e. the cluster configuration doesn’t specify a non-empty spec.api.externalAddress (see the example after this list).
  • The cluster doesn’t use tunneled networking mode, i.e. the cluster configuration doesn’t specify spec.api.tunneledNetworkingMode as true.
  • K0s isn’t running as a single node, i.e. it isn’t started using the --single flag.
  • The cluster should have multiple controller nodes. Node-local load balancing also works with a single controller node, but is only useful in conjunction with a highly available control plane.
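
For example, a cluster configuration like the following would rule out node-local load balancing, because it declares an externally managed load balancer via spec.api.externalAddress (the address shown is just a placeholder for illustration):

  # Incompatible with node-local load balancing:
  # spec.api.externalAddress declares an externally managed load balancer.
  spec:
    api:
      externalAddress: 192.0.2.10   # placeholder address of an external load balancer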

Add the following to the cluster configuration (k0s.yaml):

  spec:
    network:
      nodeLocalLoadBalancing:
        enabled: true
        type: EnvoyProxy

Alternatively, if using k0sctl, add the following to the k0sctl configuration (k0sctl.yaml):

  spec:
    k0s:
      config:
        spec:
          network:
            nodeLocalLoadBalancing:
              enabled: true
              type: EnvoyProxy

All newly added worker nodes will then use node-local load balancing. The k0s worker process on worker nodes that are already running must be restarted for the new configuration to take effect.
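
For instance, if k0s has been installed as a systemd service via k0s install worker, the restart on each existing worker would look roughly like this (the service name k0sworker is an assumption based on the default installation and may differ depending on how k0s was set up):

  # Assumes the default "k0sworker" systemd unit created by "k0s install worker".
  sudo systemctl restart k0sworker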

Full example using k0sctl

The following example shows a full k0sctl configuration file featuring three controllers and two workers with node-local load balancing enabled:

  apiVersion: k0sctl.k0sproject.io/v1beta1
  kind: Cluster
  metadata:
    name: k0s-cluster
  spec:
    k0s:
      version: v1.26.8+k0s.0
      config:
        spec:
          network:
            nodeLocalLoadBalancing:
              enabled: true
              type: EnvoyProxy
    hosts:
      - role: controller
        ssh:
          address: 10.81.146.254
          keyPath: k0s-ssh-private-key.pem
          port: 22
          user: k0s
      - role: controller
        ssh:
          address: 10.81.146.184
          keyPath: k0s-ssh-private-key.pem
          port: 22
          user: k0s
      - role: controller
        ssh:
          address: 10.81.146.113
          keyPath: k0s-ssh-private-key.pem
          port: 22
          user: k0s
      - role: worker
        ssh:
          address: 10.81.146.198
          keyPath: k0s-ssh-private-key.pem
          port: 22
          user: k0s
      - role: worker
        ssh:
          address: 10.81.146.51
          keyPath: k0s-ssh-private-key.pem
          port: 22
          user: k0s

Save the above configuration into a file called k0sctl.yaml and apply it in order to bootstrap the cluster:

  $ k0sctl apply
  ⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
  ⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███ ███ ███
  ⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███ ███ ███
  ⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███ ███ ███
  ⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████ ███ ██████████
  k0sctl 0.15.0 Copyright 2022, k0sctl authors.
  By continuing to use k0sctl you agree to these terms:
  https://k0sproject.io/licenses/eula
  level=info msg="==> Running phase: Connect to hosts"
  level=info msg="[ssh] 10.81.146.254:22: connected"
  level=info msg="[ssh] 10.81.146.184:22: connected"
  level=info msg="[ssh] 10.81.146.113:22: connected"
  level=info msg="[ssh] 10.81.146.51:22: connected"
  level=info msg="[ssh] 10.81.146.198:22: connected"
  level=info msg="==> Running phase: Detect host operating systems"
  level=info msg="[ssh] 10.81.146.254:22: is running Alpine Linux v3.17"
  level=info msg="[ssh] 10.81.146.113:22: is running Alpine Linux v3.17"
  level=info msg="[ssh] 10.81.146.184:22: is running Alpine Linux v3.17"
  level=info msg="[ssh] 10.81.146.198:22: is running Alpine Linux v3.17"
  level=info msg="[ssh] 10.81.146.51:22: is running Alpine Linux v3.17"
  level=info msg="==> Running phase: Acquire exclusive host lock"
  level=info msg="==> Running phase: Prepare hosts"
  level=info msg="[ssh] 10.81.146.113:22: installing packages (curl)"
  level=info msg="[ssh] 10.81.146.198:22: installing packages (curl, iptables)"
  level=info msg="[ssh] 10.81.146.254:22: installing packages (curl)"
  level=info msg="[ssh] 10.81.146.51:22: installing packages (curl, iptables)"
  level=info msg="[ssh] 10.81.146.184:22: installing packages (curl)"
  level=info msg="==> Running phase: Gather host facts"
  level=info msg="[ssh] 10.81.146.184:22: using k0s-controller-1 as hostname"
  level=info msg="[ssh] 10.81.146.51:22: using k0s-worker-1 as hostname"
  level=info msg="[ssh] 10.81.146.198:22: using k0s-worker-0 as hostname"
  level=info msg="[ssh] 10.81.146.113:22: using k0s-controller-2 as hostname"
  level=info msg="[ssh] 10.81.146.254:22: using k0s-controller-0 as hostname"
  level=info msg="[ssh] 10.81.146.184:22: discovered eth0 as private interface"
  level=info msg="[ssh] 10.81.146.51:22: discovered eth0 as private interface"
  level=info msg="[ssh] 10.81.146.198:22: discovered eth0 as private interface"
  level=info msg="[ssh] 10.81.146.113:22: discovered eth0 as private interface"
  level=info msg="[ssh] 10.81.146.254:22: discovered eth0 as private interface"
  level=info msg="==> Running phase: Download k0s binaries to local host"
  level=info msg="==> Running phase: Validate hosts"
  level=info msg="==> Running phase: Gather k0s facts"
  level=info msg="==> Running phase: Validate facts"
  level=info msg="==> Running phase: Upload k0s binaries to hosts"
  level=info msg="[ssh] 10.81.146.254:22: uploading k0s binary from /home/k0sctl/.cache/k0sctl/k0s/linux/amd64/k0s-v1.26.8+k0s.0"
  level=info msg="[ssh] 10.81.146.113:22: uploading k0s binary from /home/k0sctl/.cache/k0sctl/k0s/linux/amd64/k0s-v1.26.8+k0s.0"
  level=info msg="[ssh] 10.81.146.51:22: uploading k0s binary from /home/k0sctl/.cache/k0sctl/k0s/linux/amd64/k0s-v1.26.8+k0s.0"
  level=info msg="[ssh] 10.81.146.198:22: uploading k0s binary from /home/k0sctl/.cache/k0sctl/k0s/linux/amd64/k0s-v1.26.8+k0s.0"
  level=info msg="[ssh] 10.81.146.184:22: uploading k0s binary from /home/k0sctl/.cache/k0sctl/k0s/linux/amd64/k0s-v1.26.8+k0s.0"
  level=info msg="==> Running phase: Configure k0s"
  level=info msg="[ssh] 10.81.146.254:22: validating configuration"
  level=info msg="[ssh] 10.81.146.184:22: validating configuration"
  level=info msg="[ssh] 10.81.146.113:22: validating configuration"
  level=info msg="[ssh] 10.81.146.113:22: configuration was changed"
  level=info msg="[ssh] 10.81.146.184:22: configuration was changed"
  level=info msg="[ssh] 10.81.146.254:22: configuration was changed"
  level=info msg="==> Running phase: Initialize the k0s cluster"
  level=info msg="[ssh] 10.81.146.254:22: installing k0s controller"
  level=info msg="[ssh] 10.81.146.254:22: waiting for the k0s service to start"
  level=info msg="[ssh] 10.81.146.254:22: waiting for kubernetes api to respond"
  level=info msg="==> Running phase: Install controllers"
  level=info msg="[ssh] 10.81.146.254:22: generating token"
  level=info msg="[ssh] 10.81.146.184:22: writing join token"
  level=info msg="[ssh] 10.81.146.184:22: installing k0s controller"
  level=info msg="[ssh] 10.81.146.184:22: starting service"
  level=info msg="[ssh] 10.81.146.184:22: waiting for the k0s service to start"
  level=info msg="[ssh] 10.81.146.184:22: waiting for kubernetes api to respond"
  level=info msg="[ssh] 10.81.146.254:22: generating token"
  level=info msg="[ssh] 10.81.146.113:22: writing join token"
  level=info msg="[ssh] 10.81.146.113:22: installing k0s controller"
  level=info msg="[ssh] 10.81.146.113:22: starting service"
  level=info msg="[ssh] 10.81.146.113:22: waiting for the k0s service to start"
  level=info msg="[ssh] 10.81.146.113:22: waiting for kubernetes api to respond"
  level=info msg="==> Running phase: Install workers"
  level=info msg="[ssh] 10.81.146.51:22: validating api connection to https://10.81.146.254:6443"
  level=info msg="[ssh] 10.81.146.198:22: validating api connection to https://10.81.146.254:6443"
  level=info msg="[ssh] 10.81.146.254:22: generating token"
  level=info msg="[ssh] 10.81.146.198:22: writing join token"
  level=info msg="[ssh] 10.81.146.51:22: writing join token"
  level=info msg="[ssh] 10.81.146.198:22: installing k0s worker"
  level=info msg="[ssh] 10.81.146.51:22: installing k0s worker"
  level=info msg="[ssh] 10.81.146.198:22: starting service"
  level=info msg="[ssh] 10.81.146.51:22: starting service"
  level=info msg="[ssh] 10.81.146.198:22: waiting for node to become ready"
  level=info msg="[ssh] 10.81.146.51:22: waiting for node to become ready"
  level=info msg="==> Running phase: Release exclusive host lock"
  level=info msg="==> Running phase: Disconnect from hosts"
  level=info msg="==> Finished in 3m30s"
  level=info msg="k0s cluster version v1.26.8+k0s.0 is now installed"
  level=info msg="Tip: To access the cluster you can now fetch the admin kubeconfig using:"
  level=info msg=" k0sctl kubeconfig"

The cluster with its two worker nodes should be available by now. Set up the kubeconfig file in order to interact with it:

  k0sctl kubeconfig > k0s-kubeconfig
  export KUBECONFIG=$(pwd)/k0s-kubeconfig

The three controllers are available and provide API Server endpoints:

  $ kubectl -n kube-node-lease get \
      lease/k0s-ctrl-k0s-controller-0 \
      lease/k0s-ctrl-k0s-controller-1 \
      lease/k0s-ctrl-k0s-controller-2 \
      lease/k0s-endpoint-reconciler
  NAME                        HOLDER                                                             AGE
  k0s-ctrl-k0s-controller-0   9ec2b221890e5ed6f4cc70377bfe809fef5be541a2774dc5de81db7acb2786f1   2m37s
  k0s-ctrl-k0s-controller-1   fe45284924abb1bfce674e5a9aa8d647f17c81e53bbab17cf28288f13d5e8f97   2m18s
  k0s-ctrl-k0s-controller-2   5ab43278e63fc863b2a7f0fe1aab37316a6db40c5a3d8a17b9d35b5346e23b3d   2m9s
  k0s-endpoint-reconciler     9ec2b221890e5ed6f4cc70377bfe809fef5be541a2774dc5de81db7acb2786f1   2m37s
  $ kubectl -n default get endpoints
  NAME         ENDPOINTS                                                   AGE
  kubernetes   10.81.146.113:6443,10.81.146.184:6443,10.81.146.254:6443   2m49s

The first controller is the current k0s leader. The two worker nodes can be listed, too:

  $ kubectl get nodes -owide
  NAME           STATUS   ROLES    AGE     VERSION       INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
  k0s-worker-0   Ready    <none>   2m16s   v1.26.8+k0s   10.81.146.198   <none>        Alpine Linux v3.17   5.15.83-0-virt   containerd://1.6.18
  k0s-worker-1   Ready    <none>   2m15s   v1.26.8+k0s   10.81.146.51    <none>        Alpine Linux v3.17   5.15.83-0-virt   containerd://1.6.18

There is one node-local load balancer pod running for each worker node:

  $ kubectl -n kube-system get pod -owide -l app.kubernetes.io/managed-by=k0s,app.kubernetes.io/component=nllb
  NAME                READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
  nllb-k0s-worker-0   1/1     Running   0          81s   10.81.146.198   k0s-worker-0   <none>           <none>
  nllb-k0s-worker-1   1/1     Running   0          85s   10.81.146.51    k0s-worker-1   <none>           <none>
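
To check that worker components actually connect through the node-local load balancer, you can inspect the kubeconfig files that k0s renders for them on a worker node; their server URL should point at a loopback address rather than at a specific controller. The file path below is an assumption based on the default k0s data directory (/var/lib/k0s) and may differ on your installation:

  # Path is an assumption based on the default k0s data directory; adjust as needed.
  ssh -i k0s-ssh-private-key.pem k0s@10.81.146.198 \
    'sudo grep "server:" /var/lib/k0s/kubelet.conf'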

The cluster is using node-local load balancing and is able to tolerate the outage of one controller node. Shut down the first controller to simulate a failure condition:

  $ ssh -i k0s-ssh-private-key.pem k0s@10.81.146.254 'echo "Powering off $(hostname) ..." && sudo poweroff'
  Powering off k0s-controller-0 ...

Node-local load balancing provides high availability from within the cluster, not from the outside. The generated kubeconfig file lists the first controller’s IP as the Kubernetes API server address by default. As this controller is gone by now, a subsequent call to kubectl will fail:

  $ kubectl get nodes
  Unable to connect to the server: dial tcp 10.81.146.254:6443: connect: no route to host

Changing the server address in k0s-kubeconfig from the first controller to another one makes the cluster accessible again. Pick one of the other controller IP addresses and put that into the kubeconfig file. The addresses are listed both in k0sctl.yaml as well as in the output of kubectl -n default get endpoints above.

  $ ssh -i k0s-ssh-private-key.pem k0s@10.81.146.184 hostname
  k0s-controller-1
  $ sed -i s#https://10\\.81\\.146\\.254:6443#https://10.81.146.184:6443#g k0s-kubeconfig
  $ kubectl get nodes -owide
  NAME           STATUS   ROLES    AGE     VERSION       INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
  k0s-worker-0   Ready    <none>   3m35s   v1.26.8+k0s   10.81.146.198   <none>        Alpine Linux v3.17   5.15.83-0-virt   containerd://1.6.18
  k0s-worker-1   Ready    <none>   3m34s   v1.26.8+k0s   10.81.146.51    <none>        Alpine Linux v3.17   5.15.83-0-virt   containerd://1.6.18
  $ kubectl -n kube-system get pods -owide -l app.kubernetes.io/managed-by=k0s,app.kubernetes.io/component=nllb
  NAME                READY   STATUS    RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
  nllb-k0s-worker-0   1/1     Running   0          2m31s   10.81.146.198   k0s-worker-0   <none>           <none>
  nllb-k0s-worker-1   1/1     Running   0          2m35s   10.81.146.51    k0s-worker-1   <none>           <none>

The first controller is no longer active. Its IP address is not listed in the default/kubernetes Endpoints resource and its k0s controller lease is orphaned:

  $ kubectl -n default get endpoints
  NAME         ENDPOINTS                               AGE
  kubernetes   10.81.146.113:6443,10.81.146.184:6443   3m56s
  $ kubectl -n kube-node-lease get \
      lease/k0s-ctrl-k0s-controller-0 \
      lease/k0s-ctrl-k0s-controller-1 \
      lease/k0s-ctrl-k0s-controller-2 \
      lease/k0s-endpoint-reconciler
  NAME                        HOLDER                                                             AGE
  k0s-ctrl-k0s-controller-0                                                                      4m47s
  k0s-ctrl-k0s-controller-1   fe45284924abb1bfce674e5a9aa8d647f17c81e53bbab17cf28288f13d5e8f97   4m28s
  k0s-ctrl-k0s-controller-2   5ab43278e63fc863b2a7f0fe1aab37316a6db40c5a3d8a17b9d35b5346e23b3d   4m19s
  k0s-endpoint-reconciler     5ab43278e63fc863b2a7f0fe1aab37316a6db40c5a3d8a17b9d35b5346e23b3d   4m47s

Despite that controller being unavailable, the cluster remains operational. The third controller has become the new k0s leader. Workloads will run just fine:

  $ kubectl -n default run nginx --image=nginx
  pod/nginx created
  $ kubectl -n default get pods -owide
  NAME    READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
  nginx   1/1     Running   0          16s   10.244.0.5   k0s-worker-1   <none>           <none>
  $ kubectl -n default logs nginx
  /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
  /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
  /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
  10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
  10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
  /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
  /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
  /docker-entrypoint.sh: Configuration complete; ready for start up
  [notice] 1#1: using the "epoll" event method
  [notice] 1#1: nginx/1.23.3
  [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
  [notice] 1#1: OS: Linux 5.15.83-0-virt
  [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
  [notice] 1#1: start worker processes
  [notice] 1#1: start worker process 28