Deploying a k0s cluster using k0sctl

k0sctl is a command-line tool for bootstrapping and managing k0s clusters. Installation instructions can be found in the k0sctl GitHub repository.

k0sctl connects to the provided hosts using SSH and gathers information about them. Based on its findings, it configures each host and installs the k0s binary.
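Before running k0sctl, it can be useful to confirm that each host is reachable over SSH with the key you intend to reference in the configuration. A minimal sanity check, assuming the example addresses and key path used later in this guide, might look like this:

  $ ssh -i ~/.ssh/id_rsa root@10.0.0.1 'uname -a'
  $ ssh -i ~/.ssh/id_rsa root@10.0.0.2 'uname -a'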

Using k0sctl

First create a k0sctl configuration file:

  $ k0sctl init > k0sctl.yaml

A k0sctl.yaml file will be created in the current directory:

  apiVersion: k0sctl.k0sproject.io/v1beta1
  kind: Cluster
  metadata:
    name: k0s-cluster
  spec:
    hosts:
    - role: controller
      ssh:
        address: 10.0.0.1 # replace with the controller's IP address
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: worker
      ssh:
        address: 10.0.0.2 # replace with the worker's IP address
        user: root
        keyPath: ~/.ssh/id_rsa

The full k0sctl configuration specification can be found in the k0sctl documentation.
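As an illustration of that specification, a configuration aiming for a highly available control plane simply lists more controller hosts under spec.hosts. The sketch below uses placeholder addresses and mirrors the structure of the generated file above:

  spec:
    hosts:
    - role: controller
      ssh:
        address: 10.0.0.1 # placeholder controller address
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: controller
      ssh:
        address: 10.0.0.2 # placeholder controller address
        user: root
        keyPath: ~/.ssh/id_rsa
    - role: worker
      ssh:
        address: 10.0.0.3 # placeholder worker address
        user: root
        keyPath: ~/.ssh/id_rsa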

The next step is to run k0sctl apply to perform the cluster deployment:

  $ k0sctl apply --config k0sctl.yaml
  ⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
  ⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███ ███ ███
  ⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███ ███ ███
  ⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███ ███ ███
  ⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████ ███ ██████████
  INFO k0sctl 0.0.0 Copyright 2021, Mirantis Inc.
  INFO Anonymized telemetry will be sent to Mirantis.
  INFO By continuing to use k0sctl you agree to these terms:
  INFO https://k0sproject.io/licenses/eula
  INFO ==> Running phase: Connect to hosts
  INFO [ssh] 10.0.0.1:22: connected
  INFO [ssh] 10.0.0.2:22: connected
  INFO ==> Running phase: Detect host operating systems
  INFO [ssh] 10.0.0.1:22: is running Ubuntu 20.10
  INFO [ssh] 10.0.0.2:22: is running Ubuntu 20.10
  INFO ==> Running phase: Prepare hosts
  INFO [ssh] 10.0.0.1:22: installing kubectl
  INFO ==> Running phase: Gather host facts
  INFO [ssh] 10.0.0.1:22: discovered 10.12.18.133 as private address
  INFO ==> Running phase: Validate hosts
  INFO ==> Running phase: Gather k0s facts
  INFO ==> Running phase: Download K0s on the hosts
  INFO [ssh] 10.0.0.2:22: downloading k0s 0.11.0
  INFO [ssh] 10.0.0.1:22: downloading k0s 0.11.0
  INFO ==> Running phase: Configure K0s
  WARN [ssh] 10.0.0.1:22: generating default configuration
  INFO [ssh] 10.0.0.1:22: validating configuration
  INFO [ssh] 10.0.0.1:22: configuration was changed
  INFO ==> Running phase: Initialize K0s Cluster
  INFO [ssh] 10.0.0.1:22: installing k0s controller
  INFO [ssh] 10.0.0.1:22: waiting for the k0s service to start
  INFO [ssh] 10.0.0.1:22: waiting for kubernetes api to respond
  INFO ==> Running phase: Install workers
  INFO [ssh] 10.0.0.1:22: generating token
  INFO [ssh] 10.0.0.2:22: writing join token
  INFO [ssh] 10.0.0.2:22: installing k0s worker
  INFO [ssh] 10.0.0.2:22: starting service
  INFO [ssh] 10.0.0.2:22: waiting for node to become ready
  INFO ==> Running phase: Disconnect from hosts
  INFO ==> Finished in 2m2s
  INFO k0s cluster version 0.11.0 is now installed
  INFO Tip: To access the cluster you can now fetch the admin kubeconfig using:
  INFO k0sctl kubeconfig

And — presto! Your k0s cluster is up and running.

Get kubeconfig:

  $ k0sctl kubeconfig > kubeconfig
  $ kubectl get pods --kubeconfig kubeconfig -A
  NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
  kube-system   calico-kube-controllers-5f6546844f-w8x27   1/1     Running   0          3m50s
  kube-system   calico-node-vd7lx                          1/1     Running   0          3m44s
  kube-system   coredns-5c98d7d4d8-tmrwv                   1/1     Running   0          4m10s
  kube-system   konnectivity-agent-d9xv2                   1/1     Running   0          3m31s
  kube-system   kube-proxy-xp9r9                           1/1     Running   0          4m4s
  kube-system   metrics-server-6fbcd86f7b-5frtn            1/1     Running   0          3m51s
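
If you prefer not to pass --kubeconfig on every invocation, you can point kubectl at the fetched file through the standard KUBECONFIG environment variable instead:

  $ export KUBECONFIG=$PWD/kubeconfig
  $ kubectl get nodes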

Upgrade a k0s cluster using k0sctl

There is no dedicated upgrade sub-command in k0sctl. The configuration file describes the desired state of the cluster, and when it is passed to k0sctl apply, the tool discovers the current state and does whatever is needed to bring the cluster to the desired state, for example by performing an upgrade.

K0sctl cluster upgrade process

The following steps will be performed during a k0sctl cluster upgrade:

  1. Upgrade each controller one by one; as long as multiple controllers are configured, there is no downtime.
  2. Upgrade workers in batches; 10% of the worker nodes are upgraded at a time.
  3. Each worker is first drained, allowing its workload to move to other nodes before the worker node components are upgraded.
  4. The process continues once the upgraded nodes are back in the "Ready" state.
  5. Draining can be skipped with the --no-drain option (see the example after this list).
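
For example, assuming the flag is passed to k0sctl apply together with the configuration file created earlier, skipping the drain step might look like this:

  $ k0sctl apply --config k0sctl.yaml --no-drain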

The desired cluster version can be configured in the k0sctl configuration by setting the value of spec.k0s.version:

  spec:
    k0s:
      version: 0.11.0

When a version has not been specified, k0sctl will check online for the latest version and default to using that.

  $ k0sctl apply
  ...
  ...
  INFO[0001] ==> Running phase: Upgrade controllers
  INFO[0001] [ssh] 10.0.0.23:22: starting upgrade
  INFO[0001] [ssh] 10.0.0.23:22: Running with legacy service name, migrating...
  INFO[0011] [ssh] 10.0.0.23:22: waiting for the k0s service to start
  INFO[0016] ==> Running phase: Upgrade workers
  INFO[0016] Upgrading 1 workers in parallel
  INFO[0016] [ssh] 10.0.0.17:22: upgrade starting
  INFO[0027] [ssh] 10.0.0.17:22: waiting for node to become ready again
  INFO[0027] [ssh] 10.0.0.17:22: upgrade successful
  INFO[0027] ==> Running phase: Disconnect from hosts
  INFO[0027] ==> Finished in 27s
  INFO[0027] k0s cluster version 0.11.0 is now installed
  INFO[0027] Tip: To access the cluster you can now fetch the admin kubeconfig using:
  INFO[0027] k0sctl kubeconfig
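
Once the run finishes, you can verify the upgrade by checking the node versions with the kubeconfig fetched earlier; the VERSION column reported by kubectl should reflect the upgraded release:

  $ kubectl get nodes --kubeconfig kubeconfig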

Known limitations

  • k0sctl does not perform any discovery of hosts; it only operates on the hosts listed in the provided configuration
  • k0sctl can currently only add nodes to the cluster; it cannot remove existing ones