Creating a cluster with an Ansible Playbook

Using Ansible and the k0s-ansible playbook, you can install a multi-node Kubernetes cluster in a couple of minutes. Ansible is a popular infrastructure-as-code tool that helps you automate tasks to achieve the desired state in a system.

This guide shows how you can install k0s on local virtual machines. In this guide, the following tools are used:

  • Ansible, to run the playbook that provisions the cluster
  • multipass, to create the local virtual machines
  • kubectl, to interact with the finished cluster
  • jq, to pretty-print JSON output in one of the later steps

Before following this tutorial, you should have a general understanding of Ansible. A great way to start is the official Ansible User Guide.

Please note: k0s-ansible was created by k0s users. Please send your feedback, bug reports, and pull requests to github.com/movd/k0s-ansible.

Without further ado, let’s jump right in.

Download k0s-ansible

On your local machine clone the k0s-ansible repository:

    $ git clone https://github.com/movd/k0s-ansible.git
    $ cd k0s-ansible

Create virtual machines

For this tutorial, multipass was used to create the virtual machines. However, the playbook is not tied to multipass: it should also work with VMs created in other ways, or with Raspberry Pis.

Next, create a couple of virtual machines. For the automation to work, each instance must allow passwordless SSH access. To achieve this, we provision each instance with a cloud-init manifest that imports your current user's public SSH key into a user named k0s. For your convenience, a bash script that does just that is included; its invocation is shown after the sketch below.
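
The generated cloud-config looks roughly like this minimal sketch (assumed here for illustration; the script produces the real manifest, and the key shown is a placeholder):

    #cloud-config
    users:
      - name: k0s
        # passwordless sudo, so Ansible can escalate privileges
        sudo: ALL=(ALL) NOPASSWD:ALL
        shell: /bin/bash
        ssh_authorized_keys:
          - <your public SSH key>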

To create seven virtual machines, run the script with the desired instance count:

    $ ./tools/multipass_create_instances.sh 7
    Create cloud-init to import ssh key...
    [1/7] Creating instance k0s-1 with multipass...
    Launched: k0s-1
    [2/7] Creating instance k0s-2 with multipass...
    Launched: k0s-2
    [3/7] Creating instance k0s-3 with multipass...
    Launched: k0s-3
    [4/7] Creating instance k0s-4 with multipass...
    Launched: k0s-4
    [5/7] Creating instance k0s-5 with multipass...
    Launched: k0s-5
    [6/7] Creating instance k0s-6 with multipass...
    Launched: k0s-6
    [7/7] Creating instance k0s-7 with multipass...
    Launched: k0s-7
    Name    State    IPv4            Image
    k0s-1   Running  192.168.64.32   Ubuntu 20.04 LTS
    k0s-2   Running  192.168.64.33   Ubuntu 20.04 LTS
    k0s-3   Running  192.168.64.56   Ubuntu 20.04 LTS
    k0s-4   Running  192.168.64.57   Ubuntu 20.04 LTS
    k0s-5   Running  192.168.64.58   Ubuntu 20.04 LTS
    k0s-6   Running  192.168.64.60   Ubuntu 20.04 LTS
    k0s-7   Running  192.168.64.61   Ubuntu 20.04 LTS

Create Ansible inventory

After that, we create our inventory directory by copying the sample:

    $ cp -rfp inventory/sample inventory/multipass

Now we need to create our inventory. The virtual machines built earlier need to be assigned to the different host groups required by the playbook's logic:

  • initial_controller = must contain a single node that creates the worker and controller tokens needed by the other nodes.
  • controller = can contain nodes that, together with the host from initial_controller, form a highly available, isolated control plane.
  • worker = must contain at least one node so that we can deploy Kubernetes objects.

We could fill inventory/multipass/inventory.yml by hand with the metadata provided by multipass list, but since we are lazy and want to automate as much as possible, we can use the included Python script multipass_generate_inventory.py. To fill our inventory automatically, run:

    $ ./tools/multipass_generate_inventory.py
    Designate first three instances as control plane
    Created Ansible Inventory at: /Users/dev/k0s-ansible/tools/inventory.yml
    $ cp tools/inventory.yml inventory/multipass/inventory.yml

Now inventory/multipass/inventory.yml should look like this (of course, your IP addresses might differ):

    ---
    all:
      children:
        initial_controller:
          hosts:
            k0s-1:
        controller:
          hosts:
            k0s-2:
            k0s-3:
        worker:
          hosts:
            k0s-4:
            k0s-5:
            k0s-6:
            k0s-7:
      hosts:
        k0s-1:
          ansible_host: 192.168.64.32
        k0s-2:
          ansible_host: 192.168.64.33
        k0s-3:
          ansible_host: 192.168.64.56
        k0s-4:
          ansible_host: 192.168.64.57
        k0s-5:
          ansible_host: 192.168.64.58
        k0s-6:
          ansible_host: 192.168.64.60
        k0s-7:
          ansible_host: 192.168.64.61
      vars:
        ansible_user: k0s
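
Optionally, you can sanity-check the group structure with Ansible's built-in inventory viewer (output abridged here; the exact rendering may vary with your Ansible version):

    $ ansible-inventory -i inventory/multipass/inventory.yml --graph
    @all:
      |--@controller:
      |  |--k0s-2
      |  |--k0s-3
      |--@initial_controller:
      |  |--k0s-1
      |--@worker:
      |  |--k0s-4
      ...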

Test the connection to the virtual machines

To test the connection to your hosts, run:

    $ ansible all -i inventory/multipass/inventory.yml -m ping
    k0s-4 | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python3"
        },
        "changed": false,
        "ping": "pong"
    }
    ...

If all is green and successful, you can proceed.
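
If the ping fails for some host, it can help to verify passwordless SSH by hand before digging into Ansible itself. A quick check might look like this (using an IP address from the multipass listing above; adjust to your environment):

    $ ssh k0s@192.168.64.32 hostname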

Provision the cluster with Ansible

Finally, we can start provisioning the cluster. By applying the playbook, k0s gets downloaded and set up on all nodes, tokens get exchanged, and a kubeconfig gets written to your local deployment environment.

    $ ansible-playbook site.yml -i inventory/multipass/inventory.yml
    ...
    TASK [k0s/initial_controller : print kubeconfig command] *******************************************************
    Tuesday 22 December 2020  17:43:20 +0100 (0:00:00.257)       0:00:41.287 ******
    ok: [k0s-1] => {
        "msg": "To use Cluster: export KUBECONFIG=/Users/dev/k0s-ansible/inventory/multipass/artifacts/k0s-kubeconfig.yml"
    }
    ...
    PLAY RECAP *****************************************************************************************************
    k0s-1 : ok=21 changed=11 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
    k0s-2 : ok=10 changed=5 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
    k0s-3 : ok=10 changed=5 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
    k0s-4 : ok=9 changed=5 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
    k0s-5 : ok=9 changed=5 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
    k0s-6 : ok=9 changed=5 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
    k0s-7 : ok=9 changed=5 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
    Tuesday 22 December 2020  17:43:36 +0100 (0:00:01.204)       0:00:57.478 ******
    ===============================================================================
    prereq : Install apt packages -------------------------------------------------------------------------- 22.70s
    k0s/controller : Wait for k8s apiserver ----------------------------------------------------------------- 4.30s
    k0s/initial_controller : Create worker join token ------------------------------------------------------- 3.38s
    k0s/initial_controller : Wait for k8s apiserver --------------------------------------------------------- 3.36s
    download : Download k0s binary k0s-v0.9.0-rc1-amd64 ----------------------------------------------------- 3.11s
    Gathering Facts ----------------------------------------------------------------------------------------- 2.85s
    Gathering Facts ----------------------------------------------------------------------------------------- 1.95s
    prereq : Create k0s Directories ------------------------------------------------------------------------- 1.53s
    k0s/worker : Enable and check k0s service --------------------------------------------------------------- 1.20s
    prereq : Write the k0s config file ---------------------------------------------------------------------- 1.09s
    k0s/initial_controller : Enable and check k0s service --------------------------------------------------- 0.94s
    k0s/controller : Enable and check k0s service ----------------------------------------------------------- 0.73s
    Gathering Facts ----------------------------------------------------------------------------------------- 0.71s
    Gathering Facts ----------------------------------------------------------------------------------------- 0.66s
    Gathering Facts ----------------------------------------------------------------------------------------- 0.64s
    k0s/worker : Write the k0s token file on worker --------------------------------------------------------- 0.64s
    k0s/worker : Copy k0s service file ---------------------------------------------------------------------- 0.53s
    k0s/controller : Write the k0s token file on controller ------------------------------------------------- 0.41s
    k0s/controller : Copy k0s service file ------------------------------------------------------------------ 0.40s
    k0s/initial_controller : Copy k0s service file ---------------------------------------------------------- 0.36s

Use the cluster with kubectl

While the playbook ran, a kubeconfig was copied to your local machine. You can use it to access your new Kubernetes cluster:

    $ export KUBECONFIG=/Users/dev/k0s-ansible/inventory/multipass/artifacts/k0s-kubeconfig.yml
    $ kubectl cluster-info
    Kubernetes control plane is running at https://192.168.64.32:6443
    CoreDNS is running at https://192.168.64.32:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    Metrics-server is running at https://192.168.64.32:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
    $ kubectl get nodes -o wide
    NAME    STATUS     ROLES    AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
    k0s-4   Ready      <none>   21s   v1.20.1-k0s1   192.168.64.57   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   containerd://1.4.3
    k0s-5   Ready      <none>   21s   v1.20.1-k0s1   192.168.64.58   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   containerd://1.4.3
    k0s-6   NotReady   <none>   21s   v1.20.1-k0s1   192.168.64.60   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   containerd://1.4.3
    k0s-7   NotReady   <none>   21s   v1.20.1-k0s1   192.168.64.61   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   containerd://1.4.3

Of course, the three control plane nodes won't show up here, because the control plane is fully isolated: by default, k0s controllers don't run a kubelet and don't schedule workloads. You can check on the distributed etcd cluster by running this ad-hoc command (or by SSH'ing directly into a controller node):

    $ ansible k0s-1 -a "k0s etcd member-list -c /etc/k0s/k0s.yaml" -i inventory/multipass/inventory.yml | tail -1 | jq
    {
        "level": "info",
        "members": {
            "k0s-1": "https://192.168.64.32:2380",
            "k0s-2": "https://192.168.64.33:2380",
            "k0s-3": "https://192.168.64.56:2380"
        },
        "msg": "done",
        "time": "2020-12-23T00:21:22+01:00"
    }
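
Since the VMs were created with multipass, another way to reach a controller (instead of plain SSH) could be multipass's own shell command, then running the same query as root:

    $ multipass shell k0s-1
    ubuntu@k0s-1:~$ sudo k0s etcd member-list -c /etc/k0s/k0s.yaml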

After a while, all worker nodes become Ready. Your cluster is now ready to be used. We can test it by creating a simple nginx deployment.

    $ kubectl create deployment nginx --image=gcr.io/google-containers/nginx --replicas=5
    deployment.apps/nginx created
    $ kubectl expose deployment nginx --target-port=80 --port=8100
    service/nginx exposed
    $ kubectl run hello-k0s --image=quay.io/prometheus/busybox --rm -it --restart=Never --command -- wget -qO- nginx:8100
    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx on Debian!</title>
    ...
    pod "hello-k0s" deleted
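
When you are done experimenting, you can optionally clean up the test resources again:

    $ kubectl delete service nginx
    $ kubectl delete deployment nginx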