# Creating a cluster with an Ansible Playbook

Ansible is a popular infrastructure-as-code tool that you can use to automate tasks to achieve the desired state in a system. With Ansible (and the k0s-ansible playbook) you can quickly install a multi-node Kubernetes cluster.

Note: Before using Ansible to create a cluster, you should have a general understanding of Ansible (refer to the official Ansible User Guide).

## Prerequisites

You will require the following tools to install k0s on local virtual machines:

| Tool | Detail |
|:----------|:-------|
| multipass | A lightweight VM manager that uses KVM on Linux, Hyper-V on Windows, and hypervisor.framework on macOS. Installation information |
| ansible | An infrastructure-as-code tool. Installation Guide |
| kubectl | Command-line tool for running commands against Kubernetes clusters. Kubernetes Install Tools |

## Create the cluster

  1. Download k0s-ansible

    Clone the k0s-ansible repository on your local machine:

    ```shell
    git clone https://github.com/movd/k0s-ansible.git
    cd k0s-ansible
    ```
  2. Create virtual machines

    Note: Though multipass is the VM manager in use here, the playbook has no hard dependency on it; any VM manager will do.

    Create a number of virtual machines. For the automation to work, each instance must have passwordless SSH access. To achieve this, provision each instance with a cloud-init manifest that imports your current user's public SSH key into a user `k0s` (refer to the bash script below).

    This creates 7 virtual machines:

    ```shell
    ./tools/multipass_create_instances.sh 7
    ```

    ```shell
    Create cloud-init to import ssh key...
    [1/7] Creating instance k0s-1 with multipass...
    Launched: k0s-1
    [2/7] Creating instance k0s-2 with multipass...
    Launched: k0s-2
    [3/7] Creating instance k0s-3 with multipass...
    Launched: k0s-3
    [4/7] Creating instance k0s-4 with multipass...
    Launched: k0s-4
    [5/7] Creating instance k0s-5 with multipass...
    Launched: k0s-5
    [6/7] Creating instance k0s-6 with multipass...
    Launched: k0s-6
    [7/7] Creating instance k0s-7 with multipass...
    Launched: k0s-7
    Name     State    IPv4            Image
    k0s-1    Running  192.168.64.32   Ubuntu 20.04 LTS
    k0s-2    Running  192.168.64.33   Ubuntu 20.04 LTS
    k0s-3    Running  192.168.64.56   Ubuntu 20.04 LTS
    k0s-4    Running  192.168.64.57   Ubuntu 20.04 LTS
    k0s-5    Running  192.168.64.58   Ubuntu 20.04 LTS
    k0s-6    Running  192.168.64.60   Ubuntu 20.04 LTS
    k0s-7    Running  192.168.64.61   Ubuntu 20.04 LTS
    ```
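
    The cloud-init manifest that `multipass_create_instances.sh` generates essentially needs to do two things: create the `k0s` user and authorize your public key for it. A minimal hand-written equivalent might look like the sketch below (the key value is a placeholder, and the script's actual manifest may differ in detail):

    ```yaml
    #cloud-config
    # Hypothetical sketch: create a passwordless-sudo user "k0s" and
    # authorize your local public SSH key for it.
    users:
      - name: k0s
        shell: /bin/bash
        sudo: ALL=(ALL) NOPASSWD:ALL
        ssh_authorized_keys:
          - ssh-ed25519 AAAA...  # replace with the contents of your public key file
    ```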
  3. Create Ansible inventory

    1. Copy the sample to create the inventory directory:

       ```shell
       cp -rfp inventory/sample inventory/multipass
       ```

    2. Create the inventory.

    Assign the virtual machines to the different host groups, as required by the playbook logic.

    | Host group | Detail |
    |:-----------|:-------|
    | initial_controller | Must contain a single node that creates the worker and controller tokens needed by the other nodes |
    | controller | Can contain nodes that, together with the host from initial_controller, form a highly available isolated control plane |
    | worker | Must contain at least one node, to allow for the deployment of Kubernetes objects |

    3. Fill in `inventory/multipass/inventory.yml`. This can be done by direct entry using the metadata provided by `multipass list`, or you can use the Python script `multipass_generate_inventory.py`:

       ```shell
       ./tools/multipass_generate_inventory.py
       ```

       ```shell
       Designate first three instances as control plane
       Created Ansible Inventory at: /Users/dev/k0s-ansible/tools/inventory.yml
       ```

       ```shell
       cp tools/inventory.yml inventory/multipass/inventory.yml
       ```

    Your inventory/multipass/inventory.yml should resemble the example below:

    ```yaml
    ---
    all:
      children:
        initial_controller:
          hosts:
            k0s-1:
        controller:
          hosts:
            k0s-2:
            k0s-3:
        worker:
          hosts:
            k0s-4:
            k0s-5:
            k0s-6:
            k0s-7:
      hosts:
        k0s-1:
          ansible_host: 192.168.64.32
        k0s-2:
          ansible_host: 192.168.64.33
        k0s-3:
          ansible_host: 192.168.64.56
        k0s-4:
          ansible_host: 192.168.64.57
        k0s-5:
          ansible_host: 192.168.64.58
        k0s-6:
          ansible_host: 192.168.64.60
        k0s-7:
          ansible_host: 192.168.64.61
      vars:
        ansible_user: k0s
    ```

  4. Test the virtual machine connections

    Run the following command to test the connection to your hosts:

    ```shell
    ansible -i inventory/multipass/inventory.yml all -m ping
    ```

    ```shell
    k0s-4 | SUCCESS => {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python3"
        },
        "changed": false,
        "ping": "pong"
    }
    ...
    ```

    If the test result indicates success, you can proceed.

  5. Provision the cluster with Ansible

    When you apply the playbook, k0s is downloaded and set up on all nodes, tokens are exchanged, and a kubeconfig is dumped to your local deployment environment.

    ```shell
    ansible-playbook site.yml -i inventory/multipass/inventory.yml
    ```

    ```shell
    TASK [k0s/initial_controller : print kubeconfig command] *******************************************************
    Tuesday 22 December 2020  17:43:20 +0100 (0:00:00.257)       0:00:41.287 ******
    ok: [k0s-1] => {
        "msg": "To use Cluster: export KUBECONFIG=/Users/dev/k0s-ansible/inventory/multipass/artifacts/k0s-kubeconfig.yml"
    }
    ...
    PLAY RECAP *****************************************************************************************************
    k0s-1  : ok=21  changed=11  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
    k0s-2  : ok=10  changed=5   unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
    k0s-3  : ok=10  changed=5   unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
    k0s-4  : ok=9   changed=5   unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
    k0s-5  : ok=9   changed=5   unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
    k0s-6  : ok=9   changed=5   unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
    k0s-7  : ok=9   changed=5   unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
    Tuesday 22 December 2020  17:43:36 +0100 (0:00:01.204)       0:00:57.478 ******
    ===============================================================================
    prereq : Install apt packages ------------------------------------------------------------------------- 22.70s
    k0s/controller : Wait for k8s apiserver ---------------------------------------------------------------- 4.30s
    k0s/initial_controller : Create worker join token ------------------------------------------------------ 3.38s
    k0s/initial_controller : Wait for k8s apiserver -------------------------------------------------------- 3.36s
    download : Download k0s binary k0s-v0.9.0-rc1-amd64 ---------------------------------------------------- 3.11s
    Gathering Facts ---------------------------------------------------------------------------------------- 2.85s
    Gathering Facts ---------------------------------------------------------------------------------------- 1.95s
    prereq : Create k0s Directories ------------------------------------------------------------------------ 1.53s
    k0s/worker : Enable and check k0s service -------------------------------------------------------------- 1.20s
    prereq : Write the k0s config file --------------------------------------------------------------------- 1.09s
    k0s/initial_controller : Enable and check k0s service -------------------------------------------------- 0.94s
    k0s/controller : Enable and check k0s service ---------------------------------------------------------- 0.73s
    Gathering Facts ---------------------------------------------------------------------------------------- 0.71s
    Gathering Facts ---------------------------------------------------------------------------------------- 0.66s
    Gathering Facts ---------------------------------------------------------------------------------------- 0.64s
    k0s/worker : Write the k0s token file on worker -------------------------------------------------------- 0.64s
    k0s/worker : Copy k0s service file --------------------------------------------------------------------- 0.53s
    k0s/controller : Write the k0s token file on controller ------------------------------------------------ 0.41s
    k0s/controller : Copy k0s service file ----------------------------------------------------------------- 0.40s
    k0s/initial_controller : Copy k0s service file --------------------------------------------------------- 0.36s
    ```
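
As an illustration of what the inventory generation in step 3 involves, the transformation from `multipass list --format json` output to the inventory shape shown above can be sketched in Python. This is a simplified, hypothetical stand-in, not the repository's actual `multipass_generate_inventory.py`:

```python
import json

def build_inventory(multipass_json: str) -> dict:
    """Hypothetical sketch: turn `multipass list --format json` output
    into the Ansible inventory structure used by this playbook."""
    info = json.loads(multipass_json)
    # Each entry in "list" has a "name" and a list of IPv4 addresses.
    nodes = {i["name"]: i["ipv4"][0] for i in info["list"]}
    names = sorted(nodes)
    # First node bootstraps the cluster, the next two join the control
    # plane, and the rest become workers (the 1 + 2 + N split shown above).
    return {
        "all": {
            "children": {
                "initial_controller": {"hosts": {names[0]: None}},
                "controller": {"hosts": {n: None for n in names[1:3]}},
                "worker": {"hosts": {n: None for n in names[3:]}},
            },
            "hosts": {n: {"ansible_host": ip} for n, ip in nodes.items()},
            "vars": {"ansible_user": "k0s"},
        }
    }

# Sample input mimicking multipass's JSON (IPs here are made up).
sample = json.dumps({"list": [
    {"name": f"k0s-{i}", "ipv4": [f"192.168.64.{31 + i}"]} for i in range(1, 8)
]})
inv = build_inventory(sample)
print(sorted(inv["all"]["children"]["worker"]["hosts"]))  # ['k0s-4', 'k0s-5', 'k0s-6', 'k0s-7']
```

Serializing this dictionary with a YAML library yields an `inventory.yml` of the same shape as the example in step 3.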

## Use the cluster with kubectl

A kubeconfig was copied to your local machine while the playbook was running. You can use it to access your new Kubernetes cluster:

```shell
export KUBECONFIG=/Users/dev/k0s-ansible/inventory/multipass/artifacts/k0s-kubeconfig.yml
kubectl cluster-info
```

```shell
Kubernetes control plane is running at https://192.168.64.32:6443
CoreDNS is running at https://192.168.64.32:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.64.32:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
```

```shell
kubectl get nodes -o wide
```

```shell
NAME    STATUS     ROLES    AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
k0s-4   Ready      <none>   21s   v1.20.1-k0s1   192.168.64.57   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   containerd://1.4.3
k0s-5   Ready      <none>   21s   v1.20.1-k0s1   192.168.64.58   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   containerd://1.4.3
k0s-6   NotReady   <none>   21s   v1.20.1-k0s1   192.168.64.60   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   containerd://1.4.3
k0s-7   NotReady   <none>   21s   v1.20.1-k0s1   192.168.64.61   <none>        Ubuntu 20.04.1 LTS   5.4.0-54-generic   containerd://1.4.3
```

Note: The first three control plane nodes will not display, as the control plane is fully isolated. To check on the distributed etcd cluster, you can use SSH to securely log in to a controller node, or you can run the following ad hoc command:

```shell
ansible k0s-1 -a "k0s etcd member-list -c /etc/k0s/k0s.yaml" -i inventory/multipass/inventory.yml | tail -1 | jq
```

```shell
{
  "level": "info",
  "members": {
    "k0s-1": "https://192.168.64.32:2380",
    "k0s-2": "https://192.168.64.33:2380",
    "k0s-3": "https://192.168.64.56:2380"
  },
  "msg": "done",
  "time": "2020-12-23T00:21:22+01:00"
}
```

Once all worker nodes are in the Ready state, you can use the cluster. You can test the cluster state by creating a simple nginx deployment:

```shell
kubectl create deployment nginx --image=gcr.io/google-containers/nginx --replicas=5
```

```shell
deployment.apps/nginx created
```

```shell
kubectl expose deployment nginx --target-port=80 --port=8100
```

```shell
service/nginx exposed
```

```shell
kubectl run hello-k0s --image=quay.io/prometheus/busybox --rm -it --restart=Never --command -- wget -qO- nginx:8100
```

```shell
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on Debian!</title>
...
pod "hello-k0s" deleted
```

Note: k0s-ansible is developed by k0s users. Please send your feedback, bug reports, and pull requests to github.com/movd/k0s-ansible.