Ansible variables

Inventory

The inventory is composed of 3 groups:

  • kube-node : list of kubernetes nodes where the pods will run.
  • kube-master : list of servers where kubernetes master components (apiserver, scheduler, controller) will run.
  • etcd : list of servers to compose the etcd server. You should have at least 3 servers for failover purposes.

Note: do not modify the children of k8s-cluster, like putting the etcd group into the k8s-cluster, unless you are certain to do that and you have it fully contained in the latter:

  k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd

When kube-node contains etcd, you define your etcd cluster to be as well schedulable for Kubernetes workloads. If you want it standalone, make sure those groups do not intersect. If you want the server to act both as master and node, the server must be defined in both groups kube-master and kube-node. If you want a standalone and unschedulable master, the server must be defined only in the kube-master group and not in kube-node.
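For instance, a minimal sketch of the relevant groups for a standalone, unschedulable master (hostnames are illustrative) could look like:

```ini
# node1 appears only in kube-master, so no workloads are scheduled on it
[kube-master]
node1

[kube-node]
node2
node3

[k8s-cluster:children]
kube-node
kube-master
```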

There are also two special groups:
  • calico-rr : explained for advanced Calico use cases.
  • bastion : configure a bastion host if your nodes are not directly reachable.

Below is a complete inventory example:

  ## Configure 'ip' variable to bind kubernetes services on a
  ## different ip than the default iface
  node1 ansible_host=95.54.0.12 ip=10.3.0.1
  node2 ansible_host=95.54.0.13 ip=10.3.0.2
  node3 ansible_host=95.54.0.14 ip=10.3.0.3
  node4 ansible_host=95.54.0.15 ip=10.3.0.4
  node5 ansible_host=95.54.0.16 ip=10.3.0.5
  node6 ansible_host=95.54.0.17 ip=10.3.0.6

  [kube-master]
  node1
  node2

  [etcd]
  node1
  node2
  node3

  [kube-node]
  node2
  node3
  node4
  node5
  node6

  [k8s-cluster:children]
  kube-node
  kube-master

Group vars and overriding variables precedence

The group variables to control main deployment options are located in the directory inventory/sample/group_vars. Optional variables are located in inventory/sample/group_vars/all.yml. Mandatory variables that are common for at least one role (or a node group) can be found in inventory/sample/group_vars/k8s-cluster.yml. There are also role vars for docker, kubernetes preinstall and master roles. According to the ansible docs, those cannot be overridden from the group vars. In order to override, one should use the -e runtime flags (most simple way) or other layers described in the docs.
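For instance, a sketch of overriding a deployment option at runtime with extra vars (the variable name below is illustrative; check inventory/sample/group_vars for the authoritative options):

```shell
# Collect your overrides in a YAML file...
cat > override.yml <<'EOF'
# illustrative variable; see inventory/sample/group_vars/k8s-cluster.yml
kube_network_plugin: calico
EOF
# ...and pass it as extra vars, which take precedence over all other layers:
ansible-playbook -i inventory/sample/hosts.ini cluster.yml -e @override.yml
```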

Kubespray uses only a few layers to override things (or expect them to be overridden for roles):

Layer                                 Comment
role defaults                         provides best UX to override things for Kubespray deployments
inventory vars                        Unused
inventory group_vars                  Expects users to use all.yml, k8s-cluster.yml etc. to override things
inventory host_vars                   Unused
playbook group_vars                   Unused
playbook host_vars                    Unused
host facts                            Kubespray overrides for internal roles' logic, like state flags
play vars                             Unused
play vars_prompt                      Unused
play vars_files                       Unused
registered vars                       Unused
set_facts                             Kubespray overrides those, for some places
role and include vars                 Provides bad UX to override things! Use extra vars to enforce
block vars (only for tasks in block)  Kubespray overrides for internal roles' logic
task vars (only for the task)         Unused for roles, but only for helper scripts
extra vars (always win precedence)    override with ansible-playbook -e @foo.yml

Ansible tags

The following tags are defined in playbooks:

Tag name                 Used for
apps                     K8s apps definitions
azure                    Cloud-provider Azure
bastion                  Setup ssh config for bastion
bootstrap-os             Anything related to host OS configuration
calico                   Network plugin Calico
canal                    Network plugin Canal
cloud-provider           Cloud-provider related tasks
docker                   Configuring docker for hosts
download                 Fetching container images to a delegate host
etcd                     Configuring etcd cluster
etcd-pre-upgrade         Upgrading etcd cluster
etcd-secrets             Configuring etcd certs/keys
etchosts                 Configuring /etc/hosts entries for hosts
facts                    Gathering facts and misc check results
flannel                  Network plugin flannel
gce                      Cloud-provider GCP
hyperkube                Manipulations with K8s hyperkube image
k8s-pre-upgrade          Upgrading K8s cluster
k8s-secrets              Configuring K8s certs/keys
kube-apiserver           Configuring static pod kube-apiserver
kube-controller-manager  Configuring static pod kube-controller-manager
kubectl                  Installing kubectl and bash completion
kubelet                  Configuring kubelet service
kube-proxy               Configuring static pod kube-proxy
kube-scheduler           Configuring static pod kube-scheduler
localhost                Special steps for the localhost (ansible runner)
master                   Configuring K8s master node role
netchecker               Installing netchecker K8s app
network                  Configuring networking plugins for K8s
nginx                    Configuring LB for kube-apiserver instances
node                     Configuring K8s minion (compute) node role
openstack                Cloud-provider OpenStack
preinstall               Preliminary configuration steps
resolvconf               Configuring /etc/resolv.conf for hosts/apps
upgrade                  Upgrading, e.g. container images/binaries
upload                   Distributing images/binaries across hosts
weave                    Network plugin Weave

Note: Use the bash scripts/gen_tags.sh command to generate a list of all tags found in the codebase. New tags will be listed with the empty "Used for" field.
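A rough sketch of what such a tag-listing helper boils down to is a grep over the playbook YAML for tags: entries; the demo-playbooks directory and its sample file below are fabricated so the snippet is self-contained:

```shell
# Stand-in for scripts/gen_tags.sh: extract unique tag names from YAML.
mkdir -p demo-playbooks
cat > demo-playbooks/sample.yml <<'EOF'
- name: Configure etcd
  command: /bin/true
  tags: etcd
- name: Configure docker daemon
  command: /bin/true
  tags: docker
EOF
# Collect every 'tags: <name>' occurrence and de-duplicate:
grep -rhoE 'tags: *[a-z0-9-]+' demo-playbooks | awk '{print $2}' | sort -u
```

The real script is more thorough (it also handles list-style tags), but the pipeline above shows the idea.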

Example commands

Example command to filter and apply only DNS configuration tasks and skip everything else related to host OS configuration and downloading images of containers:

  ansible-playbook -i inventory/sample/hosts.ini cluster.yml --tags preinstall,facts --skip-tags=download,bootstrap-os

And this play only removes the K8s cluster DNS resolver IP from hosts’ /etc/resolv.conf files:

  ansible-playbook -i inventory/sample/hosts.ini -e dns_mode='none' cluster.yml --tags resolvconf

And this prepares all container images locally (at the ansible runner node) without installing or upgrading related stuff or trying to upload containers to K8s cluster nodes:

  ansible-playbook -i inventory/sample/hosts.ini cluster.yml \
      -e download_run_once=true -e download_localhost=true \
      --tags download --skip-tags upload,upgrade

Note: use --tags and --skip-tags wisely and only if you're 100% sure what you're doing.

Bastion host

If you prefer not to make your nodes publicly accessible (nodes with private IPs only), you can use a so-called bastion host to connect to your nodes. To specify and use a bastion, simply add a line to your inventory, where you have to replace x.x.x.x with the public IP of the bastion host.

  [bastion]
  bastion ansible_host=x.x.x.x

For more information about Ansible and bastion hosts, read Running Ansible Through an SSH Bastion Host.
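Under the hood, reaching private nodes through a bastion amounts to SSH's ProxyCommand. A hedged sketch of a roughly equivalent inventory variable (the user name is a placeholder, and Kubespray's bastion support sets comparable SSH options for you):

```ini
# Hypothetical illustration of the underlying mechanism only; replace
# x.x.x.x with the bastion's public IP and 'user' with your SSH user.
[k8s-cluster:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@x.x.x.x"'
```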