Multi-Node Test Environment

Using KVM/QEMU and Kubespray

Setup expectation

Deploying the following environment requires a few prerequisites, such as:

  • A Linux workstation (CentOS or Fedora)
  • KVM/QEMU installation
  • a Docker service configured to allow an insecure local registry

For other Linux distributions, there is no guarantee the following will work; however, adapting the commands (apt/yum/dnf) may be enough.

Prerequisites installation

On your host machine, execute tests/scripts/multi-node/rpm-system-prerequisites.sh (or do the equivalent for your distribution).

Edit /etc/docker/daemon.json to add insecure-registries:

  {
    "insecure-registries": ["172.17.8.1:5000"]
  }
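
For the insecure registry setting to take effect, the Docker daemon needs to be restarted (assuming a systemd-based host, which is the case for CentOS and Fedora):

  sudo systemctl restart docker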

Deploy Kubernetes with Kubespray

Clone it:

  git clone https://github.com/kubernetes-sigs/kubespray/
  cd kubespray

To deploy Kubernetes successfully with Kubespray, your checkout must include the changes from https://github.com/kubernetes-incubator/kubespray/pull/2153 and https://github.com/kubernetes-incubator/kubespray/pull/2271.
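
If those pull requests have not already been merged into the branch you cloned, one way to bring them in is through GitHub's pull request refs. This is only a sketch; the local branch names (pr-2153, pr-2271) are placeholders, so adapt them as you like:

  # Fetch each pull request into a local branch and merge it
  git fetch origin pull/2153/head:pr-2153 && git merge pr-2153
  git fetch origin pull/2271/head:pr-2271 && git merge pr-2271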

Edit inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml with:

  docker_options: "--insecure-registry=172.17.8.1:5000 --insecure-registry={{ kube_service_addresses }} --data-root={{ docker_daemon_graph }} {{ docker_log_opts }}"

FYI: 172.17.8.1 is the libvirt bridge IP, so it’s reachable from all your virtual machines. This means a registry running on the host machine is reachable from the virtual machines running the Kubernetes cluster.
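
If you do not already have a registry listening on 172.17.8.1:5000, a common way to run one on the host is the standard registry:2 image (a sketch; adjust the container name or port if they conflict with something else on your machine):

  docker run -d -p 5000:5000 --restart=always --name registry registry:2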

Create Vagrant’s variable directory:

  mkdir vagrant/

Copy tests/scripts/multi-node/config.rb into vagrant/. You can adapt it at will; in particular, feel free to change num_instances.
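
For example, assuming your Rook checkout is at $GOPATH/src/github.com/rook/rook (adjust the path to wherever you cloned it):

  cp $GOPATH/src/github.com/rook/rook/tests/scripts/multi-node/config.rb vagrant/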

Deploy!

  vagrant up --no-provision ; vagrant provision

Go grab a coffee:

  PLAY RECAP *********************************************************************
  k8s-01 : ok=351 changed=111 unreachable=0 failed=0
  k8s-02 : ok=230 changed=65 unreachable=0 failed=0
  k8s-03 : ok=230 changed=65 unreachable=0 failed=0
  k8s-04 : ok=229 changed=65 unreachable=0 failed=0
  k8s-05 : ok=229 changed=65 unreachable=0 failed=0
  k8s-06 : ok=229 changed=65 unreachable=0 failed=0
  k8s-07 : ok=229 changed=65 unreachable=0 failed=0
  k8s-08 : ok=229 changed=65 unreachable=0 failed=0
  k8s-09 : ok=229 changed=65 unreachable=0 failed=0
  Friday 12 January 2018 10:25:45 +0100 (0:00:00.017) 0:17:24.413 ********
  ===============================================================================
  download : container_download | Download containers if pull is required or told to always pull (all nodes) - 192.44s
  kubernetes/preinstall : Update package management cache (YUM) --------- 178.26s
  download : container_download | Download containers if pull is required or told to always pull (all nodes) - 102.24s
  docker : ensure docker packages are installed -------------------------- 57.20s
  download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 52.33s
  kubernetes/preinstall : Install packages requirements ------------------ 25.18s
  download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 23.74s
  download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 18.90s
  download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 15.39s
  kubernetes/master : Master | wait for the apiserver to be running ------ 12.44s
  download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.83s
  download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.66s
  kubernetes/node : install | Copy kubelet from hyperkube container ------ 11.44s
  download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.41s
  download : container_download | Download containers if pull is required or told to always pull (all nodes) -- 11.00s
  docker : Docker | pause while Docker restarts -------------------------- 10.22s
  kubernetes/secrets : Check certs | check if a cert already exists on node --- 6.05s
  kubernetes-apps/network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence --- 5.33s
  kubernetes/master : Master | wait for kube-scheduler -------------------- 5.30s
  kubernetes/master : Copy kubectl from hyperkube container --------------- 4.77s
  [leseb@tarox kubespray]$ vagrant ssh k8s-01
  Last login: Fri Jan 12 09:22:18 2018 from 192.168.121.1
  [vagrant@k8s-01 ~]$ kubectl get nodes
  NAME     STATUS   ROLES         AGE   VERSION
  k8s-01   Ready    master,node   2m    v1.9.0+coreos.0
  k8s-02   Ready    node          2m    v1.9.0+coreos.0
  k8s-03   Ready    node          2m    v1.9.0+coreos.0
  k8s-04   Ready    node          2m    v1.9.0+coreos.0
  k8s-05   Ready    node          2m    v1.9.0+coreos.0
  k8s-06   Ready    node          2m    v1.9.0+coreos.0
  k8s-07   Ready    node          2m    v1.9.0+coreos.0
  k8s-08   Ready    node          2m    v1.9.0+coreos.0
  k8s-09   Ready    node          2m    v1.9.0+coreos.0

Running the Kubernetes Dashboard UI

Kubespray sets up the Dashboard pod by default, but you must authenticate with a bearer token, even for localhost access with kubectl proxy. To allow access, one possible solution is to:

1) Create an admin user by creating admin-user.yaml with the following contents and applying it with kubectl create -f admin-user.yaml:

  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: admin-user
    namespace: kube-system

2) Grant that user the cluster-admin ClusterRole by creating and applying admin-user-cluster.role.yaml:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: admin-user
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: cluster-admin
  subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system
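
Assuming the file is saved as admin-user-cluster.role.yaml, apply it with:

  kubectl create -f admin-user-cluster.role.yaml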

3) Find the admin-user token in the kube-system namespace:

  kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

You can then use that token to log into the UI at http://localhost:8001/ui.
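
The URL above assumes kubectl proxy is running on the host with its default port (8001); if it is not, start it in a separate terminal:

  kubectl proxy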

(See https://github.com/kubernetes/dashboard/wiki/Creating-sample-user)

Development workflow on the host

Everything should happen on the host: your development environment will reside on the host machine, NOT inside the virtual machines running the Kubernetes cluster.

Now, please refer to https://rook.io/docs/rook/master/development-flow.html to set up your development environment (go, git, etc.).

At this stage, Rook should be cloned on your host.

From your Rook repository (which should be under $GOPATH/src/github.com/rook), execute bash tests/scripts/multi-node/build-rook.sh. During its execution, build-rook.sh purges all running Rook pods from the cluster so that your latest container image can be deployed; all Ceph data and configuration are purged as well. Make sure you are done with any existing state on your test cluster before running build-rook.sh, as it clears everything.

Each time you build and deploy with build-rook.sh, the virtual machines (k8s-0X) will pull the new container image and run your new Rook code. You can run bash tests/scripts/multi-node/build-rook.sh as many times as you want to rebuild your Rook image and redeploy a cluster running your new code.

From here, resume your development: change your code and test it by running bash tests/scripts/multi-node/build-rook.sh.
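
A typical iteration therefore looks like this, assuming your Rook checkout is at $GOPATH/src/github.com/rook/rook (adjust to wherever you cloned it):

  cd $GOPATH/src/github.com/rook/rook
  # WARNING: this purges running Rook pods and all Ceph data/config on the test cluster
  bash tests/scripts/multi-node/build-rook.sh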

Teardown

Typically, to flush your environment, run the following on the host from within Kubespray's git repository:

  [user@host-machine kubespray]$ vagrant destroy -f

Also, if you were using kubectl on that host machine, you can resurrect your old configuration by renaming $HOME/.kube/config.before.rook.$TIMESTAMP to $HOME/.kube/config.
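
For example (substitute the actual timestamp of your backup file):

  mv $HOME/.kube/config.before.rook.$TIMESTAMP $HOME/.kube/config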

If you were not using kubectl, feel free to simply remove $HOME/.kube/config.rook.

Using VirtualBox and k8s-vagrant-multi-node

Prerequisites

Be sure to follow the prerequisites here: https://github.com/galexrt/k8s-vagrant-multi-node/tree/master#prerequisites.

Quickstart

To start up the environment, just run ./tests/scripts/k8s-vagrant-multi-node.sh up. This will bring up one master and two workers by default.

To change the number of workers to bring up and their resources, check out the Variables section of the galexrt/k8s-vagrant-multi-node project README. Set or export the variables you need when calling the script, e.g., either NODE_COUNT=5 ./tests/scripts/k8s-vagrant-multi-node.sh up, or export NODE_COUNT=5 followed by ./tests/scripts/k8s-vagrant-multi-node.sh up.
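
For example, to bring up five worker nodes (NODE_COUNT is documented in the project's README):

  # inline for a single invocation
  NODE_COUNT=5 ./tests/scripts/k8s-vagrant-multi-node.sh up

  # or exported for the whole shell session
  export NODE_COUNT=5
  ./tests/scripts/k8s-vagrant-multi-node.sh up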

For more information, or if you are experiencing issues, please create an issue on GitHub at galexrt/k8s-vagrant-multi-node.

Using Vagrant on Linux with libvirt

See https://github.com/noahdesu/kubensis.