Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, container networking plugins, CRI-O, kubectl, kubelet, and kube-proxy.

Prerequisites

The commands in this lab must be run on each worker instance: worker-0, worker-1, and worker-2. Log in to each worker instance using the gcloud command. Example:

  gcloud compute ssh worker-0
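
If you want to confirm SSH access to all three instances before you begin, this optional loop is one way to do it, assuming your default gcloud project and zone are already configured:

  for instance in worker-0 worker-1 worker-2; do
    gcloud compute ssh ${instance} --command "hostname"
  done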

Provisioning a Kubernetes Worker Node

Install the cri-o OS Dependencies

Add the alexlarsson/flatpak PPA, which hosts the libostree package:

  sudo add-apt-repository -y ppa:alexlarsson/flatpak
  sudo apt-get update

Install the OS dependencies required by the cri-o container runtime:

  sudo apt-get install -y socat libgpgme11 libostree-1-1
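
To verify the dependencies installed cleanly, query dpkg; this check is optional:

  dpkg -s socat libgpgme11 libostree-1-1 | grep -E '^(Package|Status)'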

Download and Install Worker Binaries

  wget -q --show-progress --https-only --timestamping \
    https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
    https://github.com/opencontainers/runc/releases/download/v1.0.0-rc4/runc.amd64 \
    https://storage.googleapis.com/kubernetes-the-hard-way/crio-amd64-v1.0.0-beta.0.tar.gz \
    https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl \
    https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-proxy \
    https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubelet
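
Once the downloads complete, an optional listing confirms all six artifacts arrived:

  ls -lh cni-plugins-amd64-v0.6.0.tgz runc.amd64 \
    crio-amd64-v1.0.0-beta.0.tar.gz kubectl kube-proxy kubelet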

Create the installation directories:

  sudo mkdir -p \
    /etc/containers \
    /etc/cni/net.d \
    /etc/crio \
    /opt/cni/bin \
    /usr/local/libexec/crio \
    /var/lib/kubelet \
    /var/lib/kube-proxy \
    /var/lib/kubernetes \
    /var/run/kubernetes

Install the worker binaries:

  sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
  tar -xvf crio-amd64-v1.0.0-beta.0.tar.gz
  chmod +x kubectl kube-proxy kubelet runc.amd64
  sudo mv runc.amd64 /usr/local/bin/runc
  sudo mv crio crioctl kpod kubectl kube-proxy kubelet /usr/local/bin/
  sudo mv conmon pause /usr/local/libexec/crio/
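
A quick version check confirms the binaries are executable and on the PATH; the flag spellings below are assumed to be supported by these release versions:

  runc --version
  kubectl version --client
  kubelet --version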

Configure CNI Networking

Retrieve the Pod CIDR range for the current compute instance:

  POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)
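
Each worker instance was created with a pod-cidr metadata attribute earlier in the tutorial, so the variable should now hold that worker's dedicated subnet (for example, 10.200.0.0/24 on worker-0):

  echo ${POD_CIDR}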

Create the bridge network configuration file:

  cat > 10-bridge.conf <<EOF
  {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "type": "bridge",
      "bridge": "cnio0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
          "type": "host-local",
          "ranges": [
            [{"subnet": "${POD_CIDR}"}]
          ],
          "routes": [{"dst": "0.0.0.0/0"}]
      }
  }
  EOF
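
Because the heredoc delimiter is unquoted, the shell expands ${POD_CIDR} as the file is written. An optional check confirms the rendered file contains the literal subnet rather than the variable name:

  grep subnet 10-bridge.conf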

Create the loopback network configuration file:

  cat > 99-loopback.conf <<EOF
  {
      "cniVersion": "0.3.1",
      "type": "loopback"
  }
  EOF

Move the network configuration files to the CNI configuration directory:

  sudo mv 10-bridge.conf 99-loopback.conf /etc/cni/net.d/
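
Optionally list the CNI configuration directory to confirm both files are in place:

  ls /etc/cni/net.d/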

Configure the CRI-O Container Runtime

  sudo mv crio.conf seccomp.json /etc/crio/
  sudo mv policy.json /etc/containers/

Create the crio.service systemd unit file:

  cat > crio.service <<EOF
  [Unit]
  Description=CRI-O daemon
  Documentation=https://github.com/kubernetes-incubator/cri-o

  [Service]
  ExecStart=/usr/local/bin/crio
  Restart=always
  RestartSec=10s

  [Install]
  WantedBy=multi-user.target
  EOF
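
Before installing the unit you can optionally lint it with systemd-analyze; warnings about units it cannot resolve at this stage can be ignored:

  systemd-analyze verify ./crio.service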

Configure the Kubelet

  sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
  sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
  sudo mv ca.pem /var/lib/kubernetes/

Create the kubelet.service systemd unit file:

  cat > kubelet.service <<EOF
  [Unit]
  Description=Kubernetes Kubelet
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes
  After=crio.service
  Requires=crio.service

  [Service]
  ExecStart=/usr/local/bin/kubelet \\
    --allow-privileged=true \\
    --cluster-dns=10.32.0.10 \\
    --cluster-domain=cluster.local \\
    --container-runtime=remote \\
    --container-runtime-endpoint=unix:///var/run/crio.sock \\
    --enable-custom-metrics \\
    --image-pull-progress-deadline=2m \\
    --image-service-endpoint=unix:///var/run/crio.sock \\
    --kubeconfig=/var/lib/kubelet/kubeconfig \\
    --network-plugin=cni \\
    --pod-cidr=${POD_CIDR} \\
    --register-node=true \\
    --require-kubeconfig \\
    --runtime-request-timeout=10m \\
    --tls-cert-file=/var/lib/kubelet/${HOSTNAME}.pem \\
    --tls-private-key-file=/var/lib/kubelet/${HOSTNAME}-key.pem \\
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF
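
Both --container-runtime-endpoint and --image-service-endpoint point at the same socket because CRI-O serves the CRI runtime and image services over a single unix socket. An optional check confirms the certificate paths referenced by the unit exist:

  ls -l /var/lib/kubelet/${HOSTNAME}.pem /var/lib/kubelet/${HOSTNAME}-key.pem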

Configure the Kubernetes Proxy

  sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

Create the kube-proxy.service systemd unit file:

  cat > kube-proxy.service <<EOF
  [Unit]
  Description=Kubernetes Kube Proxy
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  [Service]
  ExecStart=/usr/local/bin/kube-proxy \\
    --cluster-cidr=10.200.0.0/16 \\
    --kubeconfig=/var/lib/kube-proxy/kubeconfig \\
    --proxy-mode=iptables \\
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF

Start the Worker Services

  sudo mv crio.service kubelet.service kube-proxy.service /etc/systemd/system/
  sudo systemctl daemon-reload
  sudo systemctl enable crio kubelet kube-proxy
  sudo systemctl start crio kubelet kube-proxy
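
Verify that all three services entered the active state; if one failed, its journal is the first place to look:

  sudo systemctl status crio kubelet kube-proxy --no-pager
  sudo journalctl -u kubelet --no-pager --lines 20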

Remember to run the above commands on each worker node: worker-0, worker-1, and worker-2.

Verification

Log in to one of the controller nodes:

  gcloud compute ssh controller-0

List the registered Kubernetes nodes:

  kubectl get nodes

output

  NAME       STATUS    AGE       VERSION
  worker-0   Ready     5m        v1.7.4
  worker-1   Ready     3m        v1.7.4
  worker-2   Ready     7s        v1.7.4

Next: Configuring kubectl for Remote Access