Bootstrapping the Kubernetes Worker Nodes

In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: runc, gVisor, container networking plugins, containerd, kubelet, and kube-proxy.

Prerequisites

The commands in this lab must be run on each worker instance: worker-0, worker-1, and worker-2. Log in to each worker instance using the gcloud command. Example:

    gcloud compute ssh worker-0

Running commands in parallel with tmux

tmux can be used to run commands on multiple compute instances at the same time. See the Running commands in parallel with tmux section in the Prerequisites lab.

Provisioning a Kubernetes Worker Node

Install the OS dependencies:

    {
      sudo apt-get update
      sudo apt-get -y install socat conntrack ipset
    }

The socat binary enables support for the kubectl port-forward command.

Download and Install Worker Binaries

    wget -q --show-progress --https-only --timestamping \
      https://github.com/kubernetes-incubator/cri-tools/releases/download/v1.0.0-beta.0/crictl-v1.0.0-beta.0-linux-amd64.tar.gz \
      https://storage.googleapis.com/kubernetes-the-hard-way/runsc \
      https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \
      https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \
      https://github.com/containerd/containerd/releases/download/v1.1.0/containerd-1.1.0.linux-amd64.tar.gz \
      https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubectl \
      https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kube-proxy \
      https://storage.googleapis.com/kubernetes-release/release/v1.10.2/bin/linux/amd64/kubelet

Create the installation directories:

    sudo mkdir -p \
      /etc/cni/net.d \
      /opt/cni/bin \
      /var/lib/kubelet \
      /var/lib/kube-proxy \
      /var/lib/kubernetes \
      /var/run/kubernetes

Install the worker binaries:

    {
      chmod +x kubectl kube-proxy kubelet runc.amd64 runsc
      sudo mv runc.amd64 runc
      sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/
      sudo tar -xvf crictl-v1.0.0-beta.0-linux-amd64.tar.gz -C /usr/local/bin/
      sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/
      sudo tar -xvf containerd-1.1.0.linux-amd64.tar.gz -C /
    }

Configure CNI Networking

Retrieve the Pod CIDR range for the current compute instance:

    POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
      http://metadata.google.internal/computeMetadata/v1/instance/attributes/pod-cidr)

Create the bridge network configuration file:

    cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
    {
        "cniVersion": "0.3.1",
        "name": "bridge",
        "type": "bridge",
        "bridge": "cnio0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
            "type": "host-local",
            "ranges": [
              [{"subnet": "${POD_CIDR}"}]
            ],
            "routes": [{"dst": "0.0.0.0/0"}]
        }
    }
    EOF
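As a quick local sanity check (a sketch, not part of the lab: it assumes `python3` is available and uses a sample CIDR and a temp directory instead of `/etc/cni/net.d`), the same template can be rendered and validated as JSON:

```shell
# Render the bridge template with a sample POD_CIDR into a temp dir and
# validate the result as JSON. Illustrative only; on a real worker the
# CIDR comes from the instance metadata and the file lives in /etc/cni/net.d.
POD_CIDR="10.200.0.0/24"
CNI_TMP="$(mktemp -d)"

cat <<EOF > "${CNI_TMP}/10-bridge.conf"
{
    "cniVersion": "0.3.1",
    "name": "bridge",
    "type": "bridge",
    "bridge": "cnio0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "ranges": [
          [{"subnet": "${POD_CIDR}"}]
        ],
        "routes": [{"dst": "0.0.0.0/0"}]
    }
}
EOF

# json.tool exits non-zero on a parse error, so this catches typos early.
python3 -m json.tool "${CNI_TMP}/10-bridge.conf" > /dev/null && echo "10-bridge.conf: valid JSON"
```

A malformed edit (a missing quote or comma) makes the final command fail instead of silently producing a config the kubelet will reject later.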

Create the loopback network configuration file:

    cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
    {
        "cniVersion": "0.3.1",
        "type": "loopback"
    }
    EOF

Configure containerd

Create the containerd configuration file:

    sudo mkdir -p /etc/containerd/

    cat << EOF | sudo tee /etc/containerd/config.toml
    [plugins]
      [plugins.cri.containerd]
        snapshotter = "overlayfs"
        [plugins.cri.containerd.default_runtime]
          runtime_type = "io.containerd.runtime.v1.linux"
          runtime_engine = "/usr/local/bin/runc"
          runtime_root = ""
        [plugins.cri.containerd.untrusted_workload_runtime]
          runtime_type = "io.containerd.runtime.v1.linux"
          runtime_engine = "/usr/local/bin/runsc"
          runtime_root = "/run/containerd/runsc"
    EOF

Untrusted workloads will be run using the gVisor (runsc) runtime.
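With the cri plugin in this containerd release, a pod opts in to the untrusted runtime through an annotation. A hypothetical manifest sketch (the pod name and image are placeholders, not part of this lab):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: untrusted-demo                              # placeholder name
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"    # routes this pod to runsc
spec:
  containers:
    - name: webserver
      image: nginx                                  # placeholder image
```

Pods without the annotation continue to use the default runc runtime.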

Create the containerd.service systemd unit file:

    cat <<EOF | sudo tee /etc/systemd/system/containerd.service
    [Unit]
    Description=containerd container runtime
    Documentation=https://containerd.io
    After=network.target

    [Service]
    ExecStartPre=/sbin/modprobe overlay
    ExecStart=/bin/containerd
    Restart=always
    RestartSec=5
    Delegate=yes
    KillMode=process
    OOMScoreAdjust=-999
    LimitNOFILE=1048576
    LimitNPROC=infinity
    LimitCORE=infinity

    [Install]
    WantedBy=multi-user.target
    EOF

Configure the Kubelet

    {
      sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
      sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
      sudo mv ca.pem /var/lib/kubernetes/
    }

Create the kubelet-config.yaml configuration file:

    cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
      x509:
        clientCAFile: "/var/lib/kubernetes/ca.pem"
    authorization:
      mode: Webhook
    clusterDomain: "cluster.local"
    clusterDNS:
      - "10.32.0.10"
    podCIDR: "${POD_CIDR}"
    runtimeRequestTimeout: "15m"
    tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
    tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
    EOF
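One detail worth noting: `${POD_CIDR}` and `${HOSTNAME}` are expanded by the shell at the moment the heredoc runs, so the file ends up with literal values; if `POD_CIDR` is unset in the current shell, the kubelet silently gets an empty `podCIDR`. A local sketch with sample values (illustrative names, temp file) shows the expansion:

```shell
# Illustrative only: sample values stand in for the real ${HOSTNAME} and
# the metadata-derived ${POD_CIDR}; the output path is a temp file.
SAMPLE_HOSTNAME="worker-0"
SAMPLE_POD_CIDR="10.200.0.0/24"
OUT_FILE="$(mktemp)"

cat <<EOF > "${OUT_FILE}"
podCIDR: "${SAMPLE_POD_CIDR}"
tlsCertFile: "/var/lib/kubelet/${SAMPLE_HOSTNAME}.pem"
EOF

cat "${OUT_FILE}"
# → podCIDR: "10.200.0.0/24"
# → tlsCertFile: "/var/lib/kubelet/worker-0.pem"
```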

Create the kubelet.service systemd unit file:

    cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/kubernetes/kubernetes
    After=containerd.service
    Requires=containerd.service

    [Service]
    ExecStart=/usr/local/bin/kubelet \\
      --config=/var/lib/kubelet/kubelet-config.yaml \\
      --container-runtime=remote \\
      --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
      --image-pull-progress-deadline=2m \\
      --kubeconfig=/var/lib/kubelet/kubeconfig \\
      --network-plugin=cni \\
      --register-node=true \\
      --v=2
    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target
    EOF

Configure the Kubernetes Proxy

    sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

Create the kube-proxy-config.yaml configuration file:

    cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      kubeconfig: "/var/lib/kube-proxy/kubeconfig"
    mode: "iptables"
    clusterCIDR: "10.200.0.0/16"
    EOF

Create the kube-proxy.service systemd unit file:

    cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube Proxy
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    ExecStart=/usr/local/bin/kube-proxy \\
      --config=/var/lib/kube-proxy/kube-proxy-config.yaml
    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target
    EOF

Start the Worker Services

    {
      sudo systemctl daemon-reload
      sudo systemctl enable containerd kubelet kube-proxy
      sudo systemctl start containerd kubelet kube-proxy
    }

Remember to run the above commands on each worker node: worker-0, worker-1, and worker-2.

Verification

The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.

List the registered Kubernetes nodes:

    gcloud compute ssh controller-0 \
      --command "kubectl get nodes --kubeconfig admin.kubeconfig"

output

    NAME       STATUS    ROLES     AGE       VERSION
    worker-0   Ready     <none>    20s       v1.10.2
    worker-1   Ready     <none>    20s       v1.10.2
    worker-2   Ready     <none>    20s       v1.10.2

Next: Configuring kubectl for Remote Access