Bootstrapping the Kubernetes Control Plane

In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.

Prerequisites

The commands in this lab must be run on each controller instance: controller-0, controller-1, and controller-2. Log in to each controller instance using the gcloud command. Example:

  gcloud compute ssh controller-0
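
If you would rather issue commands from your workstation than keep three separate sessions open, a loop like the one below is one option. This is only a sketch: it assumes your gcloud configuration already points at the correct project and zone, and the command string ("uptime" here) is just a placeholder for whatever you want to run on each node.

  for instance in controller-0 controller-1 controller-2; do
    # replace "uptime" with the command to run on each controller
    gcloud compute ssh ${instance} --command "uptime"
  done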

Provision the Kubernetes Control Plane

Download and Install the Kubernetes Controller Binaries

Download the official Kubernetes release binaries:

  wget -q --show-progress --https-only --timestamping \
    "https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-apiserver" \
    "https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-controller-manager" \
    "https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kube-scheduler" \
    "https://storage.googleapis.com/kubernetes-release/release/v1.7.4/bin/linux/amd64/kubectl"

Install the Kubernetes binaries:

  chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl

  sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
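
As a quick sanity check, the installed binaries should report the release downloaded above (v1.7.4); the exact output format may vary slightly between versions:

  kube-apiserver --version
  kubectl version --client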

Configure the Kubernetes API Server

  sudo mkdir -p /var/lib/kubernetes/

  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem encryption-config.yaml /var/lib/kubernetes/

The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:

  INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
    http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
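
Echo the variable to confirm it was populated; on the controllers used in this tutorial the value should fall in the 10.240.0.0/24 range (for example 10.240.0.10 on controller-0):

  echo ${INTERNAL_IP}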

Create the kube-apiserver.service systemd unit file:

  cat > kube-apiserver.service <<EOF
  [Unit]
  Description=Kubernetes API Server
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  [Service]
  ExecStart=/usr/local/bin/kube-apiserver \\
    --admission-control=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
    --advertise-address=${INTERNAL_IP} \\
    --allow-privileged=true \\
    --apiserver-count=3 \\
    --audit-log-maxage=30 \\
    --audit-log-maxbackup=3 \\
    --audit-log-maxsize=100 \\
    --audit-log-path=/var/log/audit.log \\
    --authorization-mode=Node,RBAC \\
    --bind-address=0.0.0.0 \\
    --client-ca-file=/var/lib/kubernetes/ca.pem \\
    --enable-swagger-ui=true \\
    --etcd-cafile=/var/lib/kubernetes/ca.pem \\
    --etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
    --etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
    --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379,https://10.240.0.12:2379 \\
    --event-ttl=1h \\
    --experimental-encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
    --insecure-bind-address=0.0.0.0 \\
    --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \\
    --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \\
    --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \\
    --kubelet-https=true \\
    --runtime-config=rbac.authorization.k8s.io/v1alpha1 \\
    --service-account-key-file=/var/lib/kubernetes/ca-key.pem \\
    --service-cluster-ip-range=10.32.0.0/24 \\
    --service-node-port-range=30000-32767 \\
    --tls-ca-file=/var/lib/kubernetes/ca.pem \\
    --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \\
    --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \\
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF
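
Because the unit file is generated through a shell here document, ${INTERNAL_IP} is expanded at creation time. A quick grep is an easy way to confirm the rendered file advertises this instance's address rather than an empty value:

  grep advertise-address kube-apiserver.service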

Configure the Kubernetes Controller Manager

Create the kube-controller-manager.service systemd unit file:

  cat > kube-controller-manager.service <<EOF
  [Unit]
  Description=Kubernetes Controller Manager
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  [Service]
  ExecStart=/usr/local/bin/kube-controller-manager \\
    --address=0.0.0.0 \\
    --cluster-cidr=10.200.0.0/16 \\
    --cluster-name=kubernetes \\
    --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \\
    --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \\
    --leader-elect=true \\
    --master=http://${INTERNAL_IP}:8080 \\
    --root-ca-file=/var/lib/kubernetes/ca.pem \\
    --service-account-private-key-file=/var/lib/kubernetes/ca-key.pem \\
    --service-cluster-ip-range=10.32.0.0/16 \\
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF

Configure the Kubernetes Scheduler

Create the kube-scheduler.service systemd unit file:

  cat > kube-scheduler.service <<EOF
  [Unit]
  Description=Kubernetes Scheduler
  Documentation=https://github.com/GoogleCloudPlatform/kubernetes

  [Service]
  ExecStart=/usr/local/bin/kube-scheduler \\
    --leader-elect=true \\
    --master=http://${INTERNAL_IP}:8080 \\
    --v=2
  Restart=on-failure
  RestartSec=5

  [Install]
  WantedBy=multi-user.target
  EOF
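
Note that both the controller manager and the scheduler reach the API server through its local insecure port (8080 by default) via --master=http://${INTERNAL_IP}:8080, which works here because the API server above sets --insecure-bind-address=0.0.0.0. A simple grep confirms both unit files picked up the expanded address:

  grep master= kube-controller-manager.service kube-scheduler.service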

Start the Controller Services

  sudo mv kube-apiserver.service kube-scheduler.service kube-controller-manager.service /etc/systemd/system/

  sudo systemctl daemon-reload

  sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler

  sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler

Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
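
If any of the components do not come up, systemd's own tooling is usually the fastest way to see why. The following is a generic check, not specific to this tutorial:

  sudo systemctl is-active kube-apiserver kube-controller-manager kube-scheduler

  sudo journalctl -u kube-apiserver --no-pager -n 20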

Verification

  kubectl get componentstatuses

output

  NAME                 STATUS    MESSAGE              ERROR
  controller-manager   Healthy   ok
  scheduler            Healthy   ok
  etcd-2               Healthy   {"health": "true"}
  etcd-0               Healthy   {"health": "true"}
  etcd-1               Healthy   {"health": "true"}
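
You can also probe the /healthz endpoint on the local insecure port. This is the same port and path the load balancer health check in the next section uses, so it is worth confirming it returns ok here:

  curl http://127.0.0.1:8080/healthz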

Remember to run the above commands on each controller node: controller-0, controller-1, and controller-2.

The Kubernetes Frontend Load Balancer

In this section you will provision an external load balancer to front the Kubernetes API Servers. The kubernetes-the-hard-way static IP address will be attached to the resulting load balancer.

The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.

Create the external load balancer network resources:

  gcloud compute http-health-checks create kube-apiserver-health-check \
    --description "Kubernetes API Server Health Check" \
    --port 8080 \
    --request-path /healthz

  gcloud compute target-pools create kubernetes-target-pool \
    --http-health-check=kube-apiserver-health-check

  gcloud compute target-pools add-instances kubernetes-target-pool \
    --instances controller-0,controller-1,controller-2

  KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(name)')

  gcloud compute forwarding-rules create kubernetes-forwarding-rule \
    --address ${KUBERNETES_PUBLIC_ADDRESS} \
    --ports 6443 \
    --region $(gcloud config get-value compute/region) \
    --target-pool kubernetes-target-pool
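
Before moving on, it can help to confirm the target pool picked up all three controllers and that the forwarding rule was created; a couple of describe calls are enough (output omitted here):

  gcloud compute target-pools describe kubernetes-target-pool \
    --region $(gcloud config get-value compute/region)

  gcloud compute forwarding-rules describe kubernetes-forwarding-rule \
    --region $(gcloud config get-value compute/region)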

Verification

Retrieve the kubernetes-the-hard-way static IP address:

  KUBERNETES_PUBLIC_IP_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
    --region $(gcloud config get-value compute/region) \
    --format 'value(address)')

Make an HTTP request for the Kubernetes version info:

  curl --cacert ca.pem https://${KUBERNETES_PUBLIC_IP_ADDRESS}:6443/version

output

  {
    "major": "1",
    "minor": "7",
    "gitVersion": "v1.7.4",
    "gitCommit": "793658f2d7ca7f064d2bdf606519f9fe1229c381",
    "gitTreeState": "clean",
    "buildDate": "2017-08-17T08:30:51Z",
    "goVersion": "go1.8.3",
    "compiler": "gc",
    "platform": "linux/amd64"
  }

Next: Bootstrapping the Kubernetes Worker Nodes