Smoke Test

In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.

Data Encryption

In this section you will verify the ability to encrypt secret data at rest.

Create a generic secret:

  kubectl create secret generic kubernetes-the-hard-way \
    --from-literal="mykey=mydata"

Print a hexdump of the kubernetes-the-hard-way secret stored in etcd:

  gcloud compute ssh controller-0 \
    --command "sudo ETCDCTL_API=3 etcdctl get \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.pem \
    --cert=/etc/etcd/kubernetes.pem \
    --key=/etc/etcd/kubernetes-key.pem \
    /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"

output

  00000000  2f 72 65 67 69 73 74 72  79 2f 73 65 63 72 65 74  |/registry/secret|
  00000010  73 2f 64 65 66 61 75 6c  74 2f 6b 75 62 65 72 6e  |s/default/kubern|
  00000020  65 74 65 73 2d 74 68 65  2d 68 61 72 64 2d 77 61  |etes-the-hard-wa|
  00000030  79 0a 6b 38 73 3a 65 6e  63 3a 61 65 73 63 62 63  |y.k8s:enc:aescbc|
  00000040  3a 76 31 3a 6b 65 79 31  3a dd 3f 36 6c ce 65 9d  |:v1:key1:.?6l.e.|
  00000050  b3 b1 46 1a ba ae a2 1f  e4 fa 13 0c 4b 6e 2c 3c  |..F.........Kn,<|
  00000060  15 fa 88 56 84 b7 aa c0  7a ca 66 f3 de db 2b a3  |...V....z.f...+.|
  00000070  88 dc b1 b1 d8 2f 16 3e  6b 4a cb ac 88 5d 23 2d  |...../.>kJ...]#-|
  00000080  99 62 be 72 9f a5 01 38  15 c4 43 ac 38 5f ef 88  |.b.r...8..C.8_..|
  00000090  3b 88 c1 e6 b6 06 4f ae  a8 6b c8 40 70 ac 0a d3  |;.....O..k.@p...|
  000000a0  3e dc 2b b6 0f 01 b6 8b  e2 21 29 4d 32 d6 67 a6  |>.+......!)M2.g.|
  000000b0  4e 6d bb 61 0d 85 22 ea  f4 d6 2d 0a af 3c 71 85  |Nm.a.."...-..<q.|
  000000c0  96 27 c9 ec 90 e3 56 8c  94 a7 1c 9a 0e 00 28 11  |.'....V.......(.|
  000000d0  18 28 f4 33 42 d9 57 d9  e3 e9 1c 38 e3 bc 1e c3  |.(.3B.W....8....|
  000000e0  d2 47 f3 20 60 be b8 57  a7 0a                    |.G. `..W..|
  000000ea

The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the aescbc provider was used to encrypt the data with the key1 encryption key.
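
To turn this check into a pass/fail assertion instead of eyeballing the hexdump, one option is to grep the raw etcd value for the encryption prefix. A minimal sketch reusing the same etcdctl flags as above; the grep runs locally on the streamed output of the remote command:

  # Fetch the raw etcd value and assert the aescbc prefix is present.
  gcloud compute ssh controller-0 \
    --command "sudo ETCDCTL_API=3 etcdctl get \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/etcd/ca.pem \
    --cert=/etc/etcd/kubernetes.pem \
    --key=/etc/etcd/kubernetes-key.pem \
    /registry/secrets/default/kubernetes-the-hard-way" \
    | strings | grep -q "k8s:enc:aescbc:v1:key1" \
      && echo "secret is encrypted at rest" \
      || echo "WARNING: encryption prefix not found"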

Deployments

In this section you will verify the ability to create and manage Deployments.

Create a deployment for the nginx web server (with the kubectl release this tutorial targets, kubectl run creates a Deployment; newer releases create a bare Pod instead, in which case kubectl create deployment nginx --image=nginx with an app=nginx label is the equivalent):

  kubectl run nginx --image=nginx

List the pod created by the nginx deployment:

  kubectl get pods -l run=nginx

output

  NAME                    READY   STATUS    RESTARTS   AGE
  nginx-dbddb74b8-6lxg2   1/1     Running   0          10s
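
If the pod is still in the ContainerCreating state when you list it, you can block until it reports Ready before moving on. A minimal sketch using kubectl wait (available in kubectl v1.11 and later):

  # Block for up to 60 seconds until the nginx pod reports Ready.
  kubectl wait --for=condition=ready pod -l run=nginx --timeout=60s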

Port Forwarding

In this section you will verify the ability to access applications remotely using port forwarding.

Retrieve the full name of the nginx pod:

  POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")

Forward port 8080 on your local machine to port 80 of the nginx pod:

  kubectl port-forward $POD_NAME 8080:80

output

  Forwarding from 127.0.0.1:8080 -> 80
  Forwarding from [::1]:8080 -> 80

In a new terminal make an HTTP request using the forwarding address:

  curl --head http://127.0.0.1:8080

output

  HTTP/1.1 200 OK
  Server: nginx/1.15.4
  Date: Sun, 30 Sep 2018 19:23:10 GMT
  Content-Type: text/html
  Content-Length: 612
  Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
  Connection: keep-alive
  ETag: "5baa4e63-264"
  Accept-Ranges: bytes

Switch back to the previous terminal and stop the port forwarding to the nginx pod:

  Forwarding from 127.0.0.1:8080 -> 80
  Forwarding from [::1]:8080 -> 80
  Handling connection for 8080
  ^C
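
If you would rather run the whole check from a single terminal, one option is to background the port forward, probe it, and clean up afterwards. A sketch (the one-second sleep is an assumption, giving the tunnel time to open):

  # Start the port forward in the background, probe it, then stop it.
  kubectl port-forward $POD_NAME 8080:80 &
  PF_PID=$!
  sleep 1   # crude wait for the tunnel to come up
  curl --head http://127.0.0.1:8080
  kill ${PF_PID}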

Logs

In this section you will verify the ability to retrieve container logs.

Print the nginx pod logs:

  kubectl logs $POD_NAME

output

  127.0.0.1 - - [30/Sep/2018:19:23:10 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.58.0" "-"
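
To stream new log lines as they arrive rather than take a one-off snapshot, kubectl logs also supports follow mode:

  # Stream the nginx access log; press Ctrl-C to stop.
  kubectl logs -f $POD_NAME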

Exec

In this section you will verify the ability to execute commands in a container.

Print the nginx version by executing the nginx -v command in the nginx container:

  kubectl exec -ti $POD_NAME -- nginx -v

output

  nginx version: nginx/1.15.4
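
The same mechanism can open an interactive shell inside the container, which is useful for deeper debugging (the nginx image ships with sh):

  # Open an interactive shell in the nginx container; type exit to leave.
  kubectl exec -ti $POD_NAME -- sh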

Services

In this section you will verify the ability to expose applications using a Service.

Expose the nginx deployment using a NodePort service:

  kubectl expose deployment nginx --port 80 --type NodePort

The LoadBalancer service type cannot be used because your cluster is not configured with cloud provider integration. Setting up cloud provider integration is out of scope for this tutorial.
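
Before proceeding, you can confirm the service type and the allocated node port:

  # The TYPE column should read NodePort and PORT(S) should look like 80:3XXXX/TCP.
  kubectl get svc nginx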

Retrieve the node port assigned to the nginx service:

  NODE_PORT=$(kubectl get svc nginx \
    --output=jsonpath='{.spec.ports[0].nodePort}')

Create a firewall rule that allows remote access to the nginx node port:

  gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
    --allow=tcp:${NODE_PORT} \
    --network kubernetes-the-hard-way

Retrieve the external IP address of a worker instance:

  EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
    --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

Make an HTTP request using the external IP address and the nginx node port:

  curl -I http://${EXTERNAL_IP}:${NODE_PORT}

output

  HTTP/1.1 200 OK
  Server: nginx/1.15.4
  Date: Sun, 30 Sep 2018 19:25:40 GMT
  Content-Type: text/html
  Content-Length: 612
  Last-Modified: Tue, 25 Sep 2018 15:04:03 GMT
  Connection: keep-alive
  ETag: "5baa4e63-264"
  Accept-Ranges: bytes
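
For an unattended smoke test you can assert on the status code instead of reading the headers. A sketch using curl's --write-out formatting:

  # Expect HTTP 200 from nginx via the node port.
  STATUS=$(curl -s -o /dev/null -w '%{http_code}' http://${EXTERNAL_IP}:${NODE_PORT})
  if [ "${STATUS}" = "200" ]; then
    echo "nginx service is reachable"
  else
    echo "unexpected status: ${STATUS}"
  fi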

Untrusted Workloads

In this section you will verify the ability to run untrusted workloads using gVisor.

Create the untrusted pod:

  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: untrusted
    annotations:
      io.kubernetes.cri.untrusted-workload: "true"
  spec:
    containers:
      - name: webserver
        image: gcr.io/hightowerlabs/helloworld:2.0.0
  EOF

Verification

In this section you will verify the untrusted pod is running under gVisor (runsc) by inspecting the assigned worker node.

Verify the untrusted pod is running:

  kubectl get pods -o wide

output

  NAME                       READY   STATUS    RESTARTS   AGE   IP           NODE
  busybox-68654f944b-djjjb   1/1     Running   0          5m    10.200.0.2   worker-0
  nginx-65899c769f-xkfcn     1/1     Running   0          4m    10.200.1.2   worker-1
  untrusted                  1/1     Running   0          10s   10.200.0.3   worker-0

Get the node name where the untrusted pod is running:

  INSTANCE_NAME=$(kubectl get pod untrusted --output=jsonpath='{.spec.nodeName}')

SSH into the worker node:

  gcloud compute ssh ${INSTANCE_NAME}

List the containers running under gVisor:

  sudo runsc --root /run/containerd/runsc/k8s.io list

output

  I0930 19:27:13.255142 20832 x:0] ***************************
  I0930 19:27:13.255326 20832 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io list]
  I0930 19:27:13.255386 20832 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
  I0930 19:27:13.255429 20832 x:0] PID: 20832
  I0930 19:27:13.255472 20832 x:0] UID: 0, GID: 0
  I0930 19:27:13.255591 20832 x:0] Configuration:
  I0930 19:27:13.255654 20832 x:0] RootDir: /run/containerd/runsc/k8s.io
  I0930 19:27:13.255781 20832 x:0] Platform: ptrace
  I0930 19:27:13.255893 20832 x:0] FileAccess: exclusive, overlay: false
  I0930 19:27:13.256004 20832 x:0] Network: sandbox, logging: false
  I0930 19:27:13.256128 20832 x:0] Strace: false, max size: 1024, syscalls: []
  I0930 19:27:13.256238 20832 x:0] ***************************
  ID PID STATUS BUNDLE CREATED OWNER
  79e74d0cec52a1ff4bc2c9b0bb9662f73ea918959c08bca5bcf07ddb6cb0e1fd 20449 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/79e74d0cec52a1ff4bc2c9b0bb9662f73ea918959c08bca5bcf07ddb6cb0e1fd 0001-01-01T00:00:00Z
  af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5 20510 running /run/containerd/io.containerd.runtime.v1.linux/k8s.io/af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5 0001-01-01T00:00:00Z
  I0930 19:27:13.259733 20832 x:0] Exiting with status: 0

Get the ID of the untrusted pod:

  POD_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
    pods --name untrusted -q)

Get the ID of the webserver container running in the untrusted pod:

  CONTAINER_ID=$(sudo crictl -r unix:///var/run/containerd/containerd.sock \
    ps -p ${POD_ID} -q)
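
Both variables should now hold non-empty IDs; a quick sanity check before handing them to runsc:

  # Print both IDs; empty output means a lookup above failed.
  echo "POD_ID=${POD_ID} CONTAINER_ID=${CONTAINER_ID}"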

Use the gVisor runsc command to display the processes running inside the webserver container:

  sudo runsc --root /run/containerd/runsc/k8s.io ps ${CONTAINER_ID}

output

  I0930 19:31:31.419765 21217 x:0] ***************************
  I0930 19:31:31.419907 21217 x:0] Args: [runsc --root /run/containerd/runsc/k8s.io ps af7470029008a4520b5db9fb5b358c65d64c9f748fae050afb6eaf014a59fea5]
  I0930 19:31:31.419959 21217 x:0] Git Revision: 50c283b9f56bb7200938d9e207355f05f79f0d17
  I0930 19:31:31.420000 21217 x:0] PID: 21217
  I0930 19:31:31.420041 21217 x:0] UID: 0, GID: 0
  I0930 19:31:31.420081 21217 x:0] Configuration:
  I0930 19:31:31.420115 21217 x:0] RootDir: /run/containerd/runsc/k8s.io
  I0930 19:31:31.420188 21217 x:0] Platform: ptrace
  I0930 19:31:31.420266 21217 x:0] FileAccess: exclusive, overlay: false
  I0930 19:31:31.420424 21217 x:0] Network: sandbox, logging: false
  I0930 19:31:31.420515 21217 x:0] Strace: false, max size: 1024, syscalls: []
  I0930 19:31:31.420676 21217 x:0] ***************************
  UID       PID       PPID      C         STIME     TIME      CMD
  0         1         0         0         19:26     10ms      app
  I0930 19:31:31.422022 21217 x:0] Exiting with status: 0

Next: Cleaning Up