Smoke Test

In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.

Data Encryption

In this section you will verify the ability to encrypt secret data at rest.

Create a generic secret:

  kubectl create secret generic kubernetes-the-hard-way \
    --from-literal="mykey=mydata"

Print a hexdump of the kubernetes-the-hard-way secret stored in etcd:

  gcloud compute ssh controller-0 \
    --command "ETCDCTL_API=3 etcdctl get /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"

output

  00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
  00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
  00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
  00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
  00000040 3a 76 31 3a 6b 65 79 31 3a 70 88 d8 52 83 b7 96 |:v1:key1:p..R...|
  00000050 04 a3 bd 7e 42 9e 8a 77 2f 97 24 a7 68 3f c5 ec |...~B..w/.$.h?..|
  00000060 9e f7 66 e8 a3 81 fc c8 3c df 63 71 33 0a 87 8f |..f.....<.cq3...|
  00000070 0e c7 0a 0a f2 04 46 85 33 92 9a 4b 61 b2 10 c0 |......F.3..Ka...|
  00000080 0b 00 05 dd c3 c2 d0 6b ff ff f2 32 3b e0 ec a0 |.......k...2;...|
  00000090 63 d3 8b 1c 29 84 88 71 a7 88 e2 26 4b 65 95 14 |c...)..q...&Ke..|
  000000a0 dc 8d 59 63 11 e5 f3 4e b4 94 cc 3d 75 52 c7 07 |..Yc...N...=uR..|
  000000b0 73 f5 b4 b0 63 aa f9 9d 29 f8 d6 88 aa 33 c4 24 |s...c...)....3.$|
  000000c0 ac c6 71 2b 45 98 9e 5f c6 a4 9d a2 26 3c 24 41 |..q+E.._....&<$A|
  000000d0 95 5b d3 2c 4b 1e 4a 47 c8 47 c8 f3 ac d6 e8 cb |.[.,K.JG.G......|
  000000e0 5f a9 09 93 91 d7 5d c9 c2 68 f8 cf 3c 7e 3b a3 |_.....]..h..<~;.|
  000000f0 db d8 d5 9e 0c bf 2a 2f 58 0a |......*/X.|
  000000fa

The etcd key should be prefixed with k8s:enc:aescbc:v1:key1, which indicates the aescbc provider was used to encrypt the data with the key1 encryption key.
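
As an optional extra check, you can read the secret back through the API server and decode it to confirm that encrypted data still decrypts transparently; this uses standard kubectl jsonpath output and should print mydata:

  # optional: read the secret back via the API and decode it; expect "mydata"
  kubectl get secret kubernetes-the-hard-way \
    --output=jsonpath='{.data.mykey}' | base64 --decode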

Deployments

In this section you will verify the ability to create and manage Deployments.

Create a deployment for the nginx web server:

  kubectl run nginx --image=nginx

List the pod created by the nginx deployment:

  kubectl get pods -l run=nginx

output

  NAME                     READY     STATUS    RESTARTS   AGE
  nginx-4217019353-b5gzn   1/1       Running   0          15s
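
As an optional extra, you can also confirm that the deployment object itself reports a successful rollout. This assumes kubectl run created a Deployment named nginx, as it did with the kubectl release used in this lab:

  # optional: wait for the deployment to report all replicas available
  kubectl rollout status deployment/nginx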

Port Forwarding

In this section you will verify the ability to access applications remotely using port forwarding.

Retrieve the full name of the nginx pod:

  POD_NAME=$(kubectl get pods -l run=nginx -o jsonpath="{.items[0].metadata.name}")
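
To confirm the jsonpath query resolved to a pod, you can echo the variable; it should print the same pod name shown in the previous section:

  # should match the pod listed by kubectl get pods
  echo $POD_NAME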

Forward port 8080 on your local machine to port 80 of the nginx pod:

  kubectl port-forward $POD_NAME 8080:80

output

  Forwarding from 127.0.0.1:8080 -> 80
  Forwarding from [::1]:8080 -> 80

In a new terminal make an HTTP request using the forwarding address:

  curl --head http://127.0.0.1:8080

output

  HTTP/1.1 200 OK
  Server: nginx/1.13.5
  Date: Mon, 02 Oct 2017 01:04:20 GMT
  Content-Type: text/html
  Content-Length: 612
  Last-Modified: Tue, 08 Aug 2017 15:25:00 GMT
  Connection: keep-alive
  ETag: "5989d7cc-264"
  Accept-Ranges: bytes

Switch back to the previous terminal and stop the port forwarding to the nginx pod:

  Forwarding from 127.0.0.1:8080 -> 80
  Forwarding from [::1]:8080 -> 80
  Handling connection for 8080
  ^C
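
If you prefer to run this check from a single terminal, one option is to background the port-forward and clean it up afterwards. This is a convenience sketch, not part of the lab:

  # run the port-forward in the background and remember its PID
  kubectl port-forward $POD_NAME 8080:80 &
  PF_PID=$!
  sleep 2                             # give the tunnel a moment to establish
  curl --head http://127.0.0.1:8080
  kill $PF_PID                        # stop the port forwarding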

Logs

In this section you will verify the ability to retrieve container logs.

Print the nginx pod logs:

  kubectl logs $POD_NAME

output

  127.0.0.1 - - [02/Oct/2017:01:04:20 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.54.0" "-"
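
kubectl logs can also follow the log stream, which is handy for watching requests arrive in real time; press Ctrl+C to stop:

  # show the last 10 lines, then stream new log entries as they appear
  kubectl logs --tail=10 -f $POD_NAME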

Exec

In this section you will verify the ability to execute commands in a container.

Print the nginx version by executing the nginx -v command in the nginx container:

  kubectl exec -ti $POD_NAME -- nginx -v

output

  nginx version: nginx/1.13.5
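
For ad-hoc inspection you can also open an interactive shell in the container. The official nginx image ships a POSIX shell, so this should work, but it is an optional extra:

  # open a shell inside the nginx container; type exit to leave
  kubectl exec -ti $POD_NAME -- sh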

Services

In this section you will verify the ability to expose applications using a Service.

Expose the nginx deployment using a NodePort service:

  kubectl expose deployment nginx --port 80 --type NodePort

The LoadBalancer service type cannot be used because your cluster is not configured with cloud provider integration. Setting up cloud provider integration is out of scope for this tutorial.

Retrieve the node port assigned to the nginx service:

  NODE_PORT=$(kubectl get svc nginx \
    --output=jsonpath='{.spec.ports[0].nodePort}')
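
To confirm a port was allocated, echo the variable; by default Kubernetes assigns node ports from the 30000-32767 range:

  # should print a value between 30000 and 32767
  echo $NODE_PORT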

Create a firewall rule that allows remote access to the nginx node port:

  gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
    --allow=tcp:${NODE_PORT} \
    --network kubernetes-the-hard-way
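
If you want to verify the rule before testing, an optional check is to list the firewall rules attached to the kubernetes-the-hard-way network:

  # optional: the new allow-nginx-service rule should appear in this list
  gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"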

Retrieve the external IP address of a worker instance:

  EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
    --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')

Make an HTTP request using the external IP address and the nginx node port:

  curl -I http://${EXTERNAL_IP}:${NODE_PORT}

output

  HTTP/1.1 200 OK
  Server: nginx/1.13.5
  Date: Mon, 02 Oct 2017 01:06:11 GMT
  Content-Type: text/html
  Content-Length: 612
  Last-Modified: Tue, 08 Aug 2017 15:25:00 GMT
  Connection: keep-alive
  ETag: "5989d7cc-264"
  Accept-Ranges: bytes
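
Because kube-proxy opens the node port on every node, the same request should succeed against any worker's external IP. A quick optional spot check, assuming your workers are named worker-0 through worker-2 as elsewhere in this tutorial, is to loop over them and print only the HTTP status codes; expect a 200 from each:

  for instance in worker-0 worker-1 worker-2; do
    EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
      --format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
    # print the instance name and the HTTP status code only
    curl -s -o /dev/null -w "${instance}: %{http_code}\n" \
      http://${EXTERNAL_IP}:${NODE_PORT}
  done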

Next: Cleaning Up