Connecting Applications with Services

The Kubernetes model for connecting containers

Now that you have a continuously running, replicated application you can expose it on a network.

Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Kubernetes gives every pod its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. This means that containers within a Pod can all reach each other’s ports on localhost, and all pods in a cluster can see each other without NAT. The rest of this document elaborates on how you can run reliable services on such a networking model.

This tutorial uses a simple nginx web server to demonstrate the concept.

Exposing pods to the cluster

We did this in a previous example, but let’s do it once again and focus on the networking perspective. Create an nginx Pod, and note that it has a container port specification:

service/networking/run-my-nginx.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
```

This makes it accessible from any node in your cluster. Check the nodes the Pod is running on:

```shell
kubectl apply -f ./run-my-nginx.yaml
kubectl get pods -l run=my-nginx -o wide
```

```
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE
my-nginx-3800858182-jr4a2   1/1     Running   0          13s   10.244.3.4   kubernetes-minion-905m
my-nginx-3800858182-kna2y   1/1     Running   0          13s   10.244.2.5   kubernetes-minion-ljyd
```

Check your pods’ IPs:

```shell
kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
```

```
POD_IP
[map[ip:10.244.3.4]]
[map[ip:10.244.2.5]]
```

You should be able to ssh into any node in your cluster and use a tool such as curl to make queries against both IPs. Note that the containers are not using port 80 on the node, nor are there any special NAT rules to route traffic to the pod. This means you can run multiple nginx pods on the same node, all using the same containerPort, and access them from any other pod or node in your cluster using the Pod's assigned IP address. If you want to arrange for a specific port on the host Node to be forwarded to backing Pods, you can, but the networking model should mean that you do not need to do so.

You can read more about the Kubernetes Networking Model if you’re curious.

Creating a Service

So we have pods running nginx in a flat, cluster-wide address space. In theory, you could talk to these pods directly, but what happens when a node dies? The pods die with it, and the Deployment will create new ones, with different IPs. This is the problem a Service solves.

A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster, that all provide the same functionality. When created, each Service is assigned a unique IP address (also called clusterIP). This address is tied to the lifespan of the Service, and will not change while the Service is alive. Pods can be configured to talk to the Service, and know that communication to the Service will be automatically load-balanced out to some pod that is a member of the Service.

You can create a Service for your 2 nginx replicas with kubectl expose:

```shell
kubectl expose deployment/my-nginx
```

```
service/my-nginx exposed
```

This is equivalent to running kubectl apply -f with the following yaml:

service/networking/nginx-svc.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
```
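In this manifest targetPort is omitted, so it defaults to the value of port. Written out explicitly (a sketch with identical behavior), the ports section would be:

```yaml
ports:
- port: 80        # the abstracted Service port other pods use
  targetPort: 80  # the port the container accepts traffic on (defaults to port)
  protocol: TCP
```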

This specification will create a Service which targets TCP port 80 on any Pod with the run: my-nginx label, and expose it on an abstracted Service port (targetPort is the port the container accepts traffic on; port is the abstracted Service port, which can be any port other pods use to access the Service). See the Service API object for the list of supported fields in a service definition. Check your Service:

```shell
kubectl get svc my-nginx
```

```
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
my-nginx   ClusterIP   10.0.162.149   <none>        80/TCP    21s
```

As mentioned previously, a Service is backed by a group of Pods. These Pods are exposed through EndpointSlices. The Service's selector will be evaluated continuously and the results will be POSTed to an EndpointSlice that is connected to the Service using labels. When a Pod dies, it is automatically removed from the EndpointSlices that contain it as an endpoint. New Pods that match the Service's selector will automatically get added to an EndpointSlice for that Service. Check the endpoints, and note that the IPs are the same as the Pods created in the first step:

```shell
kubectl describe svc my-nginx
```

```
Name:                my-nginx
Namespace:           default
Labels:              run=my-nginx
Annotations:         <none>
Selector:            run=my-nginx
Type:                ClusterIP
IP Family Policy:    SingleStack
IP Families:         IPv4
IP:                  10.0.162.149
IPs:                 10.0.162.149
Port:                <unset>  80/TCP
TargetPort:          80/TCP
Endpoints:           10.244.2.5:80,10.244.3.4:80
Session Affinity:    None
Events:              <none>
```

```shell
kubectl get endpointslices -l kubernetes.io/service-name=my-nginx
```

```
NAME             ADDRESSTYPE   PORTS   ENDPOINTS               AGE
my-nginx-7vzhx   IPv4          80      10.244.2.5,10.244.3.4   21s
```
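The selector-to-endpoints relationship just demonstrated can be modeled in a few lines of Python. This is an illustrative sketch of the reconciliation logic, not Kubernetes code:

```python
def select_endpoints(pods, selector, port=80):
    """Toy model of EndpointSlice reconciliation: return (ip, port) pairs
    for every running pod whose labels satisfy the Service's selector."""
    return sorted(
        (pod["ip"], port)
        for pod in pods
        if pod["phase"] == "Running"
        and all(pod["labels"].get(k) == v for k, v in selector.items())
    )

pods = [
    {"ip": "10.244.3.4", "labels": {"run": "my-nginx"}, "phase": "Running"},
    {"ip": "10.244.2.5", "labels": {"run": "my-nginx"}, "phase": "Running"},
    {"ip": "10.244.1.9", "labels": {"run": "other"}, "phase": "Running"},
]
print(select_endpoints(pods, {"run": "my-nginx"}))
# [('10.244.2.5', 80), ('10.244.3.4', 80)]

# When a pod dies, it simply drops out of the recomputed endpoint list.
pods[0]["phase"] = "Failed"
print(select_endpoints(pods, {"run": "my-nginx"}))
# [('10.244.2.5', 80)]
```

The real control plane performs this evaluation continuously and writes the result into EndpointSlice objects, which the service proxies consume.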

You should now be able to curl the nginx Service on <CLUSTER-IP>:<PORT> from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the service proxy.

Accessing the Service

Kubernetes supports two primary modes of finding a Service: environment variables and DNS. The former works out of the box while the latter requires the CoreDNS cluster addon.

Note: If the service environment variables are not desired (because of possible clashes with expected program variables, too many variables to process, only using DNS, etc.), you can disable this mode by setting the enableServiceLinks flag to false on the pod spec.
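For example, a Pod spec that opts out of these variables might look like the following sketch (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # illustrative name
spec:
  enableServiceLinks: false   # do not inject *_SERVICE_* variables for Services
  containers:
  - name: my-app
    image: nginx
```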

Environment Variables

When a Pod runs on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx Pods (your Pod name will be different):

```shell
kubectl exec my-nginx-3800858182-jr4a2 -- printenv | grep SERVICE
```

```
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
```

Note there’s no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both Pods on the same machine, which will take your entire Service down if it dies. We can do this the right way by killing the 2 Pods and waiting for the Deployment to recreate them. This time around the Service exists before the replicas. This will give you scheduler-level Service spreading of your Pods (provided all your nodes have equal capacity), as well as the right environment variables:

```shell
kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2;
kubectl get pods -l run=my-nginx -o wide
```

```
NAME                        READY   STATUS    RESTARTS   AGE   IP           NODE
my-nginx-3800858182-e9ihh   1/1     Running   0          5s    10.244.2.7   kubernetes-minion-ljyd
my-nginx-3800858182-j4rm4   1/1     Running   0          5s    10.244.3.8   kubernetes-minion-905m
```

You may notice that the pods have different names, since they are killed and recreated.

```shell
kubectl exec my-nginx-3800858182-e9ihh -- printenv | grep SERVICE
```

```
KUBERNETES_SERVICE_PORT=443
MY_NGINX_SERVICE_HOST=10.0.162.149
KUBERNETES_SERVICE_HOST=10.0.0.1
MY_NGINX_SERVICE_PORT=80
KUBERNETES_SERVICE_PORT_HTTPS=443
```
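The variable names above follow a predictable convention: the Service name is uppercased, dashes become underscores, and _SERVICE_HOST / _SERVICE_PORT are appended. A small Python sketch of that convention (illustrative only; the kubelet also emits additional per-port variables not modeled here):

```python
def service_env_vars(name, cluster_ip, port):
    """Sketch of the {SVCNAME}_SERVICE_HOST / {SVCNAME}_SERVICE_PORT
    naming convention the kubelet uses for each active Service."""
    prefix = name.upper().replace("-", "_")
    return {
        f"{prefix}_SERVICE_HOST": cluster_ip,
        f"{prefix}_SERVICE_PORT": str(port),
    }

print(service_env_vars("my-nginx", "10.0.162.149", 80))
# {'MY_NGINX_SERVICE_HOST': '10.0.162.149', 'MY_NGINX_SERVICE_PORT': '80'}
```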

DNS

Kubernetes offers a DNS cluster addon Service that automatically assigns DNS names to other Services. You can check if it's running on your cluster:

```shell
kubectl get services kube-dns --namespace=kube-system
```

```
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
kube-dns   ClusterIP   10.0.0.10    <none>        53/UDP,53/TCP   8m
```

The rest of this section will assume you have a Service with a long-lived IP (my-nginx), and a DNS server that has assigned a name to that IP. Here we use the CoreDNS cluster addon (application name kube-dns), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname()). If CoreDNS isn't running, you can enable it by referring to the CoreDNS README or Installing CoreDNS. Let's run another curl application to test this:

```shell
kubectl run curl --image=radial/busyboxplus:curl -i --tty --rm
```

```
Waiting for pod default/curl-131556218-9fnch to be running, status is Pending, pod ready: false
Hit enter for command prompt
```

Then, hit enter and run nslookup my-nginx:

```shell
[ root@curl-131556218-9fnch:/ ]$ nslookup my-nginx
Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      my-nginx
Address 1: 10.0.162.149
```
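The bare name my-nginx resolves here because the pod's DNS search path includes its own namespace; from another namespace you would use the fully qualified form, which follows a fixed pattern. A Python sketch of that pattern (cluster.local is the default cluster domain and may differ on your cluster):

```python
def service_fqdn(service, namespace="default", cluster_domain="cluster.local"):
    """Sketch of the cluster-DNS name for a Service. From inside the same
    namespace the bare service name resolves; the FQDN always works."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_fqdn("my-nginx"))
# my-nginx.default.svc.cluster.local
print(service_fqdn("kube-dns", "kube-system"))
# kube-dns.kube-system.svc.cluster.local
```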

Securing the Service

So far we have only accessed the nginx server from within the cluster. Before exposing the Service to the internet, you want to make sure the communication channel is secure. For this, you will need:

  • Self-signed certificates for https (unless you already have an identity certificate)
  • An nginx server configured to use the certificates
  • A secret that makes the certificates accessible to pods

You can acquire all these from the nginx https example. This requires having go and make tools installed. If you don’t want to install those, then follow the manual steps later. In short:

```shell
make keys KEY=/tmp/nginx.key CERT=/tmp/nginx.crt
kubectl create secret tls nginxsecret --key /tmp/nginx.key --cert /tmp/nginx.crt
```

```
secret/nginxsecret created
```

```shell
kubectl get secrets
```

```
NAME          TYPE                DATA   AGE
nginxsecret   kubernetes.io/tls   2      1m
```

And also the configmap:

```shell
kubectl create configmap nginxconfigmap --from-file=default.conf
```

```
configmap/nginxconfigmap created
```

```shell
kubectl get configmaps
```

```
NAME             DATA   AGE
nginxconfigmap   1      114s
```

Following are the manual steps to follow in case you run into problems running make (on Windows, for example):

```shell
# Create a public/private key pair
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /d/tmp/nginx.key -out /d/tmp/nginx.crt -subj "/CN=my-nginx/O=my-nginx"
# Convert the keys to base64 encoding
cat /d/tmp/nginx.crt | base64
cat /d/tmp/nginx.key | base64
```

Use the output from the previous commands to create a yaml file as follows. The base64 encoded value should all be on a single line.

```yaml
apiVersion: "v1"
kind: "Secret"
metadata:
  name: "nginxsecret"
  namespace: "default"
type: kubernetes.io/tls
data:
  tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURIekNDQWdlZ0F3SUJBZ0lKQUp5M3lQK0pzMlpJTUEwR0NTcUdTSWIzRFFFQkJRVUFNQ1l4RVRBUEJnTlYKQkFNVENHNW5hVzU0YzNaak1SRXdEd1lEVlFRS0V3aHVaMmx1ZUhOMll6QWVGdzB4TnpFd01qWXdOekEzTVRKYQpGdzB4T0RFd01qWXdOekEzTVRKYU1DWXhFVEFQQmdOVkJBTVRDRzVuYVc1NGMzWmpNUkV3RHdZRFZRUUtFd2h1CloybHVlSE4yWXpDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBSjFxSU1SOVdWM0IKMlZIQlRMRmtobDRONXljMEJxYUhIQktMSnJMcy8vdzZhU3hRS29GbHlJSU94NGUrMlN5ajBFcndCLzlYTnBwbQppeW1CL3JkRldkOXg5UWhBQUxCZkVaTmNiV3NsTVFVcnhBZW50VWt1dk1vLzgvMHRpbGhjc3paenJEYVJ4NEo5Ci82UVRtVVI3a0ZTWUpOWTVQZkR3cGc3dlVvaDZmZ1Voam92VG42eHNVR0M2QURVODBpNXFlZWhNeVI1N2lmU2YKNHZpaXdIY3hnL3lZR1JBRS9mRTRqakxCdmdONjc2SU90S01rZXV3R0ljNDFhd05tNnNTSzRqYUNGeGpYSnZaZQp2by9kTlEybHhHWCtKT2l3SEhXbXNhdGp4WTRaNVk3R1ZoK0QrWnYvcW1mMFgvbVY0Rmo1NzV3ajFMWVBocWtsCmdhSXZYRyt4U1FVQ0F3RUFBYU5RTUU0d0hRWURWUjBPQkJZRUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjcKTUI4R0ExVWRJd1FZTUJhQUZPNG9OWkI3YXc1OUlsYkROMzhIYkduYnhFVjdNQXdHQTFVZEV3UUZNQU1CQWY4dwpEUVlKS29aSWh2Y05BUUVGQlFBRGdnRUJBRVhTMW9FU0lFaXdyMDhWcVA0K2NwTHI3TW5FMTducDBvMm14alFvCjRGb0RvRjdRZnZqeE04Tzd2TjB0clcxb2pGSW0vWDE4ZnZaL3k4ZzVaWG40Vm8zc3hKVmRBcStNZC9jTStzUGEKNmJjTkNUekZqeFpUV0UrKzE5NS9zb2dmOUZ3VDVDK3U2Q3B5N0M3MTZvUXRUakViV05VdEt4cXI0Nk1OZWNCMApwRFhWZmdWQTRadkR4NFo3S2RiZDY5eXM3OVFHYmg5ZW1PZ05NZFlsSUswSGt0ejF5WU4vbVpmK3FqTkJqbWZjCkNnMnlwbGQ0Wi8rUUNQZjl3SkoybFIrY2FnT0R4elBWcGxNSEcybzgvTHFDdnh6elZPUDUxeXdLZEtxaUMwSVEKQ0I5T2wwWW5scE9UNEh1b2hSUzBPOStlMm9KdFZsNUIyczRpbDlhZ3RTVXFxUlU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
  tls.key: "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRQ2RhaURFZlZsZHdkbFIKd1V5eFpJWmVEZWNuTkFhbWh4d1NpeWF5N1AvOE9ta3NVQ3FCWmNpQ0RzZUh2dGtzbzlCSzhBZi9WemFhWm9zcApnZjYzUlZuZmNmVUlRQUN3WHhHVFhHMXJKVEVGSzhRSHA3VkpMcnpLUC9QOUxZcFlYTE0yYzZ3MmtjZUNmZitrCkU1bEVlNUJVbUNUV09UM3c4S1lPNzFLSWVuNEZJWTZMMDUrc2JGQmd1Z0ExUE5JdWFubm9UTWtlZTRuMG4rTDQKb3NCM01ZUDhtQmtRQlAzeE9JNHl3YjREZXUraURyU2pKSHJzQmlIT05Xc0RadXJFaXVJMmdoY1kxeWIyWHI2UAozVFVOcGNSbC9pVG9zQngxcHJHclk4V09HZVdPeGxZZmcvbWIvNnBuOUYvNWxlQlkrZStjSTlTMkQ0YXBKWUdpCkwxeHZzVWtGQWdNQkFBRUNnZ0VBZFhCK0xkbk8ySElOTGo5bWRsb25IUGlHWWVzZ294RGQwci9hQ1Zkank4dlEKTjIwL3FQWkUxek1yall6Ry9kVGhTMmMwc0QxaTBXSjdwR1lGb0xtdXlWTjltY0FXUTM5SjM0VHZaU2FFSWZWNgo5TE1jUHhNTmFsNjRLMFRVbUFQZytGam9QSFlhUUxLOERLOUtnNXNrSE5pOWNzMlY5ckd6VWlVZWtBL0RBUlBTClI3L2ZjUFBacDRuRWVBZmI3WTk1R1llb1p5V21SU3VKdlNyblBESGtUdW1vVlVWdkxMRHRzaG9reUxiTWVtN3oKMmJzVmpwSW1GTHJqbGtmQXlpNHg0WjJrV3YyMFRrdWtsZU1jaVlMbjk4QWxiRi9DSmRLM3QraTRoMTVlR2ZQegpoTnh3bk9QdlVTaDR2Q0o3c2Q5TmtEUGJvS2JneVVHOXBYamZhRGR2UVFLQmdRRFFLM01nUkhkQ1pKNVFqZWFKClFGdXF4cHdnNzhZTjQyL1NwenlUYmtGcVFoQWtyczJxWGx1MDZBRzhrZzIzQkswaHkzaE9zSGgxcXRVK3NHZVAKOWRERHBsUWV0ODZsY2FlR3hoc0V0L1R6cEdtNGFKSm5oNzVVaTVGZk9QTDhPTm1FZ3MxMVRhUldhNzZxelRyMgphRlpjQ2pWV1g0YnRSTHVwSkgrMjZnY0FhUUtCZ1FEQmxVSUUzTnNVOFBBZEYvL25sQVB5VWs1T3lDdWc3dmVyClUycXlrdXFzYnBkSi9hODViT1JhM05IVmpVM25uRGpHVHBWaE9JeXg5TEFrc2RwZEFjVmxvcG9HODhXYk9lMTAKMUdqbnkySmdDK3JVWUZiRGtpUGx1K09IYnRnOXFYcGJMSHBzUVpsMGhucDBYSFNYVm9CMUliQndnMGEyOFVadApCbFBtWmc2d1BRS0JnRHVIUVV2SDZHYTNDVUsxNFdmOFhIcFFnMU16M2VvWTBPQm5iSDRvZUZKZmcraEppSXlnCm9RN3hqWldVR3BIc3AyblRtcHErQWlSNzdyRVhsdlhtOElVU2FsbkNiRGlKY01Pc29RdFBZNS9NczJMRm5LQTQKaENmL0pWb2FtZm1nZEN0ZGtFMXNINE9MR2lJVHdEbTRpb0dWZGIwMllnbzFyb2htNUpLMUI3MkpBb0dBUW01UQpHNDhXOTVhL0w1eSt5dCsyZ3YvUHM2VnBvMjZlTzRNQ3lJazJVem9ZWE9IYnNkODJkaC8xT2sybGdHZlI2K3VuCnc1YytZUXRSTHlhQmd3MUtpbGhFZDBKTWU3cGpUSVpnQWJ0LzVPbnlDak9OVXN2aDJjS2lrQ1Z2dTZsZlBjNkQKckliT2ZIaHhxV0RZK2Q1TGN1YSt2NzJ0RkxhenJsSlBsRzlOZHhrQ2dZRUF5elIzT3UyMDNRVVV6bUlCRkwzZAp4Wm5XZ0JLSEo3TnNxcGFWb2RjL0d5aGVycjFDZzE2MmJaSjJDV2RsZkI0VEdtUjZZdmxTZEFOOFRwUWhFbUtKCnFBLzVzdHdxNWd0WGVLOVJmMWxXK29xNThRNTBxMmk1NVdUTThoSDZhTjlaMTltZ0FGdE5VdGNqQUx2dFYxdEYKWSs4WFJkSHJaRnBIWll2NWkwVW1VbGc9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K"
```
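Note that the data values in this Secret are standard base64 with no line wrapping. Some base64 tools wrap their output at 76 columns, which would break the single-line requirement; encoding programmatically avoids that. A Python sketch using placeholder bytes rather than a real key:

```python
import base64

def secret_value(raw: bytes) -> str:
    """Encode bytes for a Secret's data field: standard base64,
    guaranteed to contain no newlines (unlike some CLI encoders)."""
    return base64.b64encode(raw).decode("ascii")

# Placeholder PEM bytes for illustration only, not a real certificate.
pem = b"-----BEGIN CERTIFICATE-----\n...demo bytes...\n-----END CERTIFICATE-----\n"
encoded = secret_value(pem)
assert "\n" not in encoded   # safe to paste as a single YAML line
print(encoded[:24], "...")
```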

Now create the secrets using the file:

```shell
kubectl apply -f nginxsecrets.yaml
kubectl get secrets
```

```
NAME          TYPE                DATA   AGE
nginxsecret   kubernetes.io/tls   2      1m
```

Now modify your nginx replicas to start an https server using the certificate in the secret, and the Service, to expose both ports (80 and 443):

service/networking/nginx-secure-app.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  - port: 443
    protocol: TCP
    name: https
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      - name: configmap-volume
        configMap:
          name: nginxconfigmap
      containers:
      - name: nginxhttps
        image: bprashanth/nginxhttps:1.0
        ports:
        - containerPort: 443
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
        - mountPath: /etc/nginx/conf.d
          name: configmap-volume
```

Noteworthy points about the nginx-secure-app manifest:

  • It contains both Deployment and Service specification in the same file.
  • The nginx server serves HTTP traffic on port 80 and HTTPS traffic on 443, and the nginx Service exposes both ports.
  • Each container has access to the keys through a volume mounted at /etc/nginx/ssl. This is set up before the nginx server is started.
```shell
kubectl delete deployments,svc my-nginx; kubectl create -f ./nginx-secure-app.yaml
```

At this point you can reach the nginx server from any node.

```shell
kubectl get pods -l run=my-nginx -o custom-columns=POD_IP:.status.podIPs
```

```
POD_IP
[map[ip:10.244.3.5]]
```

```shell
node $ curl -k https://10.244.3.5
...
<h1>Welcome to nginx!</h1>
```

Note how we supplied the -k parameter to curl in the last step; this is because we don't know anything about the pods running nginx at certificate generation time, so we have to tell curl to ignore the CName mismatch. By creating a Service, we linked the CName used in the certificate with the actual DNS name used by pods during Service lookup. Let's test this from a pod (the same secret is being reused for simplicity; the pod only needs nginx.crt to access the Service):

service/networking/curlpod.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: curl-deployment
spec:
  selector:
    matchLabels:
      app: curlpod
  replicas: 1
  template:
    metadata:
      labels:
        app: curlpod
    spec:
      volumes:
      - name: secret-volume
        secret:
          secretName: nginxsecret
      containers:
      - name: curlpod
        command:
        - sh
        - -c
        - while true; do sleep 1; done
        image: radial/busyboxplus:curl
        volumeMounts:
        - mountPath: /etc/nginx/ssl
          name: secret-volume
```
```shell
kubectl apply -f ./curlpod.yaml
kubectl get pods -l app=curlpod
```

```
NAME                               READY   STATUS    RESTARTS   AGE
curl-deployment-1515033274-1410r   1/1     Running   0          1m
```

```shell
kubectl exec curl-deployment-1515033274-1410r -- curl https://my-nginx --cacert /etc/nginx/ssl/tls.crt
```

```
...
<title>Welcome to nginx!</title>
...
```

Exposing the Service

For some parts of your applications you may want to expose a Service onto an external IP address. Kubernetes supports two ways of doing this: NodePorts and LoadBalancers. The Service created in the last section already used NodePort, so your nginx HTTPS replica is ready to serve traffic on the internet if your node has a public IP.

```shell
kubectl get svc my-nginx -o yaml | grep nodePort -C 5
```

```
uid: 07191fb3-f61a-11e5-8ae5-42010af00002
spec:
  clusterIP: 10.0.162.149
  ports:
  - name: http
    nodePort: 31704
    port: 8080
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 32453
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    run: my-nginx
```
```shell
kubectl get nodes -o yaml | grep ExternalIP -C 1
```

```
- address: 104.197.41.11
  type: ExternalIP
allocatable:
--
- address: 23.251.152.56
  type: ExternalIP
allocatable:
...
```

```shell
curl https://<EXTERNAL-IP>:<NODE-PORT> -k
```

```
...
<h1>Welcome to nginx!</h1>
```
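Conceptually, a NodePort Service wires three ports together: the nodePort open on every node, the Service port, and the targetPort on the pod. The following Python sketch models that mapping using the ports from the output above; it is an illustrative toy, not how kube-proxy is implemented:

```python
def route_nodeport(node_port, service, backends):
    """Toy model: a request to <any-node>:<nodePort> is forwarded to a
    backing pod at that port's targetPort."""
    for p in service["ports"]:
        if p["nodePort"] == node_port:
            ip = backends[0]  # real proxies load-balance; we just pick the first
            return (ip, p["targetPort"])
    raise ValueError("no Service port listening on this nodePort")

svc = {"ports": [
    {"name": "http",  "port": 8080, "nodePort": 31704, "targetPort": 80},
    {"name": "https", "port": 443,  "nodePort": 32453, "targetPort": 443},
]}
print(route_nodeport(32453, svc, ["10.244.3.5"]))
# ('10.244.3.5', 443)
```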

Let’s now recreate the Service to use a cloud load balancer. Change the Type of my-nginx Service from NodePort to LoadBalancer:

```shell
kubectl edit svc my-nginx
kubectl get svc my-nginx
```

```
NAME       TYPE           CLUSTER-IP     CLUSTER-IP       PORT(S)          AGE
my-nginx   LoadBalancer   10.0.162.149   xx.xxx.xxx.xxx   8080:30163/TCP   21s
```

```shell
curl https://<EXTERNAL-IP> -k
```

```
...
<title>Welcome to nginx!</title>
```

The IP address in the EXTERNAL-IP column is the one that is available on the public internet. The CLUSTER-IP is only available inside your cluster/private cloud network.

Note that on AWS, type LoadBalancer creates an ELB, which uses a (long) hostname, not an IP. It's too long to fit in the standard kubectl get svc output, so you'll need to run kubectl describe service my-nginx to see it. You'll see something like this:

```shell
kubectl describe service my-nginx
```

```
...
LoadBalancer Ingress:   a320587ffd19711e5a37606cf4a74574-1142138393.us-east-1.elb.amazonaws.com
...
```

What’s next