Connect a Frontend to a Backend Using Services

This task shows how to create a frontend and a backend microservice. The backend microservice is a hello greeter. The frontend exposes the backend using nginx and a Kubernetes Service object.

Objectives

  • Create and run a sample hello backend microservice using a Deployment object.
  • Use a Service object to send traffic to the backend microservice’s multiple replicas.
  • Create and run an nginx frontend microservice, also using a Deployment object.
  • Configure the frontend microservice to send traffic to the backend microservice.
  • Use a Service object of type=LoadBalancer to expose the frontend microservice outside the cluster.

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the Kubernetes playgrounds available online.

To check the version, enter kubectl version.

This task uses Services with external load balancers, which require a supported environment. If your environment does not support this, you can use a Service of type NodePort instead.
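
For example, the frontend Service that you create later in this task could be exposed with type NodePort instead. This is only a sketch, not one of the manifests used in this task; the nodePort value 30080 is an illustrative choice:

  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: frontend
  spec:
    selector:
      app: hello
      tier: frontend
    ports:
      - protocol: "TCP"
        port: 80
        targetPort: 80
        nodePort: 30080  # illustrative; must fall inside the cluster's NodePort range (30000-32767 by default)
    type: NodePort
  ...

With a NodePort Service, you reach the frontend at any node's IP address on that port instead of through a cloud load balancer.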

Creating the backend using a Deployment

The backend is a simple hello greeter microservice. Here is the configuration file for the backend Deployment:

service/access/backend-deployment.yaml

  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: backend
  spec:
    selector:
      matchLabels:
        app: hello
        tier: backend
        track: stable
    replicas: 3
    template:
      metadata:
        labels:
          app: hello
          tier: backend
          track: stable
      spec:
        containers:
          - name: hello
            image: "gcr.io/google-samples/hello-go-gke:1.0"
            ports:
              - name: http
                containerPort: 80
  ...

Create the backend Deployment:

  kubectl apply -f https://k8s.io/examples/service/access/backend-deployment.yaml

View information about the backend Deployment:

  kubectl describe deployment backend

The output is similar to this:

  Name:                   backend
  Namespace:              default
  CreationTimestamp:      Mon, 24 Oct 2016 14:21:02 -0700
  Labels:                 app=hello
                          tier=backend
                          track=stable
  Annotations:            deployment.kubernetes.io/revision=1
  Selector:               app=hello,tier=backend,track=stable
  Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
  StrategyType:           RollingUpdate
  MinReadySeconds:        0
  RollingUpdateStrategy:  1 max unavailable, 1 max surge
  Pod Template:
    Labels:       app=hello
                  tier=backend
                  track=stable
    Containers:
     hello:
      Image:              "gcr.io/google-samples/hello-go-gke:1.0"
      Port:               80/TCP
      Environment:        <none>
      Mounts:             <none>
    Volumes:              <none>
  Conditions:
    Type          Status  Reason
    ----          ------  ------
    Available     True    MinimumReplicasAvailable
    Progressing   True    NewReplicaSetAvailable
  OldReplicaSets:         <none>
  NewReplicaSet:          hello-3621623197 (3/3 replicas created)
  Events:
  ...
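
For a quicker check than the full describe output, you can also list the backend Pods by the same labels that the Deployment's selector uses:

  kubectl get pods -l app=hello,tier=backend,track=stable

You should see three Pods in the Running state.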

Creating the hello Service object

The key to sending requests from a frontend to a backend is the backend Service. A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached. A Service uses selectors to find the Pods that it routes traffic to.

First, explore the Service configuration file:

service/access/backend-service.yaml

  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: hello
  spec:
    selector:
      app: hello
      tier: backend
    ports:
      - protocol: TCP
        port: 80
        targetPort: http
  ...

In the configuration file, you can see that the Service, named hello, routes traffic to Pods that have the labels app: hello and tier: backend.

Create the backend Service:

  kubectl apply -f https://k8s.io/examples/service/access/backend-service.yaml

At this point, you have a backend Deployment running three replicas of your hello application, and you have a Service that can route traffic to them. However, this service is neither available nor resolvable outside the cluster.
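
If you want to confirm that the hello Service is reachable from inside the cluster before you wire up the frontend, one option is to run a temporary Pod and query the Service by its DNS name. This is a sketch, not part of the task's manifests; the Pod name and the busybox image are illustrative choices:

  kubectl run tmp-shell --rm -i --tty --image=busybox:1.36 -- /bin/sh
  # then, inside the temporary Pod:
  wget -qO- http://hello

Because of the --rm flag, the temporary Pod is deleted automatically when you exit the shell.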

Creating the frontend

Now that you have your backend running, you can create a frontend that is accessible outside the cluster and that connects to the backend by proxying requests to it.

The frontend sends requests to the backend worker Pods by using the DNS name given to the backend Service. The DNS name is hello, which is the value of the name field in the examples/service/access/backend-service.yaml configuration file.

The Pods in the frontend Deployment run an nginx image that is configured to proxy requests to the hello backend Service. Here is the nginx configuration file:

service/access/frontend-nginx.conf

  # The identifier Backend is internal to nginx, and used to name this specific upstream
  upstream Backend {
      # hello is the internal DNS name used by the backend Service inside Kubernetes
      server hello;
  }

  server {
      listen 80;

      location / {
          # The following statement will proxy traffic to the upstream named Backend
          proxy_pass http://Backend;
      }
  }

Similar to the backend, the frontend has a Deployment and a Service. An important difference to notice between the backend and frontend Services is that the configuration for the frontend Service has type: LoadBalancer, which means that the Service uses a load balancer provisioned by your cloud provider and will be accessible from outside the cluster.

service/access/frontend-service.yaml

  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: frontend
  spec:
    selector:
      app: hello
      tier: frontend
    ports:
      - protocol: "TCP"
        port: 80
        targetPort: 80
    type: LoadBalancer
  ...

service/access/frontend-deployment.yaml

  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: frontend
  spec:
    selector:
      matchLabels:
        app: hello
        tier: frontend
        track: stable
    replicas: 1
    template:
      metadata:
        labels:
          app: hello
          tier: frontend
          track: stable
      spec:
        containers:
          - name: nginx
            image: "gcr.io/google-samples/hello-frontend:1.0"
            lifecycle:
              preStop:
                exec:
                  command: ["/usr/sbin/nginx", "-s", "quit"]
  ...

Create the frontend Deployment and Service:

  kubectl apply -f https://k8s.io/examples/service/access/frontend-deployment.yaml
  kubectl apply -f https://k8s.io/examples/service/access/frontend-service.yaml

The output verifies that both resources were created:

  deployment.apps/frontend created
  service/frontend created

Note: The nginx configuration is baked into the container image. A better way to do this would be to use a ConfigMap, so that you can change the configuration more easily.
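
As a sketch of that approach, you could put the nginx configuration in a ConfigMap and mount it into the frontend Pods. The ConfigMap name, the mount path, and the plain nginx image below are illustrative assumptions, not part of this task's manifests:

  ---
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: frontend-nginx-conf
  data:
    frontend.conf: |
      upstream Backend {
          server hello;
      }
      server {
          listen 80;
          location / {
              proxy_pass http://Backend;
          }
      }

The frontend Deployment's Pod template would then mount the ConfigMap where nginx looks for extra configuration (for a stock nginx image, /etc/nginx/conf.d):

      spec:
        containers:
          - name: nginx
            image: "nginx:1.25"  # illustrative; a stock nginx image that loads /etc/nginx/conf.d/*.conf
            volumeMounts:
              - name: nginx-conf
                mountPath: /etc/nginx/conf.d
        volumes:
          - name: nginx-conf
            configMap:
              name: frontend-nginx-conf

With this layout, changing the proxy configuration only requires updating the ConfigMap and restarting the frontend Pods, not rebuilding the image.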

Interact with the frontend Service

Once you’ve created a Service of type LoadBalancer, you can use this command to find the external IP:

  kubectl get service frontend --watch

This displays the configuration for the frontend Service and watches for changes. Initially, the external IP is listed as <pending>:

  NAME       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
  frontend   LoadBalancer   10.51.252.116   <pending>     80/TCP    10s

As soon as an external IP is provisioned, however, the configuration updates to include the new IP under the EXTERNAL-IP heading:

  NAME       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)   AGE
  frontend   LoadBalancer   10.51.252.116   XXX.XXX.XXX.XXX   80/TCP    1m

That IP can now be used to interact with the frontend service from outside the cluster.
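
One convenient way to capture that address in a shell variable is the jsonpath output format. This assumes your cloud provider reports an IP address; some providers populate a hostname under .hostname instead:

  EXTERNAL_IP=$(kubectl get service frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo "${EXTERNAL_IP}"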

Send traffic through the frontend

The frontend and backend are now connected. You can hit the endpoint by using the curl command on the external IP of your frontend Service.

  curl http://${EXTERNAL_IP} # replace this with the EXTERNAL-IP you saw earlier

The output shows the message generated by the backend:

  1. {"message":"Hello"}

Cleaning up

To delete the Services, enter this command:

  kubectl delete services frontend hello

To delete the Deployments, the ReplicaSets and the Pods that are running the backend and frontend applications, enter this command:

  kubectl delete deployment frontend backend
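
Alternatively, you can delete everything this task created by pointing kubectl delete at the same manifests you applied earlier:

  kubectl delete -f https://k8s.io/examples/service/access/frontend-service.yaml
  kubectl delete -f https://k8s.io/examples/service/access/frontend-deployment.yaml
  kubectl delete -f https://k8s.io/examples/service/access/backend-service.yaml
  kubectl delete -f https://k8s.io/examples/service/access/backend-deployment.yaml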

What’s next