Bringing traffic to the cluster

There are downsides to running Kubernetes outside of well-integrated platforms such as AWS or GCE. One of them is the lack of external ingress and load balancing solutions. Fortunately, it’s fairly easy to get an NGINX-powered ingress controller running inside the cluster, which enables services to register for public traffic.

Ingress controller setup

Because there’s no load balancer available with most cloud providers, we have to make sure the NGINX server is always running on the same host, accessible via an IP address that doesn’t change. As our master node is pretty much idle at this point, and no ordinary pods will get scheduled on it, we make kube1 our dedicated host for routing public traffic.

We already opened ports 80 and 443 during the initial firewall configuration, so all we have to do now is write a couple of manifests to deploy the NGINX ingress controller on kube1; they live in the ingress/ folder.

One part requires special attention. In order to make sure NGINX runs on kube1—which is a tainted master node and no pods will normally be scheduled on it—we need to specify a toleration:

    # from ingress/deployment.yml
    tolerations:
    - key: node-role.kubernetes.io/master
      operator: Equal
      effect: NoSchedule

Specifying a toleration alone doesn’t ensure that a pod is scheduled on any specific node. For this we need to add a node affinity rule. As we have just a single master node, the following specification is enough to schedule the pod on kube1:

    # from ingress/deployment.yml
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists

Running kubectl apply -f ingress/ will apply all manifests in this folder. First, a namespace called ingress is created, followed by the NGINX deployment, plus a default backend to serve 404 pages for undefined domains and routes, including the necessary service object. There’s no need to define a service object for NGINX itself, because we configure it to use the host network (hostNetwork: true), which means the container is bound to the actual ports on the host rather than to a virtual interface within the pod overlay network.
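For reference, a minimal sketch of the pod spec fields in ingress/deployment.yml that this relies on; the image tag and container details are assumptions, so check the actual manifest:

    # Sketch of the relevant pod spec fields in ingress/deployment.yml (values assumed)
    spec:
      template:
        spec:
          hostNetwork: true  # bind NGINX directly to ports 80/443 on kube1
          containers:
          - name: nginx-ingress-controller
            # image/tag is an assumption; use the one from the actual manifest
            image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
            ports:
            - containerPort: 80
            - containerPort: 443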

Services are now able to make use of the ingress controller and receive public traffic with a simple manifest:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-ingress
      annotations:
        kubernetes.io/ingress.class: "nginx"
    spec:
      rules:
      - host: service.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: example-service-http
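Note that servicePort refers to a named port on the target service. A matching service object might look roughly like this; the selector and target port are assumptions for illustration:

    # Hypothetical service the ingress above could point to
    apiVersion: v1
    kind: Service
    metadata:
      name: example-service
    spec:
      selector:
        app: example
      ports:
      - name: example-service-http  # referenced by servicePort in the ingress
        port: 80
        targetPort: 8080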

The NGINX ingress controller is quite flexible and supports a wide range of configuration options, most of them exposed as annotations on individual ingress resources or as global settings in a ConfigMap.
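For instance, per-ingress behavior can be tweaked with additional annotations. The name below uses the same ingress.kubernetes.io/ prefix as the annotations later in this guide, but the available annotations depend on the controller version, so treat this as an illustration only:

    # Example: raising the upload limit for a single ingress (annotation support varies by version)
    metadata:
      annotations:
        kubernetes.io/ingress.class: "nginx"
        ingress.kubernetes.io/proxy-body-size: "50m"  # allow larger request bodies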

DNS records

Terraform modules: dns/cloudflare, dns/google, dns/aws

At this point we could use a domain name and put some DNS entries into place. To serve web traffic it’s enough to create an A record pointing to the public IP address of kube1 plus a wildcard entry to be able to use subdomains:

Type    Name            Value
A       example.com     <public IP of kube1>
CNAME   *.example.com   example.com

Once the DNS entries are propagated our example service would be accessible at http://service.example.com. If you don’t have a domain name at hand, you can always add an entry to your hosts file instead.
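For local testing without a domain, an /etc/hosts entry pointing at kube1’s public address is enough (the IP below is a placeholder):

    # /etc/hosts (replace the placeholder with kube1's actual public IP)
    203.0.113.10  service.example.com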

Additionally, it might be a good idea to assign a subdomain to each host, e.g. kube1.example.com. It’s way more comfortable to ssh into a host using a domain name instead of an IP address.

Obtaining SSL/TLS certificates

Thanks to Let’s Encrypt and a project called kube-lego it’s incredibly easy to obtain free certificates for any domain name pointing at our Kubernetes cluster. Setting this service up takes no time and it plays well with the NGINX ingress controller we deployed earlier. The related manifests live in the ingress/tls/ folder.

Before deploying kube-lego, make sure to replace the email address in ingress/tls/configmap.yml with your own.
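A rough sketch of what that ConfigMap contains, based on kube-lego’s documented settings; the object name, namespace and exact keys are assumptions, so double-check the file in the repository:

    # Sketch of ingress/tls/configmap.yml (name, namespace and keys assumed)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-lego
      namespace: kube-lego
    data:
      lego.email: "your@email.address"                           # replace with your own
      lego.url: "https://acme-v01.api.letsencrypt.org/directory" # Let's Encrypt production endpoint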

To enable certificates for a service, the ingress manifest needs to be slightly extended:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: example-ingress
      annotations:
        kubernetes.io/tls-acme: "true" # enable certificates
        kubernetes.io/ingress.class: "nginx"
    spec:
      tls: # specify domains to fetch certificates for
      - hosts:
        - service.example.com
        secretName: example-service-tls
      rules:
      - host: service.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: example-service
              servicePort: example-service-http

After applying this manifest, kube-lego will try to obtain a certificate for service.example.com and reload the NGINX configuration to enable TLS. Make sure to check the logs of the kube-lego pod if something goes wrong.
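Assuming kube-lego was deployed into its own namespace with an app=kube-lego label (adjust both to match the manifests), the logs can be inspected like this:

    # Locate the kube-lego pod and inspect its logs (namespace and label are assumptions)
    kubectl get pods --all-namespaces -l app=kube-lego
    kubectl logs -n kube-lego -l app=kube-lego --tail=50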

NGINX will automatically redirect clients to HTTPS whenever TLS is enabled. In case you still want to serve traffic on HTTP, add ingress.kubernetes.io/ssl-redirect: "false" to the list of annotations.

Deploying the Kubernetes Dashboard

Now that everything is in place, we are able to expose services on specific domains and automatically obtain certificates for them. Let’s try this out by deploying the Kubernetes Dashboard using the manifests in the dashboard/ folder.
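As with the ingress controller, the whole folder can be applied in one go; the folder name matches the files referenced below:

    # Deploy the dashboard together with its ingress and basic auth secret
    kubectl apply -f dashboard/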

Optionally, a few additional manifests can be applied to get resource utilization graphs within the dashboard using Heapster.

What’s new here is that we enable basic authentication to restrict access to the dashboard. The following annotations are supported by the NGINX ingress controller, and may or may not work with other solutions:

    # from dashboard/ingress.yml
    annotations:
      # ...
      ingress.kubernetes.io/auth-type: basic
      ingress.kubernetes.io/auth-secret: kubernetes-dashboard-auth
      ingress.kubernetes.io/auth-realm: "Authentication Required"

    # dashboard/secret.yml
    apiVersion: v1
    kind: Secret
    metadata:
      name: kubernetes-dashboard-auth
      namespace: kube-system
    data:
      auth: YWRtaW46JGFwcjEkV3hBNGpmQmkkTHYubS9PdzV5Y1RFMXMxMWNMYmJpLw==
    type: Opaque

This example will prompt a visitor to enter their credentials (user: admin / password: test) when accessing the dashboard. Secrets for basic authentication can be created using htpasswd and need to be added to the manifest as a base64-encoded string.
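One possible way to generate such a value, assuming the htpasswd (from apache2-utils) and base64 utilities are available locally:

    # Create an htpasswd entry for user "admin" with password "test" and encode it for the manifest
    htpasswd -nb admin test | base64 -w0

    # Alternatively, let kubectl do the encoding by creating the secret from a file
    htpasswd -bc auth admin test
    kubectl create secret generic kubernetes-dashboard-auth --from-file=auth -n kube-system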