HA endpoints for K8s

The following components require highly available endpoints:

  • etcd cluster,
  • kube-apiserver service instances.

The latter relies on a third-party reverse proxy, such as Nginx or HAProxy, to achieve the same goal.

Etcd

The etcd clients (the kube-apiservers on master nodes) are configured with the list of all etcd peers. If the etcd cluster has multiple instances, it is already configured for HA.

Kube-apiserver

K8s components require a loadbalancer to access the apiservers via a reverse proxy. Kubespray includes support for an nginx-based proxy that resides on each non-master Kubernetes node. This is referred to as localhost loadbalancing. It is less efficient than a dedicated load balancer because it creates extra health checks on the Kubernetes apiserver, but is more practical for scenarios where an external LB or virtual IP management is inconvenient.

This option is configured by the variable loadbalancer_apiserver_localhost (defaults to True, or False if an external loadbalancer_apiserver is defined). You may also define the port the local internal loadbalancer uses by changing loadbalancer_apiserver_port; this defaults to the value of kube_apiserver_port. It is also important to note that Kubespray will only configure kubelet and kube-proxy on non-master nodes to use the local internal loadbalancer.
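For instance, a minimal group_vars sketch enabling localhost loadbalancing on a non-default local port could look like the following (the variable names come from the text above; the port value 8443 is purely illustrative):

  # Run the local nginx proxy on each non-master node (the default unless
  # an external loadbalancer_apiserver is defined)
  loadbalancer_apiserver_localhost: true
  # Optional: port the local internal loadbalancer listens on;
  # defaults to kube_apiserver_port if unset
  loadbalancer_apiserver_port: 8443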

If you choose NOT to use the local internal loadbalancer, you will need to configure your own loadbalancer to achieve HA. Note that deploying a loadbalancer is up to the user and is not covered by the Ansible roles in Kubespray. By default, Kubespray only configures a non-HA endpoint, which points to the access_ip or IP address of the first server node in the kube-master group. It can also configure clients to use endpoints for a given loadbalancer type. The following diagram shows how traffic to the apiserver is directed.

[Diagram: apiserver traffic flow]

Note: Kubernetes master nodes still use insecure localhost access because of bugs in Kubernetes <1.5.0 with TLS auth on master role services. This means backends receive unencrypted traffic, which may be a security issue when interconnecting different nodes, or may not be, if those nodes belong to an isolated management network without external access.

A user may opt to use an external loadbalancer (LB) instead. An external LB provides access for external clients, while the internal LB accepts client connections only on the localhost. Given a frontend VIP address and the IP1 and IP2 addresses of the backends, here is an example configuration for an HAProxy service acting as an external LB:

  listen kubernetes-apiserver-https
    bind <VIP>:8383
    option ssl-hello-chk
    mode tcp
    timeout client 3h
    timeout server 3h
    server master1 <IP1>:6443
    server master2 <IP2>:6443
    balance roundrobin

Note: This is an example config, managed outside of Kubespray.

And the corresponding example global vars for such a “cluster-aware” external LB with the cluster API access modes configured in Kubespray:

  apiserver_loadbalancer_domain_name: "my-apiserver-lb.example.com"
  loadbalancer_apiserver:
    address: <VIP>
    port: 8383

Note: The default Kubernetes apiserver configuration binds to all interfaces, so you will need to either use a different port for the VIP than the one the API is listening on, or set kube_apiserver_bind_address so that the API only listens on a specific interface (to avoid a conflict with HAProxy binding the port on the VIP address).
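As a sketch of the second option, you could restrict the apiserver to the node's own interface so HAProxy is free to bind the VIP (the address shown is purely illustrative):

  # Bind the apiserver to one specific node address instead of 0.0.0.0,
  # leaving the VIP address free for HAProxy to bind
  kube_apiserver_bind_address: 192.168.1.10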

This domain name, or the default “lb-apiserver.kubernetes.local”, will be inserted into the /etc/hosts file of all servers in the k8s-cluster group and wired into the generated self-signed TLS/SSL certificates as well. Note that the HAProxy service should itself be HA and requires VIP management, which is out of scope of this doc.

There is a special case for an internal and an externally configured (not with Kubespray) LB used simultaneously. Keep in mind that the cluster is not aware of such an external LB, and you need not specify any configuration variables for it.

Note: TLS/SSL termination for externally accessed API endpoints will not be covered by Kubespray for that case. Make sure your external LB provides it. Alternatively, you may specify externally load balanced VIPs in the supplementary_addresses_in_ssl_keys list. Kubespray will then add them to the generated cluster certificates as well.
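For example, assuming a hypothetical external VIP and FQDN (both values are illustrative):

  # Extra addresses to include in the generated apiserver certificates,
  # so external clients going through the unmanaged LB pass TLS verification
  supplementary_addresses_in_ssl_keys:
    - "10.0.0.5"                 # externally load balanced VIP (illustrative)
    - "apiserver.example.com"    # external FQDN (illustrative)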

Aside from that specific case, loadbalancer_apiserver is considered mutually exclusive with loadbalancer_apiserver_localhost.

API access endpoints are evaluated automatically, as follows:

  Endpoint type                kube-master     non-master           external
  Local LB (default)           https://bip:sp  https://lc:nsp       https://m[0].aip:sp
  Local LB + unmanaged ext LB  https://bip:sp  https://lc:nsp       https://ext
  External LB, no internal     https://bip:sp  https://lb:lp        https://lb:lp
  No ext/int LB                https://bip:sp  https://m[0].aip:sp  https://m[0].aip:sp

Where:

  • m[0] - the first node in the kube-master group;
  • lb - LB FQDN, apiserver_loadbalancer_domain_name;
  • ext - Externally load balanced VIP:port and FQDN, not managed by Kubespray;
  • lc - localhost;
  • bip - a custom bind IP or localhost for the default bind IP ‘0.0.0.0’;
  • nsp - nginx secure port, loadbalancer_apiserver_port, defaults to sp;
  • sp - secure port, kube_apiserver_port;
  • lp - LB port, loadbalancer_apiserver.port, defaults to the secure port;
  • ip - the node IP, defaults to the ansible IP;
  • aip - access_ip, defaults to the ip.

The second and third columns represent internal cluster access modes. The last column illustrates an example URI for accessing the cluster APIs externally. Kubespray has nothing to do with it; this is informational only.

As you can see, the masters’ internal API endpoints are always contacted via the local bind IP, which is https://bip:sp.

Note that for some cases, like healthchecks of applications deployed by Kubespray, the masters’ APIs are accessed via the insecure endpoint, which consists of the local kube_apiserver_insecure_bind_address and kube_apiserver_insecure_port.
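A sketch of those two variables, assuming the commonly used defaults (a localhost bind and port 8080; verify against your Kubespray version before relying on them):

  # Insecure (non-TLS) apiserver endpoint, used locally on masters,
  # e.g. for healthchecks of apps deployed by Kubespray
  kube_apiserver_insecure_bind_address: 127.0.0.1
  kube_apiserver_insecure_port: 8080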

Optional configurations

ETCD with a LB

In order to use an external load balancer (L4/TCP, or L7 with SSL passthrough to the VIP), the following variables need to be overridden in group_vars:

  • etcd_access_addresses
  • etcd_client_url
  • etcd_cert_alt_names
  • etcd_cert_alt_ips

Example of a VIP with an FQDN

  etcd_access_addresses: https://etcd.example.com:2379
  etcd_client_url: https://etcd.example.com:2379
  etcd_cert_alt_names:
    - "etcd.kube-system.svc.{{ dns_domain }}"
    - "etcd.kube-system.svc"
    - "etcd.kube-system"
    - "etcd"
    - "etcd.example.com"  # This one needs to be added to the default etcd_cert_alt_names

Example of a VIP without an FQDN (IP only)

  etcd_access_addresses: https://2.3.7.9:2379
  etcd_client_url: https://2.3.7.9:2379
  etcd_cert_alt_ips:
    - "2.3.7.9"