Load balancing on RHOSP

Using the Octavia OVN load balancer provider driver with Kuryr SDN

If your OKD cluster uses Kuryr and was installed on a Red Hat OpenStack Platform (RHOSP) 13 cloud that was later upgraded to RHOSP 16, you can configure it to use the Octavia OVN provider driver.

Kuryr replaces existing load balancers after you change provider drivers. This process results in some downtime.

Prerequisites

  • Install the RHOSP CLI, openstack.

  • Install the OKD CLI, oc.

  • Verify that the Octavia OVN driver on RHOSP is enabled.

    To view a list of available Octavia drivers, on a command line, enter openstack loadbalancer provider list.

    The ovn driver is displayed in the command’s output.
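    For reference, the provider list from the previous prerequisite typically resembles the following output (illustrative values; the driver descriptions vary by RHOSP release):

      $ openstack loadbalancer provider list

      +---------+-----------------------------+
      | name    | description                 |
      +---------+-----------------------------+
      | amphora | The Octavia Amphora driver. |
      | ovn     | Octavia OVN driver.         |
      +---------+-----------------------------+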

Procedure

To change from the Octavia Amphora provider driver to Octavia OVN:

  1. Open the kuryr-config ConfigMap. On a command line, enter:

    $ oc -n openshift-kuryr edit cm kuryr-config
  2. In the ConfigMap, delete the line that contains kuryr-octavia-provider: default. For example:

    ...
    kind: ConfigMap
    metadata:
      annotations:
        networkoperator.openshift.io/kuryr-octavia-provider: default (1)
    ...
    (1) Delete this line. The cluster will regenerate it with ovn as the value.

    Wait for the Cluster Network Operator to detect the modification and to redeploy the kuryr-controller and kuryr-cni pods. This process might take several minutes; for one way to watch it, see the example commands after this procedure.

  3. Verify that the kuryr-config ConfigMap annotation is present with ovn as its value. On a command line, enter:

    $ oc -n openshift-kuryr edit cm kuryr-config

    The ovn provider value is displayed in the output:

    ...
    kind: ConfigMap
    metadata:
      annotations:
        networkoperator.openshift.io/kuryr-octavia-provider: ovn
    ...
  4. Verify that RHOSP recreated its load balancers.

    1. On a command line, enter:

      $ openstack loadbalancer list | grep amphora

      A single Amphora load balancer is displayed. For example:

      a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora
    2. Search for ovn load balancers by entering:

      $ openstack loadbalancer list | grep ovn

      The remaining load balancers of the ovn type are displayed. For example:

      2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn
      0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn
      f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn
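If you want to watch the redeployment described in step 2, or check the provider annotation without opening an editor, the following commands are one option; they use only the namespace and ConfigMap name from this procedure:

  $ oc -n openshift-kuryr get pods -w
  $ oc -n openshift-kuryr get cm kuryr-config -o yaml | grep kuryr-octavia-provider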

Scaling clusters for application traffic by using Octavia

OKD clusters that run on Red Hat OpenStack Platform (RHOSP) can use the Octavia load balancing service to distribute traffic across multiple virtual machines (VMs) or floating IP addresses. This feature mitigates the bottleneck that single machines or addresses create.

If your cluster uses Kuryr, the Cluster Network Operator created an internal Octavia load balancer at deployment. You can use this load balancer for application network scaling.

If your cluster does not use Kuryr, you must create your own Octavia load balancer to use it for application network scaling.

Scaling clusters by using Octavia

If you want to use multiple API load balancers, or if your cluster does not use Kuryr, create an Octavia load balancer and then configure your cluster to use it.

Prerequisites

  • Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment.
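    A quick way to confirm this prerequisite is to query the Octavia API directly; if the load-balancer service is not deployed, the command fails. This is a minimal check that assumes your RHOSP credentials are already loaded in the shell:

      $ openstack loadbalancer provider list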

Procedure

  1. From a command line, create an Octavia load balancer that uses the Amphora driver:

    $ openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>

    You can use a name of your choice instead of API_OCP_CLUSTER.

  2. After the load balancer becomes active, create listeners:

    $ openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS --protocol-port 6443 API_OCP_CLUSTER

    To view the status of the load balancer, enter openstack loadbalancer list.

  3. Create a pool that uses the round robin algorithm and has session persistence enabled:

    $ openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=SOURCE_IP --listener API_OCP_CLUSTER_6443 --protocol HTTPS
  4. To ensure that control plane machines are available, create a health monitor:

    $ openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443
  5. Add the control plane machines as members of the load balancer pool:

    $ for SERVER in MASTER-0-IP MASTER-1-IP MASTER-2-IP
    do
      openstack loadbalancer member create --address $SERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443
    done
  6. Optional: To reuse the cluster API floating IP address, unset it:

    $ openstack floating ip unset $API_FIP
  7. Add either the unset API_FIP or a new address to the created load balancer VIP:

    $ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value API_OCP_CLUSTER) $API_FIP
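To confirm that the new load balancer is healthy, you can check its provisioning status and the operating status of its pool members. This is a minimal verification sketch that reuses the names from this procedure; members report ONLINE only after the health monitor succeeds:

  $ openstack loadbalancer show -c provisioning_status -f value API_OCP_CLUSTER
  $ openstack loadbalancer member list API_OCP_CLUSTER_pool_6443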

Your cluster now uses Octavia for load balancing.

If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM).

You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.

Scaling clusters that use Kuryr by using Octavia

If your cluster uses Kuryr, associate the API floating IP address of your cluster with the pre-existing Octavia load balancer.

Prerequisites

  • Your OKD cluster uses Kuryr.

  • Octavia is available on your Red Hat OpenStack Platform (RHOSP) deployment.
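The commands in this procedure reference the $OCP_CLUSTER and $API_FIP variables. One way to set them, assuming the default name that Kuryr gives the API load balancer (shown earlier in this document) and substituting your own cluster API floating IP address:

  $ OCP_CLUSTER=$(oc get infrastructure cluster -o jsonpath='{.status.infrastructureName}')
  $ openstack loadbalancer list | grep "${OCP_CLUSTER}-kuryr-api-loadbalancer"
  $ API_FIP=<your_API_floating_IP_address>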

Procedure

  1. Optional: From a command line, to reuse the cluster API floating IP address, unset it:

    $ openstack floating ip unset $API_FIP
  2. Add either the unset API_FIP or a new address to the created load balancer VIP:

    $ openstack floating ip set --port $(openstack loadbalancer show -c vip_port_id -f value ${OCP_CLUSTER}-kuryr-api-loadbalancer) $API_FIP

Your cluster now uses Octavia for load balancing.

If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM).

You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.

Scaling for ingress traffic by using RHOSP Octavia

You can use Octavia load balancers to scale Ingress controllers on clusters that use Kuryr.

Prerequisites

  • Your OKD cluster uses Kuryr.

  • Octavia is available on your RHOSP deployment.

Procedure

  1. To copy the current internal router service, on a command line, enter:

    $ oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml
  2. In the file external_router.yaml, change the value of metadata.name to a new, descriptive name, such as router-external-default, and the value of spec.type to LoadBalancer.

    Example router file

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        ingresscontroller.operator.openshift.io/owning-ingresscontroller: default
      name: router-external-default (1)
      namespace: openshift-ingress
    spec:
      ports:
      - name: http
        port: 80
        protocol: TCP
        targetPort: http
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      - name: metrics
        port: 1936
        protocol: TCP
        targetPort: 1936
      selector:
        ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default
      sessionAffinity: None
      type: LoadBalancer (2)
    (1) Ensure that this value is descriptive, like router-external-default.
    (2) Ensure that this value is LoadBalancer.

You can delete timestamps and other information that is irrelevant to load balancing.
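For example, fields such as metadata.creationTimestamp, metadata.resourceVersion, metadata.uid, spec.clusterIP, spec.clusterIPs, and the entire status section are generated by the server and are not needed. If the yq utility (version 4) happens to be installed, one optional way to strip them non-interactively is:

  $ yq -i 'del(.metadata.creationTimestamp) | del(.metadata.resourceVersion) | del(.metadata.uid) | del(.spec.clusterIP) | del(.spec.clusterIPs) | del(.status)' external_router.yaml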

  3. From a command line, create a service from the external_router.yaml file:

    $ oc apply -f external_router.yaml
  4. Verify that the external IP address of the service is the same as the one that is associated with the load balancer:

    1. On a command line, retrieve the external IP address of the service:

      $ oc -n openshift-ingress get svc

      Example output

      NAME                      TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                                     AGE
      router-external-default   LoadBalancer   172.30.235.33    10.46.22.161   80:30112/TCP,443:32359/TCP,1936:30317/TCP   3m38s
      router-internal-default   ClusterIP      172.30.115.123   <none>         80/TCP,443/TCP,1936/TCP                     22h
    2. Retrieve the IP address of the load balancer:

      $ openstack loadbalancer list | grep router-external

      Example output

      | 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |
    3. Verify that the addresses you retrieved in the previous steps are associated with each other in the floating IP list:

      $ openstack floating ip list | grep 172.30.235.33

      Example output

      | e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |

You can now use the value of EXTERNAL-IP as the new Ingress address.
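If you want to capture that address in a script, for example to update an external DNS record for your application routes, one option is to read it from the service status. This uses standard oc field selection, and the service name matches the one created in this procedure:

  $ oc -n openshift-ingress get svc router-external-default -o jsonpath='{.status.loadBalancer.ingress[0].ip}'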

If Kuryr uses the Octavia Amphora driver, all traffic is routed through a single Amphora virtual machine (VM).

You can repeat this procedure to create additional load balancers, which can alleviate the bottleneck.