Autoscaling across clusters with FederatedHPA

In Karmada, a FederatedHPA automatically scales a workload's replicas up or down across multiple clusters, with the aim of matching demand.

When the load increases, FederatedHPA scales up the replicas of the workload (the Deployment, StatefulSet, or other similar resource) as long as the number of Pods is under the configured maximum. When the load decreases, FederatedHPA scales down the replicas of the workload as long as the number of Pods is above the configured minimum.
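
Under the hood, FederatedHPA follows the same scaling rule as the Kubernetes HorizontalPodAutoscaler; the difference is that the metric values it consumes are aggregated from all member clusters. As a sketch for orientation (the standard HPA formula, not Karmada-specific code):

```
desiredReplicas = ceil( currentReplicas * currentMetricValue / targetMetricValue )

# e.g. 2 replicas observing 40% average CPU against a 10% target:
# ceil( 2 * 40 / 10 ) = 8 replicas
```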

This document walks you through an example of enabling FederatedHPA to automatically scale an nginx deployment that spans clusters.

The walkthrough example does the following:
[Diagram: federatedhpa-demo]

  • One pod of the deployment runs in the member1 cluster.
  • The service is deployed in both the member1 and member2 clusters.
  • Requests to the multi-cluster service drive up the pods' CPU usage.
  • The replicas are scaled up in both the member1 and member2 clusters.

Prerequisites

Karmada has been installed

We can install Karmada by referring to the Quick Start, or directly run the hack/local-up-karmada.sh script, which is also used to run our E2E cases.
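
For the quick path, a typical sequence looks like the following (assuming Docker and kind are available, as the script requires):

```bash
git clone https://github.com/karmada-io/karmada
cd karmada
hack/local-up-karmada.sh
```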

Member Cluster Network

Ensure that at least two clusters have been added to Karmada, and the container networks between member clusters are connected.

  • If you use the hack/local-up-karmada.sh script to deploy Karmada, it will create three member clusters, and the container networks of member1 and member2 will be connected.
  • You can use Submariner or other related open source projects to connect networks between member clusters.

Note: To prevent routing conflicts, the Pod and Service CIDRs of the clusters must not overlap.
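
If you build the member clusters yourself with kind, the cluster configuration lets you pick the CIDRs explicitly. A minimal sketch, with illustrative subnet values that you would vary per cluster:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  podSubnet: "10.10.0.0/16"     # Pod CIDR; must not overlap with other clusters
  serviceSubnet: "10.11.0.0/16" # Service CIDR; must not overlap with other clusters
```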

The ServiceExport and ServiceImport CRDs have been installed

We need to install ServiceExport and ServiceImport in the member clusters to enable multi-cluster services.

After the ServiceExport and ServiceImport CRDs have been installed on the Karmada Control Plane, we can create a ClusterPropagationPolicy to propagate those two CRDs to the member clusters.

```yaml
# propagate ServiceExport CRD
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: serviceexport-policy
spec:
  resourceSelectors:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: serviceexports.multicluster.x-k8s.io
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
---
# propagate ServiceImport CRD
apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: serviceimport-policy
spec:
  resourceSelectors:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: serviceimports.multicluster.x-k8s.io
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```
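
To apply these policies, save the YAML to a file (the name crds-propagation.yaml below is just an example) and apply it against the Karmada apiserver context; the same pattern applies to the other control-plane manifests in this guide:

```bash
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver apply -f crds-propagation.yaml
```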

metrics-server has been installed in member clusters

We need to install metrics-server in the member clusters to provide the metrics API. Install it by running:

```bash
hack/deploy-k8s-metrics-server.sh ${member_cluster_kubeconfig} ${member_cluster_context_name}
```

If you use the hack/local-up-karmada.sh script to deploy Karmada, you can run the following commands to deploy metrics-server in all three member clusters:

```bash
hack/deploy-k8s-metrics-server.sh $HOME/.kube/members.config member1
hack/deploy-k8s-metrics-server.sh $HOME/.kube/members.config member2
hack/deploy-k8s-metrics-server.sh $HOME/.kube/members.config member3
```
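
To verify that metrics-server is serving, you can check that the metrics APIService is registered and reports Available in a member cluster (this assumes the members.config kubeconfig created by local-up-karmada.sh):

```bash
kubectl --kubeconfig $HOME/.kube/members.config --context member1 get apiservice v1beta1.metrics.k8s.io
```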

karmada-metrics-adapter has been installed in Karmada control plane

We need to install karmada-metrics-adapter in the Karmada control plane to provide the metrics API. Install it by running:

```bash
hack/deploy-metrics-adapter.sh ${host_cluster_kubeconfig} ${host_cluster_context} ${karmada_apiserver_kubeconfig} ${karmada_apiserver_context_name}
```

If you use the hack/local-up-karmada.sh script to deploy Karmada, you can run the following command to deploy karmada-metrics-adapter:

```bash
hack/deploy-metrics-adapter.sh $HOME/.kube/karmada.config karmada-host $HOME/.kube/karmada.config karmada-apiserver
```
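
Once the adapter is ready, the metrics API should be reachable through the Karmada apiserver as well; a quick sanity check, assuming the adapter registers the standard metrics APIService:

```bash
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get apiservice v1beta1.metrics.k8s.io
```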

Deploy workload in member1 and member2 clusters

We need to deploy a deployment (1 replica) and a service in member1 and member2:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          resources:
            requests:
              cpu: 25m
              memory: 64Mi
            limits:
              cpu: 25m
              memory: 64Mi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
    - apiVersion: v1
      kind: Service
      name: nginx-service
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 1
```

After deploying, you can check the distribution of the pods and the service. With a 1:1 static weight and a single replica, the pod lands in one cluster (member1 here):

```
$ karmadactl get pods
NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
nginx-777bc7b6d7-mbdn8   member1   1/1     Running   0          9h
$ karmadactl get svc
NAME            CLUSTER   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   ADOPTION
nginx-service   member1   ClusterIP   10.11.216.215   <none>        80/TCP    9h    Y
nginx-service   member2   ClusterIP   10.13.46.61     <none>        80/TCP    9h    Y
```

Deploy FederatedHPA in the Karmada control plane

Then let's deploy a FederatedHPA in the Karmada control plane. It targets the nginx deployment, keeps the replicas between 1 and 10, and scales on an average CPU utilization of 10%:

```yaml
apiVersion: autoscaling.karmada.io/v1alpha1
kind: FederatedHPA
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 1
  maxReplicas: 10
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 10
    scaleUp:
      stabilizationWindowSeconds: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 10
```

After deploying, you can check the FederatedHPA:

```
$ kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get fhpa
NAME    REFERENCE-KIND   REFERENCE-NAME   MINPODS   MAXPODS   REPLICAS   AGE
nginx   Deployment       nginx            1         10        1          9h
```

Export service to member1 cluster

As mentioned before, we need a multi-cluster service to route requests to the pods in the member1 and member2 clusters, so let's create this multi-cluster service.

  • Create a ServiceExport object on the Karmada Control Plane, and then create a PropagationPolicy to propagate the ServiceExport object to the member1 and member2 clusters.

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx-service
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-export-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceExport
      name: nginx-service
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```
  • Create a ServiceImport object on the Karmada Control Plane, and then create a PropagationPolicy to propagate the ServiceImport object to the member1 cluster.

```yaml
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: nginx-service
spec:
  type: ClusterSetIP
  ports:
    - port: 80
      protocol: TCP
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-import-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceImport
      name: nginx-service
  placement:
    clusterAffinity:
      clusterNames:
        - member1
```

After deploying, you can check the multi-cluster service. Karmada creates a derived service, named derived-<original-service-name>, in the cluster that imports the service:

```
$ karmadactl get svc
NAME                    CLUSTER   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   ADOPTION
derived-nginx-service   member1   ClusterIP   10.11.59.213   <none>        80/TCP    9h    Y
```

Install the hey HTTP load testing tool in the member1 cluster

In order to send HTTP requests, here we use hey.

  • Download hey and copy it into the kind cluster container:

```bash
$ wget https://hey-release.s3.us-east-2.amazonaws.com/hey_linux_amd64
$ chmod +x hey_linux_amd64
$ docker cp hey_linux_amd64 member1-control-plane:/usr/local/bin/hey
```

Test scaling up

  • First, check the pod distribution:

```
$ karmadactl get pods
NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
nginx-777bc7b6d7-mbdn8   member1   1/1     Running   0          61m
```

  • Check the multi-cluster service IP:

```
$ karmadactl get svc
NAME                    CLUSTER   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE   ADOPTION
derived-nginx-service   member1   ClusterIP   10.11.59.213   <none>        80/TCP    20m   Y
```

  • Request the multi-cluster service with hey to increase the nginx pods' CPU usage (-c 1000 uses 1000 concurrent workers, -z 1m sends requests for one minute):

```
$ docker exec member1-control-plane hey -c 1000 -z 1m http://10.11.59.213
```

  • Wait about 15 seconds for the replicas to be scaled up, then check the pod distribution again (you can also watch the FederatedHPA while the test runs; see the sketch after this list):

```
$ karmadactl get pods -l app=nginx
NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
nginx-777bc7b6d7-c2cfv   member1   1/1     Running   0          22s
nginx-777bc7b6d7-mbdn8   member1   1/1     Running   0          62m
nginx-777bc7b6d7-pk2s4   member1   1/1     Running   0          37s
nginx-777bc7b6d7-tbb4k   member1   1/1     Running   0          37s
nginx-777bc7b6d7-znlj9   member1   1/1     Running   0          22s
nginx-777bc7b6d7-6n7d9   member2   1/1     Running   0          22s
nginx-777bc7b6d7-dfbnw   member2   1/1     Running   0          22s
nginx-777bc7b6d7-fsdg2   member2   1/1     Running   0          37s
nginx-777bc7b6d7-kddhn   member2   1/1     Running   0          22s
nginx-777bc7b6d7-lwn52   member2   1/1     Running   0          37s
```
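
While hey is running, you can also watch the FederatedHPA from the control plane and see the REPLICAS column climb (-w is kubectl's standard watch flag):

```bash
kubectl --kubeconfig $HOME/.kube/karmada.config --context karmada-apiserver get fhpa nginx -w
```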

Test scaling down

After 1 minute, hey stops sending requests; you can then see the workload scaled back down across the clusters:

```
$ karmadactl get pods -l app=nginx
NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
nginx-777bc7b6d7-mbdn8   member1   1/1     Running   0          64m
```