Understanding how to add custom metrics autoscalers

To add a custom metrics autoscaler, create a ScaledObject custom resource for a deployment, stateful set, or custom resource. Create a ScaledJob custom resource for a job.

You can create only one scaled object for each workload that you want to scale. Also, you cannot use a scaled object and the horizontal pod autoscaler (HPA) on the same workload.

Adding a custom metrics autoscaler to a workload

You can create a custom metrics autoscaler for a workload that is created by a Deployment, StatefulSet, or custom resource object.

Prerequisites

  • The Custom Metrics Autoscaler Operator must be installed.

  • If you use a custom metrics autoscaler for scaling based on CPU or memory:

    • Your cluster administrator must have properly configured cluster metrics. You can use the oc describe PodMetrics <pod-name> command to determine if metrics are configured. If metrics are configured, the output appears similar to the following, with CPU and Memory displayed under Usage.

      $ oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal

      Example output

      Name:         openshift-kube-scheduler-ip-10-0-135-131.ec2.internal
      Namespace:    openshift-kube-scheduler
      Labels:       <none>
      Annotations:  <none>
      API Version:  metrics.k8s.io/v1beta1
      Containers:
        Name:  wait-for-host-port
        Usage:
          Memory:  0
        Name:      scheduler
        Usage:
          Cpu:     8m
          Memory:  45440Ki
      Kind:        PodMetrics
      Metadata:
        Creation Timestamp:  2019-05-23T18:47:56Z
        Self Link:           /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal
      Timestamp:  2019-05-23T18:47:56Z
      Window:     1m0s
      Events:     <none>
    • The pods associated with the object you want to scale must include specified memory and CPU limits. For example:

      Example pod spec

      apiVersion: v1
      kind: Pod
      # ...
      spec:
        containers:
        - name: app
          image: images.my-company.example/app:v4
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
      # ...

Procedure

  1. Create a YAML file similar to the following. Only the name (2), object name (4), and object kind (5) are required; a minimal example with only these fields appears after the callout descriptions below:

    Example scaled object

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      annotations:
        autoscaling.keda.sh/paused-replicas: "0" (1)
      name: scaledobject (2)
      namespace: my-namespace
    spec:
      scaleTargetRef:
        apiVersion: apps/v1 (3)
        name: example-deployment (4)
        kind: Deployment (5)
        envSourceContainerName: .spec.template.spec.containers[0] (6)
      cooldownPeriod: 200 (7)
      maxReplicaCount: 100 (8)
      minReplicaCount: 0 (9)
      metricsServer: (10)
        auditConfig:
          logFormat: "json"
          logOutputVolumeClaim: "persistentVolumeClaimName"
          policy:
            rules:
            - level: Metadata
            omitStages: "RequestReceived"
            omitManagedFields: false
          lifetime:
            maxAge: "2"
            maxBackup: "1"
            maxSize: "50"
      fallback: (11)
        failureThreshold: 3
        replicas: 6
      pollingInterval: 30 (12)
      advanced:
        restoreToOriginalReplicaCount: false (13)
        horizontalPodAutoscalerConfig:
          name: keda-hpa-scale-down (14)
          behavior: (15)
            scaleDown:
              stabilizationWindowSeconds: 300
              policies:
              - type: Percent
                value: 100
                periodSeconds: 15
      triggers:
      - type: prometheus (16)
        metadata:
          serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
          namespace: kedatest
          metricName: http_requests_total
          threshold: '5'
          query: sum(rate(http_requests_total{job="test-app"}[1m]))
          authModes: basic
        authenticationRef: (17)
          name: prom-triggerauthentication
          kind: TriggerAuthentication
    (1) Optional: Specifies that the Custom Metrics Autoscaler Operator is to scale the replicas to the specified value and stop autoscaling, as described in the “Pausing the custom metrics autoscaler for a workload” section.
    (2) Specifies a name for this custom metrics autoscaler.
    (3) Optional: Specifies the API version of the target resource. The default is apps/v1.
    (4) Specifies the name of the object that you want to scale.
    (5) Specifies the kind as Deployment, StatefulSet, or CustomResource.
    (6) Optional: Specifies the name of the container in the target resource, from which the custom metrics autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0].
    (7) Optional: Specifies the period in seconds to wait after the last trigger is reported before scaling the deployment back to 0 if the minReplicaCount is set to 0. The default is 300.
    (8) Optional: Specifies the maximum number of replicas when scaling up. The default is 100.
    (9) Optional: Specifies the minimum number of replicas when scaling down.
    (10) Optional: Specifies the parameters for audit logs, as described in the “Configuring audit logging” section.
    (11) Optional: Specifies the number of replicas to fall back to if a scaler fails to get metrics from the source for the number of times defined by the failureThreshold parameter. For more information on fallback behavior, see the KEDA documentation.
    (12) Optional: Specifies the interval in seconds to check each trigger on. The default is 30.
    (13) Optional: Specifies whether to scale back the target resource to the original replica count after the scaled object is deleted. The default is false, which keeps the replica count as it is when the scaled object is deleted.
    (14) Optional: Specifies a name for the horizontal pod autoscaler. The default is keda-hpa-{scaled-object-name}.
    (15) Optional: Specifies a scaling policy to use to control the rate to scale pods up or down, as described in the “Scaling policies” section.
    (16) Specifies the trigger to use as the basis for scaling, as described in the “Understanding the custom metrics autoscaler triggers” section. This example uses OKD monitoring.
    (17) Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see “Understanding the custom metrics autoscaler trigger authentication” in the “Additional resources” section.
    • Enter TriggerAuthentication to use a trigger authentication. This is the default.

    • Enter ClusterTriggerAuthentication to use a cluster trigger authentication.
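
    For reference, here is a minimal sketch that keeps only the required fields plus a single trigger, reusing the placeholder names and the Prometheus trigger values from the example above. Note that at least one trigger is still needed for the scaled object to do anything:

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: scaledobject
      namespace: my-namespace
    spec:
      scaleTargetRef:
        name: example-deployment
        kind: Deployment
      triggers:
      - type: prometheus
        metadata:
          serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
          namespace: kedatest
          metricName: http_requests_total
          threshold: '5'
          query: sum(rate(http_requests_total{job="test-app"}[1m]))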

  2. Create the custom metrics autoscaler by running the following command:

    $ oc create -f <filename>.yaml

Verification

  • View the command output to verify that the custom metrics autoscaler was created:

    $ oc get scaledobject <scaled_object_name>

    Example output

    NAME           SCALETARGETKIND      SCALETARGETNAME      MIN   MAX   TRIGGERS     AUTHENTICATION               READY   ACTIVE   FALLBACK   AGE
    scaledobject   apps/v1.Deployment   example-deployment   0     100   prometheus   prom-triggerauthentication   True    True     False      17s

    Note the following fields in the output:

    • TRIGGERS: Indicates the trigger, or scaler, that is being used.

    • AUTHENTICATION: Indicates the name of any trigger authentication being used.

    • READY: Indicates whether the scaled object is ready to start scaling:

      • If True, the scaled object is ready.

      • If False, the scaled object is not ready because of a problem in one or more of the objects you created.

    • ACTIVE: Indicates whether scaling is taking place:

      • If True, scaling is taking place.

      • If False, scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created.

    • FALLBACK: Indicates whether the custom metrics autoscaler is able to get metrics from the source:

      • If False, the custom metrics autoscaler is getting metrics.

      • If True, the custom metrics autoscaler is not getting metrics because there are no metrics or there is a problem in one or more of the objects you created.
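
    Because the Custom Metrics Autoscaler Operator scales the workload through a horizontal pod autoscaler that it creates on your behalf, you can also inspect that HPA directly. The name below assumes the default keda-hpa-{scaled-object-name} pattern described in callout (14) and the my-namespace placeholder from the example:

    $ oc get hpa keda-hpa-scaledobject -n my-namespace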

Adding a custom metrics autoscaler to a job

You can create a custom metrics autoscaler for any Job object.

Scaling by using a scaled job is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • The Custom Metrics Autoscaler Operator must be installed.

Procedure

  1. Create a YAML file similar to the following:

    Example scaled job

    kind: ScaledJob
    apiVersion: keda.sh/v1alpha1
    metadata:
      name: scaledjob
      namespace: my-namespace
    spec:
      jobTargetRef:
        activeDeadlineSeconds: 600 (1)
        backoffLimit: 6 (2)
        parallelism: 1 (3)
        completions: 1 (4)
        template: (5)
          metadata:
            name: pi
          spec:
            containers:
            - name: pi
              image: perl
              command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      maxReplicaCount: 100 (6)
      pollingInterval: 30 (7)
      successfulJobsHistoryLimit: 5 (8)
      failedJobsHistoryLimit: 5 (9)
      envSourceContainerName: (10)
      rolloutStrategy: gradual (11)
      scalingStrategy: (12)
        strategy: "custom"
        customScalingQueueLengthDeduction: 1
        customScalingRunningJobPercentage: "0.5"
        pendingPodConditions:
        - "Ready"
        - "PodScheduled"
        - "AnyOtherCustomPodCondition"
        multipleScalersCalculation: "max"
      triggers:
      - type: prometheus (13)
        metadata:
          serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092
          namespace: kedatest
          metricName: http_requests_total
          threshold: '5'
          query: sum(rate(http_requests_total{job="test-app"}[1m]))
          authModes: "bearer"
        authenticationRef: (14)
          name: prom-cluster-triggerauthentication
    (1) Specifies the maximum duration the job can run.
    (2) Specifies the number of retries for a job. The default is 6.
    (3) Optional: Specifies how many pod replicas a job should run in parallel. The default is 1.
    • For non-parallel jobs, leave unset. When unset, the default is 1.

    (4) Optional: Specifies how many successful pod completions are needed to mark a job completed.
    • For non-parallel jobs, leave unset. When unset, the default is 1.

    • For parallel jobs with a fixed completion count, specify the number of completions.

    • For parallel jobs with a work queue, leave unset. When unset, the default is the value of the parallelism parameter.

    (5) Specifies the template for the pod the controller creates.
    (6) Optional: Specifies the maximum number of replicas when scaling up. The default is 100.
    (7) Optional: Specifies the interval in seconds to check each trigger on. The default is 30.
    (8) Optional: Specifies how many successful finished jobs should be kept. The default is 100.
    (9) Optional: Specifies how many failed jobs should be kept. The default is 100.
    (10) Optional: Specifies the name of the container in the target resource, from which the custom autoscaler gets environment variables holding secrets and so forth. The default is .spec.template.spec.containers[0].
    (11) Optional: Specifies whether existing jobs are terminated whenever a scaled job is being updated:
    • default: The autoscaler terminates an existing job if its associated scaled job is updated. The autoscaler recreates the job with the latest specs.

    • gradual: The autoscaler does not terminate an existing job if its associated scaled job is updated. The autoscaler creates new jobs with the latest specs.

    (12) Optional: Specifies a scaling strategy: default, custom, or accurate. The default is default. For more information, see the link in the “Additional resources” section that follows.
    (13) Specifies the trigger to use as the basis for scaling, as described in the “Understanding the custom metrics autoscaler triggers” section.
    (14) Optional: Specifies a trigger authentication or a cluster trigger authentication. For more information, see “Understanding the custom metrics autoscaler trigger authentication” in the “Additional resources” section.
    • Enter TriggerAuthentication to use a trigger authentication. This is the default.

    • Enter ClusterTriggerAuthentication to use a cluster trigger authentication.
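
    Because this example sets authModes to bearer, the referenced trigger authentication must supply the bearer token. The following is a minimal sketch, assuming a cluster trigger authentication backed by a hypothetical secret named token-secret with a token key; if you use this form, also set kind: ClusterTriggerAuthentication in the authenticationRef above:

    apiVersion: keda.sh/v1alpha1
    kind: ClusterTriggerAuthentication
    metadata:
      name: prom-cluster-triggerauthentication
    spec:
      secretTargetRef:
      - parameter: bearerToken   # the Prometheus trigger reads the bearer token from this parameter
        name: token-secret       # hypothetical secret name
        key: token               # hypothetical key within the secret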

  2. Create the custom metrics autoscaler by running the following command:

    $ oc create -f <filename>.yaml

Verification

  • View the command output to verify that the custom metrics autoscaler was created:

    $ oc get scaledjob <scaled_job_name>

    Example output

    NAME        MAX   TRIGGERS     AUTHENTICATION                       READY   ACTIVE   AGE
    scaledjob   100   prometheus   prom-cluster-triggerauthentication   True    True     8s

    Note the following fields in the output:

    • TRIGGERS: Indicates the trigger, or scaler, that is being used.

    • AUTHENTICATION: Indicates the name of any trigger authentication being used.

    • READY: Indicates whether the scaled job is ready to start scaling:

      • If True, the scaled job is ready.

      • If False, the scaled job is not ready because of a problem in one or more of the objects you created.

    • ACTIVE: Indicates whether scaling is taking place:

      • If True, scaling is taking place.

      • If False, scaling is not taking place because there are no metrics or there is a problem in one or more of the objects you created.
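
    Unlike a scaled object, a scaled job scales by creating Job objects rather than by adjusting a replica count. To observe scaling, you can also list the jobs that the autoscaler spawns; the namespace here matches the my-namespace placeholder from the example:

    $ oc get jobs -n my-namespace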

Additional resources