Horizontal Pod Autoscaler Walkthrough

Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization (or, with beta support, on other application-provided metrics).

This document walks you through an example of enabling Horizontal Pod Autoscaler for the php-apache server. For more information on how Horizontal Pod Autoscaler behaves, see the Horizontal Pod Autoscaler user guide.

Before you begin

This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. Metrics Server needs to be deployed in the cluster to provide metrics through the Metrics API, which the Horizontal Pod Autoscaler uses to collect metrics. To learn how to deploy the metrics-server, see the metrics-server documentation.

To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster and kubectl at version 1.6 or later. To make use of custom metrics, your cluster must be able to communicate with the API server providing the custom Metrics API. Finally, to use metrics not related to any Kubernetes object you must have a Kubernetes cluster at version 1.10 or later, and you must be able to communicate with the API server that provides the external Metrics API. See the Horizontal Pod Autoscaler user guide for more details.
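To quickly confirm that the resource metrics pipeline is in place, you can query the Metrics API directly (a minimal check, assuming metrics-server is already installed and serving the metrics.k8s.io API):

    # Should return node metrics as JSON when the Metrics API is available
    kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes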

Run and expose php-apache server

To demonstrate the Horizontal Pod Autoscaler, we will use a custom Docker image based on the php-apache image. The Dockerfile has the following content:

    FROM php:5-apache
    COPY index.php /var/www/html/index.php
    RUN chmod a+rx index.php

It defines an index.php page which performs some CPU-intensive computations:

    <?php
    $x = 0.0001;
    for ($i = 0; $i <= 1000000; $i++) {
      $x += sqrt($x);
    }
    echo "OK!";
    ?>

First, we will start a deployment running the image and expose it as a service using the following configuration:

application/php-apache.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: php-apache
    spec:
      selector:
        matchLabels:
          run: php-apache
      replicas: 1
      template:
        metadata:
          labels:
            run: php-apache
        spec:
          containers:
          - name: php-apache
            image: k8s.gcr.io/hpa-example
            ports:
            - containerPort: 80
            resources:
              limits:
                cpu: 500m
              requests:
                cpu: 200m
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: php-apache
      labels:
        run: php-apache
    spec:
      ports:
      - port: 80
      selector:
        run: php-apache

Run the following command:

    kubectl apply -f https://k8s.io/examples/application/php-apache.yaml

    deployment.apps/php-apache created
    service/php-apache created

Create Horizontal Pod Autoscaler

Now that the server is running, we will create the autoscaler using kubectl autoscale. The following command will create a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods controlled by the php-apache deployment we created in the first step of these instructions. Roughly speaking, HPA will increase and decrease the number of replicas (via the deployment) to maintain an average CPU utilization across all Pods of 50%. Since each Pod requests 200 milli-cores in the deployment manifest above, this means an average CPU usage of 100 milli-cores. See the Horizontal Pod Autoscaler user guide for more details on the algorithm.
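For reference, the core of the documented scaling algorithm is a ratio between the current and desired metric values:

    desiredReplicas = ceil[ currentReplicas * ( currentMetricValue / desiredMetricValue ) ]

For example, with 1 replica and a measured CPU utilization of 305% against the 50% target, the proposed count is ceil(1 * 305 / 50) = 7 replicas, which matches the scale-up observed later in this walkthrough.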

    kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

    horizontalpodautoscaler.autoscaling/php-apache autoscaled

We can check the current status of the autoscaler by running:

    kubectl get hpa

    NAME         REFERENCE                     TARGET     MINPODS   MAXPODS   REPLICAS   AGE
    php-apache   Deployment/php-apache/scale   0% / 50%   1         10        1          18s

Please note that the current CPU consumption is 0% as we are not sending any requests to the server (the TARGET column shows the average across all the pods controlled by the corresponding deployment).
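If you prefer to follow the autoscaler's state as it changes rather than polling, you can use kubectl's standard --watch flag:

    kubectl get hpa php-apache --watch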

Increase load

Now, we will see how the autoscaler reacts to increased load. We will start a container and send an infinite loop of queries to the php-apache service (run this in a separate terminal):

    kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"

Within a minute or so, we should see the higher CPU load by executing:

    kubectl get hpa

    NAME         REFERENCE                     TARGET       MINPODS   MAXPODS   REPLICAS   AGE
    php-apache   Deployment/php-apache/scale   305% / 50%   1         10        1          3m

Here, CPU consumption has increased to 305% of the request. As a result, the deployment was resized to 7 replicas:

    kubectl get deployment php-apache

    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    php-apache   7/7     7            7           19m

Note: It may take a few minutes for the number of replicas to stabilize. Since the amount of load is not controlled in any way, the final number of replicas may differ from this example.

Stop load

We will finish our example by stopping the user load.

In the terminal where we created the container with the busybox image, terminate the load generation by typing <Ctrl> + C.

Then we will verify the result state (after a minute or so):

    kubectl get hpa

    NAME         REFERENCE                     TARGET     MINPODS   MAXPODS   REPLICAS   AGE
    php-apache   Deployment/php-apache/scale   0% / 50%   1         10        1          11m

    kubectl get deployment php-apache

    NAME         READY   UP-TO-DATE   AVAILABLE   AGE
    php-apache   1/1     1            1           27m

Here CPU utilization dropped to 0, and so HPA autoscaled the number of replicas back down to 1.

Note: Autoscaling the replicas may take a few minutes.

Autoscaling on multiple metrics and custom metrics

You can introduce additional metrics to use when autoscaling the php-apache Deployment by making use of the autoscaling/v2beta2 API version.

First, get the YAML of your HorizontalPodAutoscaler in the autoscaling/v2beta2 form:

    kubectl get hpa.v2beta2.autoscaling -o yaml > /tmp/hpa-v2.yaml

Open the /tmp/hpa-v2.yaml file in an editor, and you should see YAML which looks like this:

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: php-apache
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: php-apache
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50
    status:
      observedGeneration: 1
      lastScaleTime: <some-time>
      currentReplicas: 1
      desiredReplicas: 1
      currentMetrics:
      - type: Resource
        resource:
          name: cpu
          current:
            averageUtilization: 0
            averageValue: 0

Notice that the targetCPUUtilizationPercentage field has been replaced with an array called metrics. The CPU utilization metric is a resource metric, since it is represented as a percentage of a resource specified on pod containers. Notice that you can specify other resource metrics besides CPU. By default, the only other supported resource metric is memory. These resources do not change names from cluster to cluster, and should always be available, as long as the metrics.k8s.io API is available.
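For example, a memory-based resource metric block could look like the following sketch (the 60% target is an arbitrary illustrative value; utilization is computed against the containers' memory requests):

    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 60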

You can also specify resource metrics in terms of direct values, instead of as percentages of the requested value, by using a target.type of AverageValue instead of Utilization, and setting the corresponding target.averageValue field instead of the target.averageUtilization.
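For example, targeting raw average CPU usage instead of utilization could be written as follows (a sketch; the 100m value is illustrative):

    - type: Resource
      resource:
        name: cpu
        target:
          type: AverageValue
          averageValue: 100m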

There are two other types of metrics, both of which are considered custom metrics: pod metrics and object metrics. These metrics may have names which are cluster specific, and require a more advanced cluster monitoring setup.

The first of these alternative metric types is pod metrics. These metrics describe Pods, and are averaged together across Pods and compared with a target value to determine the replica count. They work much like resource metrics, except that they only support a target type of AverageValue.

Pod metrics are specified using a metric block like this:

    type: Pods
    pods:
      metric:
        name: packets-per-second
      target:
        type: AverageValue
        averageValue: 1k

The second alternative metric type is object metrics. These metrics describe a different object in the same namespace, instead of describing Pods. The metrics are not necessarily fetched from the object; they only describe it. Object metrics support target types of both Value and AverageValue. With Value, the target is compared directly to the returned metric from the API. With AverageValue, the value returned from the custom metrics API is divided by the number of Pods before being compared to the target. The following example is the YAML representation of the requests-per-second metric.

    type: Object
    object:
      metric:
        name: requests-per-second
      describedObject:
        apiVersion: networking.k8s.io/v1beta1
        kind: Ingress
        name: main-route
      target:
        type: Value
        value: 2k

If you provide multiple such metric blocks, the HorizontalPodAutoscaler will consider each metric in turn. The HorizontalPodAutoscaler will calculate proposed replica counts for each metric, and then choose the one with the highest replica count.

For example, if you had your monitoring system collecting metrics about network traffic, you could update the definition above using kubectl edit to look like this:

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: php-apache
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: php-apache
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 50
      - type: Pods
        pods:
          metric:
            name: packets-per-second
          target:
            type: AverageValue
            averageValue: 1k
      - type: Object
        object:
          metric:
            name: requests-per-second
          describedObject:
            apiVersion: networking.k8s.io/v1beta1
            kind: Ingress
            name: main-route
          target:
            type: Value
            value: 10k
    status:
      observedGeneration: 1
      lastScaleTime: <some-time>
      currentReplicas: 1
      desiredReplicas: 1
      currentMetrics:
      - type: Resource
        resource:
          name: cpu
          current:
            averageUtilization: 0
            averageValue: 0
      - type: Object
        object:
          metric:
            name: requests-per-second
          describedObject:
            apiVersion: networking.k8s.io/v1beta1
            kind: Ingress
            name: main-route
          current:
            value: 10k

Then, your HorizontalPodAutoscaler would attempt to ensure that each pod was consuming roughly 50% of its requested CPU, serving 1000 packets per second, and that all pods behind the main-route Ingress were serving a total of 10000 requests per second.

Autoscaling on more specific metrics

Many metrics pipelines allow you to describe metrics either by name or by a set of additional descriptors called labels. For all non-resource metric types (pod, object, and external, described below), you can specify an additional label selector which is passed to your metric pipeline. For instance, if you collect a metric http_requests with the verb label, you can specify the following metric block to scale only on GET requests:

    type: Object
    object:
      metric:
        name: http_requests
        selector: {matchLabels: {verb: GET}}

This selector uses the same syntax as the full Kubernetes label selectors. The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multiple series. The selector is additive, and cannot select metrics that describe objects that are not the target object (the target pods in the case of the Pods type, and the described object in the case of the Object type).
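The same selector syntax applies to Pods-type metrics. For instance, a Pods metric block restricted to the GET series might look like this sketch (the target value of 10 is illustrative):

    type: Pods
    pods:
      metric:
        name: http_requests
        selector: {matchLabels: {verb: GET}}
      target:
        type: AverageValue
        averageValue: 10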

Autoscaling on metrics not related to Kubernetes objects

Applications running on Kubernetes may need to autoscale based on metrics that don’t have an obvious relationship to any object in the Kubernetes cluster, such as metrics describing a hosted service with no direct correlation to Kubernetes namespaces. In Kubernetes 1.10 and later, you can address this use case with external metrics.

Using external metrics requires knowledge of your monitoring system; the setup is similar to that required when using custom metrics. External metrics allow you to autoscale your cluster based on any metric available in your monitoring system. Just provide a metric block with a name and selector, as above, and use the External metric type instead of Object. If multiple time series are matched by the metricSelector, the sum of their values is used by the HorizontalPodAutoscaler. External metrics support both the Value and AverageValue target types, which function exactly the same as when you use the Object type.

For example, if your application processes tasks from a hosted queue service, you could add the following section to your HorizontalPodAutoscaler manifest to specify that you need one worker per 30 outstanding tasks.

    - type: External
      external:
        metric:
          name: queue_messages_ready
          selector:
            matchLabels:
              queue: worker_tasks
        target:
          type: AverageValue
          averageValue: 30

When possible, it’s preferable to use the custom metric target types instead of external metrics, since it’s easier for cluster administrators to secure the custom metrics API. The external metrics API potentially allows access to any metric, so cluster administrators should take care when exposing it.

Appendix: Horizontal Pod Autoscaler Status Conditions

When using the autoscaling/v2beta2 form of the HorizontalPodAutoscaler, you will be able to see status conditions set by Kubernetes on the HorizontalPodAutoscaler. These status conditions indicate whether or not the HorizontalPodAutoscaler is able to scale, and whether or not it is currently restricted in any way.

The conditions appear in the status.conditions field. To see the conditions affecting a HorizontalPodAutoscaler, we can use kubectl describe hpa:

    kubectl describe hpa cm-test

    Name:                           cm-test
    Namespace:                      prom
    Labels:                         <none>
    Annotations:                    <none>
    CreationTimestamp:              Fri, 16 Jun 2017 18:09:22 +0000
    Reference:                      ReplicationController/cm-test
    Metrics:                        ( current / target )
      "http_requests" on pods:      66m / 500m
    Min replicas:                   1
    Max replicas:                   4
    ReplicationController pods:     1 current / 1 desired
    Conditions:
      Type            Status  Reason              Message
      ----            ------  ------              -------
      AbleToScale     True    ReadyForNewScale    the last scale time was sufficiently old as to warrant a new scale
      ScalingActive   True    ValidMetricFound    the HPA was able to successfully calculate a replica count from pods metric http_requests
      ScalingLimited  False   DesiredWithinRange  the desired replica count is within the acceptable range
    Events:

For this HorizontalPodAutoscaler, we can see several conditions in a healthy state. The first, AbleToScale, indicates whether or not the HPA is able to fetch and update scales, as well as whether or not any backoff-related conditions would prevent scaling. The second, ScalingActive, indicates whether or not the HPA is enabled (i.e. the replica count of the target is not zero) and is able to calculate desired scales. When it is False, it generally indicates problems with fetching metrics. Finally, the last condition, ScalingLimited, indicates that the desired scale was capped by the maximum or minimum of the HorizontalPodAutoscaler. This is an indication that you may wish to raise or lower the minimum or maximum replica count constraints on your HorizontalPodAutoscaler.
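To inspect the raw condition objects rather than the describe summary, you can read them directly from the status field using kubectl's standard jsonpath output:

    kubectl get hpa cm-test -o jsonpath='{.status.conditions}'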

Appendix: Quantities

All metrics in the HorizontalPodAutoscaler and metrics APIs are specified using a special whole-number notation known in Kubernetes as a quantity. For example, the quantity 10500m would be written as 10.5 in decimal notation. The metrics APIs will return whole numbers without a suffix when possible, and will generally return quantities in milli-units otherwise. This means you might see your metric value fluctuate between 1 and 1500m, or 1 and 1.5 when written in decimal notation.
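A few illustrative quantities and their decimal equivalents:

    1        # 1
    1500m    # 1.5
    10500m   # 10.5
    1k       # 1000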

Appendix: Other possible scenarios

Creating the autoscaler declaratively

Instead of using the kubectl autoscale command to create a HorizontalPodAutoscaler imperatively, we can use the following file to create it declaratively:

application/hpa/php-apache.yaml

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: php-apache
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: php-apache
      minReplicas: 1
      maxReplicas: 10
      targetCPUUtilizationPercentage: 50

We will create the autoscaler by executing the following command:

    kubectl create -f https://k8s.io/examples/application/hpa/php-apache.yaml

    horizontalpodautoscaler.autoscaling/php-apache created
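As before, we can verify that the autoscaler exists and inspect its configured targets:

    kubectl get hpa php-apache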