Kubernetes HPA with Custom Metrics

This guide shows how to enable Kubernetes HPAv2 (Horizontal Pod Autoscaler v2) with custom metrics.

The core components required are:

  • Prometheus (deployed with OpenFaaS) - for scraping (collecting), storing, and querying metrics
  • Prometheus Metrics Adapter - to expose Prometheus metrics to the Kubernetes API server
  • Helm for installing the metrics adapter

Install the pre-reqs

If you’re an arkade user, helm is available at ~/.arkade/bin/helm3/; add it to your PATH with export PATH=$PATH:~/.arkade/bin/helm3/.

  • Install OpenFaaS via Helm or via arkade

You will need to use the latest version of the chart and faas-netes.

```sh
arkade install openfaas \
  --set gateway.directFunctions=false
```

Additionally, set gateway.directFunctions=false so that the provider (faas-netes) performs its own load-balancing between Pod IPs instead of relying on Kubernetes Service load-balancing, which can pin KeepAlive connections from the load-testing tool to a single Pod and fail to spread the load evenly.

Create a values.yaml file with overrides for Prometheus when deployed via OpenFaaS:

```yaml
prometheus:
  url: http://prometheus.openfaas.svc
  port: 9090
rules:
  default: false
  custom:
    - seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'
      resources:
        overrides:
          kubernetes_namespace: {resource: "namespace"}
          kubernetes_pod_name: {resource: "pod"}
      name:
        matches: "^(.*)_total"
        as: "${1}_per_second"
      metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[1m])) by (<<.GroupBy>>)'
```

Notes: the URL http://prometheus.openfaas.svc points at the Prometheus instance deployed with OpenFaaS. The custom rule maps the http_requests_total counter to an http_requests_per_second metric in Kubernetes, computed as a rate over a 1m window.
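With the values file in place, the adapter itself can be installed via Helm. A sketch of the install, assuming the prometheus-community chart repository hosts the adapter chart and that the release name prometheus-adapter is free; this requires a live cluster, so adjust names to your environment:

```shell
# Assumption: the adapter chart lives in the prometheus-community repo,
# and values.yaml is the file created above.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install prometheus-adapter prometheus-community/prometheus-adapter \
  --namespace openfaas \
  -f values.yaml

# Once the adapter Pod is Ready, the custom metrics API should respond:
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
```

If the last command returns a resource list rather than an error, the Kubernetes API server can see the adapter's metrics.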

Deploy a function

Deploy the nodeinfo sample, a Node.js HTTP server that prints system information about the container it runs in.

Add two annotations to enable Prometheus scraping, and set a min and max scale to the same number to disable OpenFaaS autoscaling.

```sh
faas-cli store deploy nodeinfo \
  --annotation prometheus.io.scrape=true \
  --annotation prometheus.io.port=8081 \
  --annotation com.openfaas.scale.min=1 \
  --annotation com.openfaas.scale.max=1
```

The OpenFaaS of-watchdog and classic watchdog both expose standard HTTP metrics on port 8081. You can change the port and expose your own metrics if you wish, and alter the path via the prometheus.io.path annotation.

Note: if you are using an older version of OpenFaaS (faas-netes < 0.10.4), you will need to manually edit the function’s deployment and change the prometheus.io.scrape annotation from false to true.

Generate some load

Use the hey tool to generate a low level of load:

```sh
hey -q 1 -c 1 -z 15m http://localhost:8080/function/nodeinfo
```

This sends one request per second for 15 minutes.
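As a back-of-the-envelope check on what this load profile should produce in Prometheus (the values below are simply read off the hey flags above):

```python
# Assumed values, taken from the hey flags: -q 1 -c 1 -z 15m
qps = 1                 # -q: requests per second, per worker
workers = 1             # -c: concurrent workers
duration_s = 15 * 60    # -z: 15 minutes

# Total increase expected on the http_requests_total counter
total_requests = qps * workers * duration_s
print(total_requests)   # 900

# With a single replica, the derived http_requests_per_second metric
# should settle near qps * workers per Pod
print(qps * workers / 1)  # 1.0
```

At roughly 1 req/s per Pod, the function sits well below the scaling target of 5 used later in this guide, so no scaling should occur yet.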

Check the data is populated in Prometheus

Port-forward the Prometheus UI:

```sh
kubectl port-forward svc/prometheus -n openfaas 9090:9090
```

Look at the http_requests_total metric; you should see data for both namespaces:

  • http_requests_total{kubernetes_namespace="openfaas"}
  • http_requests_total{kubernetes_namespace="openfaas-fn"}

Create an HPAv2 rule

Create a rule to scale the function independently of the gateway’s auto-scaling algorithm.

We need to reference the deployment created by OpenFaaS in the openfaas-fn namespace in the scaleTargetRef field so that the autoscaler knows which Deployment to scale.

Then specify the metricName and a targetAverageValue. If the Pods start to process an average of more than 5 requests per second each, the Deployment will be scaled up until the average value falls to 5 or below, or the upper ceiling of 10 Pods is hit.

```sh
cat > nodeinfo-hpa.yaml <<EOF
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: nodeinfo-hpa
  namespace: openfaas-fn
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nodeinfo
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests_per_second
      targetAverageValue: 5
EOF
kubectl apply -f nodeinfo-hpa.yaml
```

Now ramp up the load-test with hey so that the traffic exceeds 5 requests per second:

```sh
hey -q 10 -c 1 -z 15m http://localhost:8080/function/nodeinfo
```

We should see at least 2-3 new Pods come online to deal with the additional load, until the average across them is around 5 or lower.
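The scale-up behaviour follows the standard HPA replica calculation. A minimal sketch of the core formula, simplified to ignore the controller's tolerances and stabilization windows:

```python
import math

def desired_replicas(current, metric_avg, target, min_r=1, max_r=10):
    """Simplified HPA rule:
    desiredReplicas = ceil(currentReplicas * currentMetricValue / targetValue),
    clamped to [minReplicas, maxReplicas]."""
    desired = math.ceil(current * metric_avg / target)
    return max(min_r, min(max_r, desired))

# 1 replica averaging 10 req/s against a 5 req/s target -> scale to 2
print(desired_replicas(1, 10.0, 5.0))   # 2
# Averages far above target are still capped by maxReplicas
print(desired_replicas(5, 50.0, 5.0))   # 10
```

Once the new Pods share the traffic and the per-Pod average drops to the target or below, the desired replica count stops growing.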

Monitor the auto-scaling

Run the following to watch the metrics being observed and new replicas of the nodeinfo deployment coming online:

```sh
watch "kubectl describe -f nodeinfo-hpa.yaml"
```

You’ll see output like the below, which explains the reasoning behind each scaling change:

```
Name:               nodeinfo-hpa
Namespace:          openfaas-fn
Labels:             <none>
CreationTimestamp:  Fri, 24 Apr 2020 13:11:30 +0100
Reference:          Deployment/nodeinfo
Metrics:            ( current / target )
  "http_requests_per_second" on pods: 1890m / 5
Min replicas:       1
Max replicas:       10
Deployment pods:    5 current / 5 desired

  Type    Reason             Age  From                       Message
  ----    ------             ---  ----                       -------
  Normal  SuccessfulRescale  61s  horizontal-pod-autoscaler  New size: 5; reason: All metrics below target
```

You can also inspect the raw counter in Prometheus with the query http_requests_total{kubernetes_namespace="openfaas-fn", faas_function="nodeinfo"}.

View its per-second rate over a one-minute window with: rate(http_requests_total{kubernetes_namespace="openfaas-fn", faas_function="nodeinfo"}[1m])
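As an illustration of what rate() computes (the per-second increase of a counter over the window), here is a sketch with made-up counter samples:

```python
# Hypothetical (timestamp_seconds, counter_value) samples scraped
# from http_requests_total over a one-minute window:
samples = [(0, 900), (30, 930), (60, 960)]

# rate(...[1m]) is, roughly, the counter increase divided by the
# window span, giving requests per second
span = samples[-1][0] - samples[0][0]
per_second = (samples[-1][1] - samples[0][1]) / span
print(per_second)  # 1.0
```

This per-second figure is what the adapter renames to http_requests_per_second and what the HPA compares against targetAverageValue.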