Version: v1.8

Metrics Collection

If you want to expose the metrics of a component (such as a webservice) in your application to Prometheus, you only need to add the prometheus-scrape trait as follows.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app
spec:
  components:
    - name: my-app
      type: webservice
      properties:
        image: somefive/prometheus-client-example:new
      traits:
        - type: prometheus-scrape
```
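The trait assumes the component's container already serves metrics in the Prometheus text exposition format. As a rough illustration (a stand-in sketch, not the actual code of the somefive/prometheus-client-example image), a minimal endpoint could look like:

```python
# Minimal sketch of a webservice exposing Prometheus metrics at /metrics on
# port 8080, using only the standard library. Real services usually use an
# official Prometheus client library instead of hand-rolling the format.
from http.server import BaseHTTPRequestHandler, HTTPServer

REQUEST_COUNT = 0  # a single counter metric for illustration

def render_metrics():
    # Prometheus text exposition format: HELP/TYPE comments plus samples.
    return (
        "# HELP app_requests_total Total requests handled.\n"
        "# TYPE app_requests_total counter\n"
        "app_requests_total %d\n" % REQUEST_COUNT
    )

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUEST_COUNT
        if self.path == "/metrics":
            body = render_metrics().encode()
        else:
            REQUEST_COUNT += 1  # count "business" requests
            body = b"ok"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; version=0.0.4")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the example quiet
        pass

def main():
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```

With such an endpoint running, `curl localhost:8080/metrics` returns the counter sample that the prometheus-scrape trait makes discoverable to the Prometheus server.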

You can also explicitly specify which port and which path the metrics are exposed on.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app
spec:
  components:
    - name: my-app
      type: webservice
      properties:
        image: somefive/prometheus-client-example:new
      traits:
        - type: prometheus-scrape
          properties:
            port: 8080
            path: /metrics
```

This makes your application scrapable by the Prometheus server. If you want to see those metrics in Grafana, you also need to create a Grafana dashboard. Refer to the Dashboard section for the subsequent steps.

If you want to customize your prometheus-server installation, you can put your configuration into a dedicated ConfigMap, such as my-prom in the o11y-system namespace. To distribute your custom config to all clusters, you can also use a KubeVela Application to do the job.

For example, if you want to add recording rules to the prometheus-server configurations in all clusters, you can first create an application that distributes your recording rules, as below.

```yaml
# my-prom.yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-prom
  namespace: o11y-system
spec:
  components:
    - type: k8s-objects
      name: my-prom
      properties:
        objects:
          - apiVersion: v1
            kind: ConfigMap
            metadata:
              name: my-prom
              namespace: o11y-system
            data:
              my-recording-rules.yml: |
                groups:
                  - name: example
                    rules:
                      - record: apiserver:requests:rate5m
                        expr: sum(rate(apiserver_request_total{job="kubernetes-nodes"}[5m]))
  policies:
    - type: topology
      name: topology
      properties:
        clusterLabelSelector: {}
```
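The recording rule's expr precomputes a five-minute request rate. As a simplified sketch of what PromQL's rate() computes (real Prometheus additionally extrapolates the increase to the window boundaries; the function name and sample data here are illustrative):

```python
# Simplified sketch of PromQL rate(): the per-second increase of a counter
# over a window, from (timestamp, value) samples. Counter resets (the value
# dropping) are treated as the counter restarting from zero.
def approx_rate(samples, window_seconds):
    increase = 0.0
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if v1 >= v0:
            increase += v1 - v0
        else:
            increase += v1  # reset: the counter restarted from zero
    return increase / window_seconds

# A counter scraped every 60s over a 5m window, with one reset at t=180:
samples = [(0, 100), (60, 160), (120, 220), (180, 10), (240, 70), (300, 130)]
print(approx_rate(samples, 300))  # total increase 250 over 300s -> ~0.83/s
```

Precomputing this as apiserver:requests:rate5m means dashboards and alerts can query the stored series instead of re-evaluating the rate() expression each time.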

Then you need to add the customConfig parameter when enabling the prometheus-server addon, like:

```shell
vela addon enable prometheus-server thanos=true serviceType=LoadBalancer storage=1G customConfig=my-prom
```

You will then see the recording rule configuration delivered to all Prometheus instances.
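How the addon wires the custom ConfigMap into the server is an addon implementation detail; conceptually, Prometheus picks up such rules through the rule_files section of its configuration. The fragment below is an assumption for illustration (the mount path is hypothetical), not the addon's literal generated config:

```yaml
# Illustrative prometheus.yml fragment; the actual path is addon-internal.
rule_files:
  - /etc/prometheus/rules/my-recording-rules.yml  # content of the my-prom ConfigMap
```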

To customize other configurations, such as alerting rules, the process is the same as in the recording rules example above; you only need to change or add Prometheus configurations in the application.

```yaml
data:
  my-alerting-rules.yml: |
    groups:
      - name: example
        rules:
          - alert: HighApplicationQueueDepth
            expr: sum(workqueue_depth{app_kubernetes_io_name="vela-core",name="application"}) > 100
            for: 10m
            annotations:
              summary: High Application Queue Depth
```
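The for: 10m clause means the alert fires only after the expression has been continuously true for the whole duration; until then it is merely pending. A minimal sketch of that state machine (the function name and the 60s evaluation interval are illustrative assumptions):

```python
# Sketch of Prometheus alerting "for:" semantics: an alert stays "pending"
# while its expression is true, and becomes "firing" only once it has been
# continuously true for at least `for_seconds`.
def alert_state(breaches, for_seconds, step=60):
    """breaches: one bool per evaluation, spaced `step` seconds apart.
    Returns the alert state after each evaluation."""
    states, pending_since = [], None
    for i, breach in enumerate(breaches):
        now = i * step
        if not breach:
            pending_since = None  # condition cleared: pending timer resets
            states.append("inactive")
        else:
            if pending_since is None:
                pending_since = now
            states.append("firing" if now - pending_since >= for_seconds
                          else "pending")
    return states

# Queue depth above threshold for only 3 minutes, then resolving: never fires.
print(alert_state([True, True, True, False], for_seconds=600))
# Above threshold for 12 consecutive minutes: the last evaluations fire.
print(alert_state([True] * 12, for_seconds=600))
```

This is why a brief queue-depth spike does not page anyone: the condition must hold for the full ten minutes before the alert transitions from pending to firing.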


If you want your prometheus-server to persist data in volumes, you can also specify the storage parameter for your installation, like:

```shell
vela addon enable prometheus-server storage=1G
```

This creates PersistentVolumeClaims and lets the addon use the provided storage. The storage is not automatically recycled even when the addon is disabled; you need to clean it up manually.

Last updated on May 6, 2023 by Tianxin Dong