Overview

  • grafana-agent's metrics collection is fully compatible with the prometheus exporter ecosystem; a number of common exporters are embedded directly in grafana-agent (see the list below).
  • Exporters that are not embedded in grafana-agent can still be scraped and collected by configuring scrape_configs in grafana-agent; see "Scraping third-party exporters with grafana-agent" below.

List of exporters embedded in grafana-agent

Configuration options for the embedded exporters

grafana-agent's own server configuration

```yaml
server:
  log_level: info
  http_listen_port: 12345
```
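For context, the server block sits at the top level of the agent configuration file. A minimal sketch (the values match the example above; the comments reflect endpoints described later in this document):

```yaml
# agent.yaml -- minimal sketch
server:
  log_level: info          # logging verbosity, e.g. debug, info, warn, error
  http_listen_port: 12345  # the agent's own /metrics and the
                           # /integrations/<integration_key>/metrics
                           # endpoints are served on this port
```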

Configuration for scraping metrics with grafana-agent (similar to prometheus scrape_configs)

```yaml
metrics:
  global:
    scrape_interval: 15s
    scrape_timeout: 10s
    remote_write:
      - url: https://n9e-server:19000/prometheus/v1/write
        basic_auth:
          username: <string>
          password: <string>

# grafana-agent integration configuration
integrations:
  ## grafana-agent self-integration
  ## Collects the metrics of grafana-agent itself; this embedded integration
  ## can be enabled or disabled.
  agent:
    ### Whether to enable the self-integration, allowing grafana-agent to
    ### automatically collect and send its own metrics.
    [enabled: <boolean> | default = false]

    # Sets an explicit value for the instance label when the integration is
    # self-scraped. Overrides inferred values.
    #
    # The default value for this integration is inferred from the agent hostname
    # and HTTP listen port, delimited by a colon.
    [instance: <string>]

    # Automatically collect metrics from this integration. If disabled,
    # the agent integration will be run but not scraped and thus not
    # remote_written. Metrics for the integration will be exposed at
    # /integrations/agent/metrics and can be scraped by an external process.
    ### If set to false, the data at /integrations/agent/metrics is not
    ### scraped and sent automatically, but the endpoint can still be
    ### scraped by an external process.
    [scrape_integration: <boolean> | default = <integrations_config.scrape_integrations>]

    # How often the metrics should be collected. Defaults to
    # prometheus.global.scrape_interval.
    [scrape_interval: <duration> | default = <global_config.scrape_interval>]

    # The timeout before considering the scrape a failure. Defaults to
    # prometheus.global.scrape_timeout.
    [scrape_timeout: <duration> | default = <global_config.scrape_timeout>]

    # How frequently to truncate the WAL for this integration.
    [wal_truncate_frequency: <duration> | default = "60m"]

    # Allows for relabeling labels on the target.
    relabel_configs:
      [- <relabel_config> ... ]

    # Relabel metrics coming from the integration, allowing you to drop series
    # from the integration that you don't care about.
    metric_relabel_configs:
      [- <relabel_config> ... ]

    # Client TLS configuration. Client cert/key values need to be defined if
    # the server is requesting a certificate
    # (Client Auth Type = RequireAndVerifyClientCert || RequireAnyClientCert).
    http_tls_config: <tls_config>

  ## Controls the embedded node_exporter
  node_exporter: <node_exporter_config>
  ## Controls the embedded process_exporter
  process_exporter: <process_exporter_config>
  ## Controls the embedded mysqld_exporter
  mysqld_exporter: <mysqld_exporter_config>
  ## Controls the embedded redis_exporter
  redis_exporter: <redis_exporter_config>
  ## Controls the embedded dnsmasq_exporter
  dnsmasq_exporter: <dnsmasq_exporter_config>
  ## Controls the embedded elasticsearch_exporter
  elasticsearch_exporter: <elasticsearch_exporter_config>
  ## Controls the embedded memcached_exporter
  memcached_exporter: <memcached_exporter_config>
  ## Controls the embedded postgres_exporter
  postgres_exporter: <postgres_exporter_config>
  ## Controls the embedded statsd_exporter
  statsd_exporter: <statsd_exporter_config>
  ## Controls the embedded consul_exporter
  consul_exporter: <consul_exporter_config>
  ## Controls the embedded windows_exporter
  windows_exporter: <windows_exporter_config>
  ## Controls the embedded kafka_exporter
  kafka_exporter: <kafka_exporter_config>
  ## Controls the embedded mongodb_exporter
  mongodb_exporter: <mongodb_exporter_config>
  ## Controls the embedded github_exporter
  github_exporter: <github_exporter_config>

  # Automatically collect metrics from enabled integrations. If disabled,
  # integrations will be run but not scraped and thus not remote_written. Metrics
  # for integrations will be exposed at /integrations/<integration_key>/metrics
  # and can be scraped by an external process.
  ## If set to false, the exporter metrics endpoints are still exposed, but
  ## grafana-agent does not actively scrape and send them.
  [scrape_integrations: <boolean> | default = true]

  # Extra labels to add to all samples coming from integrations.
  labels:
    { <string>: <string> }

  # The period to wait before restarting an integration that exits with an
  # error.
  [integration_restart_backoff: <duration> | default = "5s"]

  # A list of remote_write targets. Defaults to global_config.remote_write.
  # If provided, overrides the global defaults.
  prometheus_remote_write:
    - [<remote_write>]
```
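Pulling the reference above together, here is a sketch of a config that enables the self-integration and the embedded node_exporter. The remote_write URL follows the example above; the label value, instance name, and relabel rule are illustrative assumptions, not required settings:

```yaml
metrics:
  global:
    scrape_interval: 15s
    remote_write:
      - url: https://n9e-server:19000/prometheus/v1/write  # endpoint from the example above

integrations:
  scrape_integrations: true        # actively scrape all enabled integrations
  labels:
    env: production                # extra label added to all integration samples (example value)
  agent:
    enabled: true                  # collect grafana-agent's own metrics
  node_exporter:
    enabled: true                  # embedded host-level metrics collector
    relabel_configs:
      - source_labels: [__address__]
        target_label: instance
        replacement: my-host-01    # hypothetical instance name
```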

Scraping third-party exporters with grafana-agent

As noted in the overview, exporters that are not embedded in grafana-agent can be scraped and collected by configuring scrape_configs in grafana-agent; the configuration format is identical to prometheus scrape_configs.

The custom scrape_configs options in grafana-agent are detailed below:

```yaml
# prometheus-style scrape_configs
configs:
  # For example, scrape grafana-agent's own metrics at http://127.0.0.1:12345/metrics
  - name: grafana-agent
    host_filter: false
    scrape_configs:
      - job_name: grafana-agent
        scrape_timeout: 10s
        static_configs:
          - targets: ['127.0.0.1:12345']
    remote_write:
      - url: http://localhost:9090/api/v1/write
  # Or scrape the metrics endpoint exposed by your own application,
  # e.g. http://helloworld.app:8088/metrics
  - name: outside-exporters
    host_filter: false
    scrape_configs:
      - job_name: prometheus
        static_configs:
          - targets: ['127.0.0.1:9090']
            labels:
              cluster: 'fc-monitoring'
    remote_write:
      - url: https://n9e-server:19000/prometheus/v1/write
        basic_auth:
          username: <string>
          password: <string>
```
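As with the embedded integrations, each scrape job here also accepts metric_relabel_configs to drop series you do not care about. A sketch based on the grafana-agent job above (the `go_.*` pattern is just an example; adjust it to the series you want to drop):

```yaml
configs:
  - name: grafana-agent
    host_filter: false
    scrape_configs:
      - job_name: grafana-agent
        static_configs:
          - targets: ['127.0.0.1:12345']
        metric_relabel_configs:
          - source_labels: [__name__]
            regex: 'go_.*'      # drop all Go runtime metrics (example pattern)
            action: drop
    remote_write:
      - url: http://localhost:9090/api/v1/write
```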