• grafana-agent's metrics collection is fully compatible with the prometheus exporter ecosystem; a number of common exporters are embedded directly in grafana-agent (see the list below);
  • For exporters that are not embedded in grafana-agent, you can configure scrape_configs in grafana-agent to scrape and collect them; see the section on scraping third-party exporters below.

List of exporters built into grafana-agent

Configuration options for the built-in exporters

```yaml
# Configuration of grafana-agent itself
server:
  log_level: info
  http_listen_port: 12345

# Configuration for grafana-agent metrics scraping (similar to prometheus scrape_configs)
metrics:
  global:
    scrape_interval: 15s
    scrape_timeout: 10s
    remote_write:
      - url: https://n9e-server:19000/prometheus/v1/write
        basic_auth:
          username: <string>
          password: <string>

# Configuration related to grafana-agent integrations
integrations:
  ## grafana-agent self-integration:
  ## collection of grafana-agent's own metrics. This is also a built-in
  ## integration and can be enabled or disabled.
  agent:
    ### Whether to enable the integration for grafana-agent itself, allowing
    ### grafana-agent to automatically collect and send its own metrics.
    [enabled: <boolean> | default = false]

    # Sets an explicit value for the instance label when the integration is
    # self-scraped. Overrides inferred values.
    #
    # The default value for this integration is inferred from the agent hostname
    # and HTTP listen port, delimited by a colon.
    [instance: <string>]

    # Automatically collect metrics from this integration. If disabled,
    # the agent integration will be run but not scraped and thus not
    # remote_written. Metrics for the integration will be exposed at
    # /integrations/agent/metrics and can be scraped by an external process.
    ### If set to false, the data at /integrations/agent/metrics is not
    ### scraped and sent automatically, but that endpoint can still be
    ### scraped by an external process.
    [scrape_integration: <boolean> | default = <integrations_config.scrape_integrations>]

    # How often should the metrics be collected? Defaults to
    # prometheus.global.scrape_interval.
    [scrape_interval: <duration> | default = <global_config.scrape_interval>]

    # The timeout before considering the scrape a failure. Defaults to
    # prometheus.global.scrape_timeout.
    [scrape_timeout: <duration> | default = <global_config.scrape_timeout>]

    # How frequently to truncate the WAL for this integration.
    [wal_truncate_frequency: <duration> | default = "60m"]

    # Allows for relabeling labels on the target.
    relabel_configs:
      [- <relabel_config> ... ]

    # Relabel metrics coming from the integration, allowing to drop series
    # from the integration that you don't care about.
    metric_relabel_configs:
      [ - <relabel_config> ... ]

    # Client TLS configuration.
    # Client cert/key values need to be defined if the server is requesting a
    # certificate (Client Auth Type = RequireAndVerifyClientCert || RequireAnyClientCert).
    http_tls_config: <tls_config>

  ## Controls the built-in node_exporter integration
  node_exporter: <node_exporter_config>
  ## Controls the built-in process_exporter integration
  process_exporter: <process_exporter_config>
  ## Controls the built-in mysqld_exporter integration
  mysqld_exporter: <mysqld_exporter_config>
  ## Controls the built-in redis_exporter integration
  redis_exporter: <redis_exporter_config>
  ## Controls the built-in dnsmasq_exporter integration
  dnsmasq_exporter: <dnsmasq_exporter_config>
  ## Controls the built-in elasticsearch_exporter integration
  elasticsearch_exporter: <elasticsearch_exporter_config>
  ## Controls the built-in memcached_exporter integration
  memcached_exporter: <memcached_exporter_config>
  ## Controls the built-in postgres_exporter integration
  postgres_exporter: <postgres_exporter_config>
  ## Controls the built-in statsd_exporter integration
  statsd_exporter: <statsd_exporter_config>
  ## Controls the built-in consul_exporter integration
  consul_exporter: <consul_exporter_config>
  ## Controls the built-in windows_exporter integration
  windows_exporter: <windows_exporter_config>
  ## Controls the built-in kafka_exporter integration
  kafka_exporter: <kafka_exporter_config>
  ## Controls the built-in mongodb_exporter integration
  mongodb_exporter: <mongodb_exporter_config>
  ## Controls the built-in github_exporter integration
  github_exporter: <github_exporter_config>

  # Automatically collect metrics from enabled integrations. If disabled,
  # integrations will be run but not scraped and thus not remote_written. Metrics
  # for integrations will be exposed at /integrations/<integration_key>/metrics
  # and can be scraped by an external process.
  ## If set to false, the exporter metrics endpoints are still exposed, but
  ## grafana-agent does not actively scrape and send them.
  [scrape_integrations: <boolean> | default = true]

  # Extra labels to add to all samples coming from integrations.
  labels:
    { <string>: <string> }

  # The period to wait before restarting an integration that exits with an
  # error.
  [integration_restart_backoff: <duration> | default = "5s"]

  # A list of remote_write targets. Defaults to global_config.remote_write.
  # If provided, overrides the global defaults.
  prometheus_remote_write:
    - [<remote_write>]
```
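To make the reference above concrete, here is a minimal sketch of a working configuration that enables the self-integration and the built-in node_exporter integration. The remote_write URL and the `env` label value are placeholders for illustration:

```yaml
# Minimal sketch (assumed values): enable the agent self-integration and
# the built-in node_exporter integration.
server:
  log_level: info
  http_listen_port: 12345

metrics:
  global:
    scrape_interval: 15s
    # Placeholder endpoint; point this at your own Prometheus-compatible receiver.
    remote_write:
      - url: https://n9e-server:19000/prometheus/v1/write

integrations:
  agent:
    enabled: true
  node_exporter:
    enabled: true
    # Example relabeling: attach a static label to every target of this integration.
    relabel_configs:
      - target_label: env
        replacement: production
```

Once the agent is running, each integration's endpoint can be inspected directly, e.g. `curl http://127.0.0.1:12345/integrations/node_exporter/metrics`, following the `/integrations/<integration_key>/metrics` pattern described above.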

Scraping third-party exporters with grafana-agent

As mentioned at the beginning of this article, for exporters that are not embedded in grafana-agent, you can configure scrape_configs in grafana-agent to scrape and collect their metrics; the configuration format is exactly the same as prometheus scrape_configs.

The detailed options for custom scrape_configs in grafana-agent are as follows:

```yaml
# scrape_configs in the prometheus style; `configs` lives under the `metrics` block
metrics:
  configs:
    # For example, we can scrape grafana-agent's own metrics endpoint:
    # http://127.0.0.1:12345/metrics
    - name: grafana-agent
      host_filter: false
      scrape_configs:
        - job_name: grafana-agent
          scrape_timeout: 10s
          static_configs:
            - targets: ['127.0.0.1:12345']
      remote_write:
        - url: http://localhost:9090/api/v1/write
    # Likewise, we can scrape the metrics endpoint exposed by your own
    # application, e.g. http://helloworld.app:8088/metrics
    - name: outside-exporters
      host_filter: false
      scrape_configs:
        - job_name: prometheus
          static_configs:
            - targets: ['127.0.0.1:9090']
              labels:
                cluster: 'fc-monitoring'
      remote_write:
        - url: https://n9e-server:19000/prometheus/v1/write
          basic_auth:
            username: <string>
            password: <string>
```
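When scraping third-party exporters this way, the metric_relabel_configs mechanism described earlier can also be applied per job to drop series you don't need before they are remote_written. A sketch, assuming you want to drop Go runtime metrics (the `go_.*` pattern is only an example):

```yaml
metrics:
  configs:
    - name: outside-exporters
      host_filter: false
      scrape_configs:
        - job_name: prometheus
          static_configs:
            - targets: ['127.0.0.1:9090']
          # Drop every series whose metric name matches go_.* before remote_write.
          metric_relabel_configs:
            - source_labels: [__name__]
              regex: 'go_.*'
              action: drop
      remote_write:
        - url: https://n9e-server:19000/prometheus/v1/write
```

Dropping series at the agent reduces remote_write traffic and storage on the receiving end, at the cost of those series being unavailable later.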