# Log Collection

[!TIP] This document was machine-translated by Google. If you find grammatical or semantic errors, or unclear descriptions, please submit a PR.

To keep the business running stably and to spot unhealthy services early, collecting logs helps us observe the current health of a service. In traditional development, when only a few machines are deployed, we usually log in to the server directly to view and debug logs. As the business grows, however, services keep being split out.

Maintenance then becomes more and more complex: in a distributed system there are many servers, and a single service is spread across different machines. When a problem occurs, we can no longer log in to each server to investigate and debug in the traditional way; the complexity is easy to imagine.

*(Figure: log-flow — services write logs to files, filebeat ships them to kafka, go-stash consumes, filters, and writes them to elasticsearch, and kibana visualizes the result.)*

[!TIP] For a simple single-service system, or a service with very little traffic, this setup is not recommended; it would be counterproductive.

## Prepare

- kafka
- elasticsearch
- kibana
- filebeat, Log-Pilot (for k8s)
- go-stash
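
For a local try-out, the middleware above can be brought up with Docker. The compose file below is a sketch that is not part of the original guide: the image choices (`wurstmeister/kafka`, Elastic 7.13.4), ports, and single-node settings are assumptions to adapt to your environment.

```yaml
version: "3"
services:
  zookeeper:
    image: zookeeper:3.6
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka:2.13-2.7.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      # pre-create the topic used by the filebeat config below
      KAFKA_CREATE_TOPICS: "log-collection:1:1"
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.4
    environment:
      discovery.type: single-node
    ports:
      - "9200:9200"
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.4
    depends_on:
      - elasticsearch
    environment:
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    ports:
      - "5601:5601"
```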

## Filebeat

```shell
$ vim xx/filebeat.yaml
```

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    # Enable JSON parsing
    json.keys_under_root: true
    json.add_error_key: true
    # Log file paths
    paths:
      - /var/log/order/*.log

setup.template.settings:
  index.number_of_shards: 1

# Define the kafka topic field
fields:
  log_topic: log-collection

# Output to kafka
output.kafka:
  hosts: ["127.0.0.1:9092"]
  topic: '%{[fields.log_topic]}'
  partition.round_robin:
    reachable_only: false
  required_acks: 1
  keep_alive: 10s

# ================================= Processors =================================
processors:
  - decode_json_fields:
      fields: ['@timestamp', 'level', 'content', 'trace', 'span', 'duration']
      target: ""
```

[!TIP] `xx` is the directory where filebeat.yaml is located.
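
For reference, the `decode_json_fields` processor above expects each line in `/var/log/order/*.log` to be a JSON object carrying those fields. A logx line looks roughly like this (all values illustrative):

```json
{"@timestamp":"2021-07-02T14:03:21.420+08:00","level":"info","content":"order created","trace":"851436fa4f86ab6c","span":"0","duration":"5.2ms"}
```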

## go-stash configuration

- Create a new `config.yaml` file
- Add the configuration content
```shell
$ vim config.yaml
```

```yaml
Clusters:
  - Input:
      Kafka:
        Name: go-stash
        Log:
          Mode: file
        Brokers:
          - "127.0.0.1:9092"
        Topics:
          - log-collection
        Group: stash
        Conns: 3
        Consumers: 10
        Processors: 60
        MinBytes: 1048576
        MaxBytes: 10485760
        Offset: first
    Filters:
      - Action: drop
        Conditions:
          - Key: status
            Value: "503"
            Type: contains
          - Key: type
            Value: "app"
            Type: match
            Op: and
      - Action: remove_field
        Fields:
          - source
          - _score
          - "@metadata"
          - agent
          - ecs
          - input
          - log
          - fields
    Output:
      ElasticSearch:
        Hosts:
          - "http://127.0.0.1:9200"
        Index: "go-stash-{{yyyy.MM.dd}}"
        MaxChunkBytes: 5242880
        GracePeriod: 10s
        Compress: false
        TimeZone: UTC
```

## Start services (in order)

- Start kafka
- Start elasticsearch
- Start kibana
- Start go-stash (see the command sketch after this list)
- Start filebeat
- Start the order-api service and its dependencies (the order-api service in the go-zero-demo project)
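
The go-stash and filebeat start commands, sketched under a few assumptions not stated in the original: the topic is created with the stock `kafka-topics.sh` script (kafka 2.2+ syntax), the `stash` binary was built from the go-stash repository, and its config flag follows go-zero's usual `-f` convention.

```shell
# create the topic that filebeat publishes to
kafka-topics.sh --create --bootstrap-server 127.0.0.1:9092 \
  --partitions 1 --replication-factor 1 --topic log-collection

# run go-stash with the config written above (binary name and -f flag assumed)
./stash -f config.yaml

# run filebeat in the foreground; -e logs to stderr, -c selects the config file
filebeat -e -c xx/filebeat.yaml
```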

## Visit kibana

Open http://127.0.0.1:5601 in your browser to view the collected logs.
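
If nothing shows up, it helps to first confirm that go-stash is writing to elasticsearch at all; a quick check against the ES address from the config above:

```shell
# a go-stash-<date> index should appear once logs start flowing
curl "http://127.0.0.1:9200/_cat/indices?v"
```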

[!TIP] Here we only demonstrate collecting the logs produced by logx in the service; collecting nginx logs works the same way.
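
For completeness, a minimal sketch of a service emitting such logs with logx. This is not code from the go-zero-demo project; the import path, service name, and log path are assumptions chosen to match the filebeat config above.

```go
package main

import "github.com/zeromicro/go-zero/core/logx"

func main() {
	// Write logs as files under /var/log/order, the directory
	// that filebeat tails in the config above.
	logx.MustSetup(logx.LogConf{
		ServiceName: "order-api",
		Mode:        "file",
		Path:        "/var/log/order",
	})
	defer logx.Close()

	// Emits a JSON line with @timestamp, level and content fields.
	logx.Info("order created")
}
```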
