# Use Filebeat to collect logs of Karmada member clusters

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Elasticsearch or Kafka for indexing.
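For example, a minimal `filebeat.yml` that reads container logs and ships them to Kafka could look like the sketch below. The broker address and topic name are placeholders for illustration only; the rest of this guide writes to a local file instead.

```yaml
# Minimal sketch of a filebeat.yml: one container input, one Kafka output.
# "my-kafka:9092" and "karmada-logs" are placeholder values, not part of this guide's setup.
filebeat.inputs:
  - type: container
    paths:
      - /var/log/containers/*.log

output.kafka:
  hosts: ["my-kafka:9092"]   # placeholder Kafka broker address
  topic: "karmada-logs"      # placeholder topic name
```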

This document demonstrates how to use Filebeat to collect logs of Karmada member clusters.

## Start up Karmada clusters

You only need to clone the Karmada repo and run the following script in the Karmada directory.

```shell
hack/local-up-karmada.sh
```
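After the script completes, you can verify that the control plane sees the member clusters. The check below is a sketch: the kubeconfig path `$HOME/.kube/karmada.config` and the context name `karmada-apiserver` are the values the script usually prints at the end, so substitute whatever it reports in your run.

```shell
# Assumption: hack/local-up-karmada.sh wrote the control-plane kubeconfig to
# $HOME/.kube/karmada.config with a context named karmada-apiserver.
export KUBECONFIG="$HOME/.kube/karmada.config"
kubectl config use-context karmada-apiserver

# member1, member2 and member3 should be listed and eventually become Ready.
kubectl get clusters
```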

## Start Filebeat

1. Create the Filebeat resource objects with the content below. You can specify a list of inputs in the `filebeat.inputs` section of `filebeat.yml`; inputs specify how Filebeat locates and processes input data. You can also configure Filebeat to write to a specific output by setting options in the `Outputs` section of the `filebeat.yml` config file. The example below collects the log information of each container and writes the collected logs to a file. For more detailed information about the input and output configuration, please refer to: https://github.com/elastic/beats/tree/master/filebeat/docs

   ```yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     name: logging
   ---
   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: filebeat
     namespace: logging
     labels:
       k8s-app: filebeat
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: filebeat
   rules:
     - apiGroups: [""] # "" indicates the core API group
       resources:
         - namespaces
         - pods
       verbs:
         - get
         - watch
         - list
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: filebeat
   subjects:
     - kind: ServiceAccount
       name: filebeat
       namespace: logging # must match the namespace of the filebeat ServiceAccount above
   roleRef:
     kind: ClusterRole
     name: filebeat
     apiGroup: rbac.authorization.k8s.io
   ---
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: filebeat-config
     namespace: logging
     labels:
       k8s-app: filebeat
       kubernetes.io/cluster-service: "true"
   data:
     filebeat.yml: |-
       filebeat.inputs:
         - type: container
           paths:
             - /var/log/containers/*.log
           processors:
             - add_kubernetes_metadata:
                 host: ${NODE_NAME}
                 matchers:
                   - logs_path:
                       logs_path: "/var/log/containers/"

       # To enable hints based autodiscover, remove `filebeat.inputs` configuration and uncomment this:
       #filebeat.autodiscover:
       #  providers:
       #    - type: kubernetes
       #      node: ${NODE_NAME}
       #      hints.enabled: true
       #      hints.default_config:
       #        type: container
       #        paths:
       #          - /var/log/containers/*${data.kubernetes.container.id}.log

       processors:
         - add_cloud_metadata:
         - add_host_metadata:

       #output.elasticsearch:
       #  hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
       #  username: ${ELASTICSEARCH_USERNAME}
       #  password: ${ELASTICSEARCH_PASSWORD}
       output.file:
         path: "/tmp/filebeat"
         filename: filebeat
   ---
   apiVersion: apps/v1
   kind: DaemonSet
   metadata:
     name: filebeat
     namespace: logging
     labels:
       k8s-app: filebeat
   spec:
     selector:
       matchLabels:
         k8s-app: filebeat
     template:
       metadata:
         labels:
           k8s-app: filebeat
       spec:
         serviceAccountName: filebeat
         terminationGracePeriodSeconds: 30
         tolerations:
           - effect: NoSchedule
             key: node-role.kubernetes.io/master
         containers:
           - name: filebeat
             image: docker.elastic.co/beats/filebeat:8.0.0-beta1-amd64
             imagePullPolicy: IfNotPresent
             args: ["-c", "/usr/share/filebeat/filebeat.yml", "-e"]
             env:
               - name: NODE_NAME
                 valueFrom:
                   fieldRef:
                     fieldPath: spec.nodeName
             securityContext:
               runAsUser: 0
             resources:
               limits:
                 memory: 200Mi
               requests:
                 cpu: 100m
                 memory: 100Mi
             volumeMounts:
               - name: config
                 mountPath: /usr/share/filebeat/filebeat.yml
                 readOnly: true
                 subPath: filebeat.yml
               - name: inputs
                 mountPath: /usr/share/filebeat/inputs.d
                 readOnly: true
               - name: data
                 mountPath: /usr/share/filebeat/data
               - name: varlibdockercontainers
                 mountPath: /var/lib/docker/containers
                 readOnly: true
               - name: varlog
                 mountPath: /var/log
                 readOnly: true
         volumes:
           - name: config
             configMap:
               defaultMode: 0600
               name: filebeat-config
           - name: varlibdockercontainers
             hostPath:
               path: /var/lib/docker/containers
           - name: varlog
             hostPath:
               path: /var/log
           - name: inputs
             configMap:
               defaultMode: 0600
               name: filebeat-config
           # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
           - name: data
             hostPath:
               path: /var/lib/filebeat-data
               type: DirectoryOrCreate
   ```
2. Run the following command to apply the Karmada PropagationPolicy and ClusterPropagationPolicy.

   ```shell
   cat <<EOF | kubectl apply -f -
   apiVersion: policy.karmada.io/v1alpha1
   kind: PropagationPolicy
   metadata:
     name: filebeat-propagation
     namespace: logging
   spec:
     resourceSelectors:
       - apiVersion: v1
         kind: Namespace
         name: logging
       - apiVersion: v1
         kind: ServiceAccount
         name: filebeat
         namespace: logging
       - apiVersion: v1
         kind: ConfigMap
         name: filebeat-config
         namespace: logging
       - apiVersion: apps/v1
         kind: DaemonSet
         name: filebeat
         namespace: logging
     placement:
       clusterAffinity:
         clusterNames:
           - member1
           - member2
           - member3
   EOF
   cat <<EOF | kubectl apply -f -
   apiVersion: policy.karmada.io/v1alpha1
   kind: ClusterPropagationPolicy
   metadata:
     name: filebeatsrbac-propagation
   spec:
     resourceSelectors:
       - apiVersion: rbac.authorization.k8s.io/v1
         kind: ClusterRole
         name: filebeat
       - apiVersion: rbac.authorization.k8s.io/v1
         kind: ClusterRoleBinding
         name: filebeat
     placement:
       clusterAffinity:
         clusterNames:
           - member1
           - member2
           - member3
   EOF
   ```
3. Obtain the collected logs according to the output configuration in `filebeat.yml`; one way to inspect them on a member cluster is shown in the sketch after this list.
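If propagation succeeded, each member cluster runs the Filebeat DaemonSet, and the `output.file` settings above write the collected logs to `/tmp/filebeat` inside the Filebeat pods. The commands below are a sketch of one way to check this; the kubeconfig path and the context names `member1`/`member2`/`member3` are assumptions based on what `hack/local-up-karmada.sh` typically generates, so adjust them to your environment.

```shell
# Assumption: hack/local-up-karmada.sh wrote the member-cluster kubeconfig to
# $HOME/.kube/members.config with contexts member1, member2 and member3.
export KUBECONFIG="$HOME/.kube/members.config"
kubectl config use-context member1

# Confirm the propagated Filebeat DaemonSet is running in the member cluster.
kubectl -n logging get daemonset filebeat
kubectl -n logging get pods -l k8s-app=filebeat

# Read the collected logs from one Filebeat pod. With output.file as configured
# above, the files live under /tmp/filebeat (names may carry a rotation suffix).
POD=$(kubectl -n logging get pods -l k8s-app=filebeat -o jsonpath='{.items[0].metadata.name}')
kubectl -n logging exec "$POD" -- ls /tmp/filebeat
kubectl -n logging exec "$POD" -- sh -c 'tail -n 20 /tmp/filebeat/filebeat*'
```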

## Reference

- Filebeat input and output configuration: https://github.com/elastic/beats/tree/master/filebeat/docs