Collect Kubernetes Events

In addition to using LogConfig to collect logs, Loggie can configure any source/sink/interceptor through CRDs. In essence, Loggie is a data-stream pipeline engine: it supports multiple pipelines and integrates common capabilities such as queueing and retry, data processing, configuration delivery, and monitoring and alerting, which reduces the development cost of similar requirements. Collecting Kubernetes Events is a good example of this.

Kubernetes Events are generated by Kubernetes’ own components and by some controllers. We can use kubectl describe to view the events associated with a resource. Collecting and persisting these events helps us trace, troubleshoot, audit, and summarize problems, and better understand the internal state of a Kubernetes cluster.
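
An Event is itself an ordinary Kubernetes API object. For reference, the sketch below shows roughly what one looks like in YAML; the values mirror the example event that is collected later on this page:

  apiVersion: v1
  kind: Event
  metadata:
    name: loggie-aggregator.16c277f8fc4ff0d0
    namespace: loggie-aggregator
  involvedObject:
    apiVersion: apps/v1
    kind: DaemonSet
    namespace: loggie-aggregator
    name: loggie-aggregator
  reason: SuccessfulCreate
  message: 'Created pod: loggie-aggregator-pbkjk'
  source:
    component: daemonset-controller
  type: Normal
  count: 1
  firstTimestamp: "2021-12-20T12:58:45Z"
  lastTimestamp: "2021-12-20T12:58:45Z"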

Preparation

Similar to the Loggie Aggregator, we can deploy a separate aggregator cluster or reuse an existing one.
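
The one detail that matters for the configuration below is the cluster name: the aggregator's Loggie system configuration declares a cluster name (here, aggregator), and the selector.cluster field of the ClusterLogConfig must match it. As a rough sketch, assuming the Loggie Helm chart's value layout (treat these keys as assumptions and check the aggregator deployment docs for the authoritative ones):

  config:
    loggie:
      discovery:
        enabled: true
        kubernetes:
          # must match selector.cluster in the ClusterLogConfig below
          cluster: aggregator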

Configuration Example

Configure the kubeEvent source and use type: cluster to distribute the configuration to the Aggregator cluster.

Config

  apiVersion: loggie.io/v1beta1
  kind: ClusterLogConfig
  metadata:
    name: kubeevent
  spec:
    selector:
      type: cluster
      cluster: aggregator
    pipeline:
      sources: |
        - type: kubeEvent
          name: event
      sinkRef: dev
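
Here sinkRef: dev refers to a Sink CRD instance named dev. As a hedged sketch, such a Sink could point at Elasticsearch (the host and index below are placeholders, not values from this page):

  apiVersion: loggie.io/v1beta1
  kind: Sink
  metadata:
    name: dev
  spec:
    sink: |
      type: elasticsearch
      hosts: ["elasticsearch.default.svc:9200"]  # placeholder address
      index: "kube-event"                        # placeholder index name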

By default, whether the data is sent to Elasticsearch or to another sink, the output format is similar to the following:

event

  {
    "body": "{\"metadata\":{\"name\":\"loggie-aggregator.16c277f8fc4ff0d0\",\"namespace\":\"loggie-aggregator\",\"uid\":\"084cea27-cd4a-4ce4-97ef-12e70f37880e\",\"resourceVersion\":\"2975193\",\"creationTimestamp\":\"2021-12-20T12:58:45Z\",\"managedFields\":[{\"manager\":\"kube-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2021-12-20T12:58:45Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:count\":{},\"f:firstTimestamp\":{},\"f:involvedObject\":{\"f:apiVersion\":{},\"f:kind\":{},\"f:name\":{},\"f:namespace\":{},\"f:resourceVersion\":{},\"f:uid\":{}},\"f:lastTimestamp\":{},\"f:message\":{},\"f:reason\":{},\"f:source\":{\"f:component\":{}},\"f:type\":{}}}]},\"involvedObject\":{\"kind\":\"DaemonSet\",\"namespace\":\"loggie-aggregator\",\"name\":\"loggie-aggregator\",\"uid\":\"7cdf4792-815d-4eba-8a81-d60131ad1fc4\",\"apiVersion\":\"apps/v1\",\"resourceVersion\":\"2975170\"},\"reason\":\"SuccessfulCreate\",\"message\":\"Created pod: loggie-aggregator-pbkjk\",\"source\":{\"component\":\"daemonset-controller\"},\"firstTimestamp\":\"2021-12-20T12:58:45Z\",\"lastTimestamp\":\"2021-12-20T12:58:45Z\",\"count\":1,\"type\":\"Normal\",\"eventTime\":null,\"reportingComponent\":\"\",\"reportingInstance\":\"\"}",
    "systemPipelineName": "default/kubeevent/",
    "systemSourceName": "event"
  }

To facilitate analysis and display, we can add interceptors to JSON-decode the collected event data.

The configuration example is as follows; for details, please refer to Log Segmentation.

Config

interceptor

  apiVersion: loggie.io/v1beta1
  kind: Interceptor
  metadata:
    name: jsondecode
  spec:
    interceptors: |
      - type: normalize
        name: json
        processors:
          - jsonDecode: ~
          - drop:
              targets: ["body"]

clusterLogConfig

  apiVersion: loggie.io/v1beta1
  kind: ClusterLogConfig
  metadata:
    name: kubeevent
  spec:
    selector:
      type: cluster
      cluster: aggregator
    pipeline:
      sources: |
        - type: kubeEvent
          name: event
      interceptorRef: jsondecode
      sinkRef: dev

After the jsonDecode processor in the normalize interceptor runs, the data looks like this:

event

  {
    "metadata": {
      "name": "loggie-aggregator.16c277f8fc4ff0d0",
      "namespace": "loggie-aggregator",
      "uid": "084cea27-cd4a-4ce4-97ef-12e70f37880e",
      "resourceVersion": "2975193",
      "creationTimestamp": "2021-12-20T12:58:45Z",
      "managedFields": [
        {
          "fieldsType": "FieldsV1",
          "fieldsV1": {
            "f:type": {},
            "f:count": {},
            "f:firstTimestamp": {},
            "f:involvedObject": {
              "f:apiVersion": {},
              "f:kind": {},
              "f:name": {},
              "f:namespace": {},
              "f:resourceVersion": {},
              "f:uid": {}
            },
            "f:lastTimestamp": {},
            "f:message": {},
            "f:reason": {},
            "f:source": {
              "f:component": {}
            }
          },
          "manager": "kube-controller-manager",
          "operation": "Update",
          "apiVersion": "v1",
          "time": "2021-12-20T12:58:45Z"
        }
      ]
    },
    "reportingComponent": "",
    "type": "Normal",
    "message": "Created pod: loggie-aggregator-pbkjk",
    "reason": "SuccessfulCreate",
    "reportingInstance": "",
    "source": {
      "component": "daemonset-controller"
    },
    "count": 1,
    "lastTimestamp": "2021-12-20T12:58:45Z",
    "firstTimestamp": "2021-12-20T12:58:45Z",
    "eventTime": null,
    "involvedObject": {
      "kind": "DaemonSet",
      "namespace": "loggie-aggregator",
      "name": "loggie-aggregator",
      "uid": "7cdf4792-815d-4eba-8a81-d60131ad1fc4",
      "apiVersion": "apps/v1",
      "resourceVersion": "2975170"
    }
  }

If there are too many fields, or the format still does not meet your requirements, you can configure additional processors in the normalize interceptor to reshape the data.
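
For example, extending the jsondecode interceptor above, the drop processor can discard not only the raw body but also decoded fields that are rarely useful downstream (the extra targets here are illustrative):

  apiVersion: loggie.io/v1beta1
  kind: Interceptor
  metadata:
    name: jsondecode
  spec:
    interceptors: |
      - type: normalize
        name: json
        processors:
          - jsonDecode: ~
          - drop:
              # "body" removes the raw string; the rest trims decoded fields
              targets: ["body", "reportingComponent", "reportingInstance", "eventTime"]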