Collecting Kubernetes Events

Besides collecting logs with LogConfig, Loggie can wire up arbitrary source/sink/interceptor combinations through CRDs. At its core, Loggie is a multi-pipeline data stream that integrates common capabilities such as queueing and retries, data processing, configuration delivery, and monitoring/alerting, which lowers the development cost for requirements of this kind. Collecting Kubernetes Events is a good example.

Kubernetes Events are events produced by Kubernetes components and controllers. The familiar `kubectl describe` command shows the events associated with a resource. Collecting and persisting these events helps us trace back, troubleshoot, audit, and review problems, and gives a better picture of the cluster's internal state.

Preparation

As with the Loggie aggregator (relay) setup, we can deploy a dedicated Aggregator cluster or reuse an existing relay cluster.

Configuration Example

Configure a kubeEvent source and use `type: cluster` to dispatch the configuration to the Aggregator cluster.

Config

apiVersion: loggie.io/v1beta1
kind: ClusterLogConfig
metadata:
  name: kubeevent
spec:
  selector:
    type: cluster
    cluster: aggregator
  pipeline:
    sources: |
      - type: kubeEvent
        name: event
    sinkRef: dev

By default, whether the events are sent to Elasticsearch or to another sink, the output looks like this:

event

{
  "body": "{\"metadata\":{\"name\":\"loggie-aggregator.16c277f8fc4ff0d0\",\"namespace\":\"loggie-aggregator\",\"uid\":\"084cea27-cd4a-4ce4-97ef-12e70f37880e\",\"resourceVersion\":\"2975193\",\"creationTimestamp\":\"2021-12-20T12:58:45Z\",\"managedFields\":[{\"manager\":\"kube-controller-manager\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2021-12-20T12:58:45Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:count\":{},\"f:firstTimestamp\":{},\"f:involvedObject\":{\"f:apiVersion\":{},\"f:kind\":{},\"f:name\":{},\"f:namespace\":{},\"f:resourceVersion\":{},\"f:uid\":{}},\"f:lastTimestamp\":{},\"f:message\":{},\"f:reason\":{},\"f:source\":{\"f:component\":{}},\"f:type\":{}}}]},\"involvedObject\":{\"kind\":\"DaemonSet\",\"namespace\":\"loggie-aggregator\",\"name\":\"loggie-aggregator\",\"uid\":\"7cdf4792-815d-4eba-8a81-d60131ad1fc4\",\"apiVersion\":\"apps/v1\",\"resourceVersion\":\"2975170\"},\"reason\":\"SuccessfulCreate\",\"message\":\"Created pod: loggie-aggregator-pbkjk\",\"source\":{\"component\":\"daemonset-controller\"},\"firstTimestamp\":\"2021-12-20T12:58:45Z\",\"lastTimestamp\":\"2021-12-20T12:58:45Z\",\"count\":1,\"type\":\"Normal\",\"eventTime\":null,\"reportingComponent\":\"\",\"reportingInstance\":\"\"}",
  "systemPipelineName": "default/kubeevent/",
  "systemSourceName": "event"
}
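The `body` field above is a JSON-encoded string. Conceptually, decoding it and dropping the raw string (the transformation the interceptor below performs) can be sketched in Python as follows; this is an illustrative sketch of the transformation, not Loggie's actual implementation, and the sample field values are abbreviated:

```python
import json

def json_decode_event(event: dict) -> dict:
    """Mimic jsonDecode on "body" followed by dropping "body":
    parse the JSON string and promote its fields to the top level."""
    decoded = json.loads(event["body"])                        # parse the JSON-encoded string
    result = {k: v for k, v in event.items() if k != "body"}   # drop the raw body field
    result.update(decoded)                                     # promote decoded fields
    return result

# Abbreviated version of the sample output above
raw = {
    "body": "{\"reason\": \"SuccessfulCreate\", \"type\": \"Normal\", \"count\": 1}",
    "systemPipelineName": "default/kubeevent/",
    "systemSourceName": "event",
}
print(json_decode_event(raw))
```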

To make the data easier to analyze and display, we can add interceptors to JSON-decode the collected events.

An example configuration follows; for details, see Log Splitting and Processing.

Config

interceptor

apiVersion: loggie.io/v1beta1
kind: Interceptor
metadata:
  name: jsondecode
spec:
  interceptors: |
    - type: normalize
      name: json
      processors:
        - jsonDecode: ~
        - drop:
            target: ["body"]

clusterLogConfig

apiVersion: loggie.io/v1beta1
kind: ClusterLogConfig
metadata:
  name: kubeevent
spec:
  selector:
    type: cluster
    cluster: aggregator
  pipeline:
    sources: |
      - type: kubeEvent
        name: event
    interceptorRef: jsondecode
    sinkRef: dev

After jsonDecode in the normalize interceptor, the data looks like this:

event

{
  "metadata": {
    "name": "loggie-aggregator.16c277f8fc4ff0d0",
    "namespace": "loggie-aggregator",
    "uid": "084cea27-cd4a-4ce4-97ef-12e70f37880e",
    "resourceVersion": "2975193",
    "creationTimestamp": "2021-12-20T12:58:45Z",
    "managedFields": [
      {
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:type": {},
          "f:count": {},
          "f:firstTimestamp": {},
          "f:involvedObject": {
            "f:apiVersion": {},
            "f:kind": {},
            "f:name": {},
            "f:namespace": {},
            "f:resourceVersion": {},
            "f:uid": {}
          },
          "f:lastTimestamp": {},
          "f:message": {},
          "f:reason": {},
          "f:source": {
            "f:component": {}
          }
        },
        "manager": "kube-controller-manager",
        "operation": "Update",
        "apiVersion": "v1",
        "time": "2021-12-20T12:58:45Z"
      }
    ]
  },
  "reportingComponent": "",
  "type": "Normal",
  "message": "Created pod: loggie-aggregator-pbkjk",
  "reason": "SuccessfulCreate",
  "reportingInstance": "",
  "source": {
    "component": "daemonset-controller"
  },
  "count": 1,
  "lastTimestamp": "2021-12-20T12:58:45Z",
  "firstTimestamp": "2021-12-20T12:58:45Z",
  "eventTime": null,
  "involvedObject": {
    "kind": "DaemonSet",
    "namespace": "loggie-aggregator",
    "name": "loggie-aggregator",
    "uid": "7cdf4792-815d-4eba-8a81-d60131ad1fc4",
    "apiVersion": "apps/v1",
    "resourceVersion": "2975170"
  }
}

If there are too many fields, or the format does not meet your needs, you can configure the normalize interceptor with additional processors to reshape the data.
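For instance, the verbose `metadata` block could be dropped along with `body`. The sketch below is illustrative: the Interceptor name `eventtrim` is made up, and the exact set of processors and their options should be checked against the normalize interceptor documentation for your Loggie version.

```yaml
apiVersion: loggie.io/v1beta1
kind: Interceptor
metadata:
  name: eventtrim
spec:
  interceptors: |
    - type: normalize
      name: json
      processors:
        - jsonDecode: ~
        - drop:
            # drop the raw string and the verbose metadata block
            target: ["body", "metadata"]
```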