Elastic Log Alerting

Learn how to create an asynchronous function that detects error logs in a log stream and sends alerts.

Overview

This document uses an asynchronous function to analyze the log stream in Kafka and filter out error logs. The async function then sends alerts to Slack. The following diagram illustrates the entire workflow.

Figure 1: Elastic Log Alerting workflow

Prerequisites

Create a Kafka Server and Topic

  1. Run the following commands to install strimzi-kafka-operator in the default namespace.

     ```shell
     helm repo add strimzi https://strimzi.io/charts/
     helm install kafka-operator -n default strimzi/strimzi-kafka-operator
     ```
  2. Use the following content to create a file kafka.yaml.

     ```yaml
     apiVersion: kafka.strimzi.io/v1beta2
     kind: Kafka
     metadata:
       name: kafka-logs-receiver
       namespace: default
     spec:
       kafka:
         version: 3.3.1
         replicas: 1
         listeners:
           - name: plain
             port: 9092
             type: internal
             tls: false
           - name: tls
             port: 9093
             type: internal
             tls: true
         config:
           offsets.topic.replication.factor: 1
           transaction.state.log.replication.factor: 1
           transaction.state.log.min.isr: 1
           default.replication.factor: 1
           min.insync.replicas: 1
           inter.broker.protocol.version: "3.1"
         storage:
           type: ephemeral
       zookeeper:
         replicas: 1
         storage:
           type: ephemeral
       entityOperator:
         topicOperator: {}
         userOperator: {}
     ---
     apiVersion: kafka.strimzi.io/v1beta2
     kind: KafkaTopic
     metadata:
       name: logs
       namespace: default
       labels:
         strimzi.io/cluster: kafka-logs-receiver
     spec:
       partitions: 10
       replicas: 1
       config:
         retention.ms: 7200000
         segment.bytes: 1073741824
     ```
  3. Run the following command to deploy a 1-replica Kafka server named kafka-logs-receiver and a 1-replica Kafka topic named logs in the default namespace.

     ```shell
     kubectl apply -f kafka.yaml
     ```
  4. Run the following command to check pod status and wait for Kafka and Zookeeper to be up and running.

     ```shell
     $ kubectl get po
     NAME                                                   READY   STATUS    RESTARTS   AGE
     kafka-logs-receiver-entity-operator-57dc457ccc-tlqqs   3/3     Running   0          8m42s
     kafka-logs-receiver-kafka-0                            1/1     Running   0          9m13s
     kafka-logs-receiver-zookeeper-0                        1/1     Running   0          9m46s
     strimzi-cluster-operator-687fdd6f77-cwmgm              1/1     Running   0          11m
     ```
  5. Run the following commands to view the metadata of the Kafka cluster.

     ```shell
     # Starts a utility pod.
     $ kubectl run utils --image=arunvelsriram/utils -i --tty --rm
     # Checks metadata of the Kafka cluster.
     $ kafkacat -L -b kafka-logs-receiver-kafka-brokers:9092
     ```

Create a Logs Handler Function

  1. Use the following content to create a manifest logs-handler-function.yaml, and modify the value of spec.image to set your own image registry address. Note that the broker addresses point at the kafka-logs-receiver cluster created above.

     ```yaml
     apiVersion: core.openfunction.io/v1beta2
     kind: Function
     metadata:
       name: logs-async-handler
       namespace: default
     spec:
       build:
         builder: openfunction/builder-go:latest
         env:
           FUNC_CLEAR_SOURCE: "true"
           FUNC_NAME: LogsHandler
         srcRepo:
           revision: main
           sourceSubPath: functions/async/logs-handler-function/
           url: https://github.com/OpenFunction/samples.git
       image: openfunctiondev/logs-async-handler:v1
       imageCredentials:
         name: push-secret
       serving:
         bindings:
           kafka-receiver:
             metadata:
               - name: brokers
                 value: kafka-logs-receiver-kafka-brokers:9092
               - name: authRequired
                 value: "false"
               - name: publishTopic
                 value: logs
               - name: topics
                 value: logs
               - name: consumerGroup
                 value: logs-handler
             type: bindings.kafka
             version: v1
           notification-manager:
             metadata:
               - name: url
                 value: http://notification-manager-svc.kubesphere-monitoring-system.svc.cluster.local:19093/api/v2/alerts
             type: bindings.http
             version: v1
         outputs:
           - dapr:
               name: notification-manager
               operation: post
               type: bindings.http
         scaleOptions:
           keda:
             scaledObject:
               advanced:
                 horizontalPodAutoscalerConfig:
                   behavior:
                     scaleDown:
                       policies:
                         - periodSeconds: 15
                           type: Percent
                           value: 50
                       stabilizationWindowSeconds: 45
                     scaleUp:
                       stabilizationWindowSeconds: 0
               cooldownPeriod: 60
               pollingInterval: 15
             triggers:
               - metadata:
                   bootstrapServers: kafka-logs-receiver-kafka-brokers.default.svc.cluster.local:9092
                   consumerGroup: logs-handler
                   lagThreshold: "20"
                   topic: logs
                 type: kafka
           maxReplicas: 10
           minReplicas: 0
         template:
           containers:
             - imagePullPolicy: IfNotPresent
               name: function
         triggers:
           dapr:
             - name: kafka-receiver
               type: bindings.kafka
         workloadType: Deployment
       version: v2.0.0
       workloadRuntime: OCIContainer
     ```
  2. Run the following command to create the function logs-async-handler.

     ```shell
     kubectl apply -f logs-handler-function.yaml
     ```

The logs handler function will be triggered by messages from the logs topic in Kafka.
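Per the Overview, the handler's core job is to filter error entries out of the log stream and post them as alerts. The Go sketch below illustrates that logic using only the standard library; the JSON log shape (a `level`/`message` pair) and the Alertmanager-style alert body are illustrative assumptions, not the actual code in the OpenFunction samples repository.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// logEntry is a hypothetical shape for one JSON log line from the logs topic.
type logEntry struct {
	Level   string `json:"level"`
	Message string `json:"message"`
}

// alert mirrors an Alertmanager-style alert (labels + annotations), the kind
// of body the notification-manager HTTP binding would POST to /api/v2/alerts.
type alert struct {
	Labels      map[string]string `json:"labels"`
	Annotations map[string]string `json:"annotations"`
}

// toAlert parses one log line and returns an alert for error-level entries,
// or nil when the entry is not an error and no alert should be sent.
func toAlert(raw []byte) (*alert, error) {
	var e logEntry
	if err := json.Unmarshal(raw, &e); err != nil {
		return nil, err
	}
	if e.Level != "error" {
		return nil, nil // not an error log; nothing to alert on
	}
	return &alert{
		Labels:      map[string]string{"alertname": "error-log", "severity": "warning"},
		Annotations: map[string]string{"message": e.Message},
	}, nil
}

func main() {
	// Simulate two records consumed from the logs topic.
	for _, line := range []string{
		`{"level":"info","message":"started"}`,
		`{"level":"error","message":"disk full"}`,
	} {
		a, err := toAlert([]byte(line))
		if err != nil {
			panic(err)
		}
		if a != nil {
			body, _ := json.Marshal([]*alert{a})
			fmt.Println(string(body)) // payload the output binding would POST
		}
	}
}
```

In the deployed function, this decision runs inside the handler named by FUNC_NAME (LogsHandler); the kafka-receiver binding delivers each record and the notification-manager output binding performs the POST, so no Kafka or HTTP client code is needed in the function itself.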