Deployment in Kubernetes

Deploy Loggie DaemonSet

Make sure the kubectl and helm binaries are installed and executable locally.

Download helm-chart

  VERSION=v1.3.0
  helm pull https://github.com/loggie-io/installation/releases/download/${VERSION}/loggie-${VERSION}.tgz && tar xvzf loggie-${VERSION}.tgz

Please replace ${VERSION} above with a specific version number such as v1.3.0, which can be found in the release tags.

Modify Configuration

cd into the chart directory:

  cd installation/helm-chart

Check values.yml and modify it as needed.

The following parameters are currently configurable:

Image

  image: loggieio/loggie:main

The Loggie image. All images are available on Docker Hub.
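
For example, you may prefer to pin the image to a fixed release rather than the floating main tag (the v1.3.0 tag here is illustrative; choose the tag matching your chart version):

  image: loggieio/loggie:v1.3.0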

Resource

  resources:
    limits:
      cpu: 2
      memory: 2Gi
    requests:
      cpu: 100m
      memory: 100Mi

The resource limits/requests of the Loggie Agent can be adjusted to match actual conditions.

Additional CMD Arguments

  extraArgs: {}

Additional command-line arguments for Loggie. For example, to use the debug log level and disable JSON-formatted logs, modify it as:

  extraArgs:
    log.level: debug
    log.jsonFormat: false

Extra Mount

  extraVolumeMounts:
    - mountPath: /var/log/pods
      name: podlogs
    - mountPath: /var/lib/kubelet/pods
      name: kubelet
    - mountPath: /var/lib/docker
      name: docker
  extraVolumes:
    - hostPath:
        path: /var/log/pods
        type: DirectoryOrCreate
      name: podlogs
    - hostPath:
        path: /var/lib/kubelet/pods
        type: DirectoryOrCreate
      name: kubelet
    - hostPath:
        path: /var/lib/docker
        type: DirectoryOrCreate
      name: docker

It is recommended to mount the directories above according to your actual environment.

Because Loggie itself runs in a container, it needs to mount node paths as volumes in order to collect logs. Otherwise, log files are invisible inside the Loggie container and cannot be collected.

Here is a brief list of which paths need to be mounted when Loggie collects different kinds of logs:

  • Collect stdout: Loggie collects logs from /var/log/pods, so it needs to mount:

    volumeMounts:
      - mountPath: /var/log/pods
        name: podlogs
      - mountPath: /var/lib/docker
        name: docker
    volumes:
      - hostPath:
          path: /var/log/pods
          type: DirectoryOrCreate
        name: podlogs
      - hostPath:
          path: /var/lib/docker
          type: DirectoryOrCreate
        name: docker

    Note that log files under /var/log/pods may be symlinks into the docker root path (/var/lib/docker by default). In that case, /var/lib/docker needs to be mounted as well.

    If another runtime such as containerd is used, there is no need to mount /var/lib/docker; Loggie finds the actual stdout log path from /var/log/pods (see the sketch after this list).

  • Collect logs mounted by business Pods using HostPath: for example, if the business Pods uniformly write their logs under the node path /data/logs, you need to mount that path:

    volumeMounts:
      - mountPath: /data/logs
        name: logs
    volumes:
      - hostPath:
          path: /data/logs
          type: DirectoryOrCreate
        name: logs
  • Collect logs mounted by business Pods using EmptyDir: by default, emptyDir data is stored under /var/lib/kubelet/pods on the node, so Loggie needs to mount this path. If the kubelet configuration changes this location, update the mount accordingly:

    volumeMounts:
      - mountPath: /var/lib/kubelet/pods
        name: kubelet
    volumes:
      - hostPath:
          path: /var/lib/kubelet/pods
          type: DirectoryOrCreate
        name: kubelet
  • Collect logs mounted by business Pods using PV: same as the EmptyDir case.

  • No mount with rootFsCollectionEnabled: true: Loggie automatically finds the actual in-container path through the docker rootfs, so the docker root path needs to be mounted:

    volumeMounts:
      - mountPath: /var/lib/docker
        name: docker
    volumes:
      - hostPath:
          path: /var/lib/docker
          type: DirectoryOrCreate
        name: docker

    If the actual docker root path has been changed, the volumeMount and volume here must be changed accordingly. For example, if the root path is /data/docker, the mount looks like:

    volumeMounts:
      - mountPath: /data/docker
        name: docker
    volumes:
      - hostPath:
          path: /data/docker
          type: DirectoryOrCreate
        name: docker
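
For the containerd case mentioned in the stdout item above, a minimal sketch needs only the /var/log/pods mount (paths assume the defaults):

  volumeMounts:
    - mountPath: /var/log/pods
      name: podlogs
  volumes:
    - hostPath:
        path: /var/log/pods
        type: DirectoryOrCreate
      name: podlogs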

Note:

  • Loggie needs to record the state of collected files (offsets, etc.) so that it does not re-collect files from the beginning after a restart. The registry file defaults to /data/loggie.db inside the container, so the node directory /data/loggie-{{ template "loggie.name" . }} is mounted.
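
As an illustrative sketch only (the volume name registry and the /data mount path are assumptions; the chart's templates define the actual values), the persistence mount might look like:

  volumeMounts:
    - mountPath: /data
      name: registry
  volumes:
    - hostPath:
        path: /data/loggie-{{ template "loggie.name" . }}   # as templated by the chart
        type: DirectoryOrCreate
      name: registry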

Schedule

  nodeSelector: {}
  affinity: {}
  #  podAntiAffinity:
  #    requiredDuringSchedulingIgnoredDuringExecution:
  #    - labelSelector:
  #        matchExpressions:
  #        - key: app
  #          operator: In
  #          values:
  #          - loggie
  #      topologyKey: "kubernetes.io/hostname"

You can use nodeSelector and affinity to control the scheduling of Loggie Pods. For details, please refer to the Kubernetes documentation.
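
For example, a minimal sketch that restricts the agent to Linux nodes using the standard well-known label:

  nodeSelector:
    kubernetes.io/os: linux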

  tolerations: []
  # - effect: NoExecute
  #   operator: Exists
  # - effect: NoSchedule
  #   operator: Exists

If a node has taints, the Loggie Pod cannot be scheduled onto it. To ignore the taints, add the corresponding tolerations.
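
For example, to also run the agent on control-plane nodes (the taint key varies across Kubernetes versions; node-role.kubernetes.io/master is shown as an illustration):

  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule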

Updating Strategy

  updateStrategy:
    type: RollingUpdate

The type can be RollingUpdate or OnDelete.
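
With RollingUpdate, the DaemonSet API also allows bounding how many Pods are replaced at a time. Whether the chart forwards a rollingUpdate block depends on its template, so treat this as a sketch:

  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1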

Global Configuration

  config:
    loggie:
      reload:
        enabled: true
        period: 10s
      monitor:
        logger:
          period: 30s
          enabled: true
        listeners:
          filesource: ~
          filewatcher: ~
          reload: ~
          sink: ~
      discovery:
        enabled: true
        kubernetes:
          containerRuntime: containerd
          fields:
            container.name: containername
            logConfig: logconfig
            namespace: namespace
            node.name: nodename
            pod.name: podname
      http:
        enabled: true
        port: 9196

For a detailed description, please refer to Configuration. Note that if you use a tool such as Kind to deploy Kubernetes locally, Kind uses the containerd runtime by default; in that case you need to set containerRuntime: containerd to specify the containerd runtime.
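
For example, the minimal override for a Kind cluster would look like this (other discovery settings keep their defaults):

  config:
    loggie:
      discovery:
        enabled: true
        kubernetes:
          containerRuntime: containerd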

Service

If Loggie needs to receive data sent by other services, it must expose its ports through a Service.

Under normal circumstances, Loggie in Agent mode only needs to expose its own management port.

  servicePorts:
    - name: monitor
      port: 9196
      targetPort: 9196

Deploy

For the initial deployment, we deploy into the loggie namespace and let helm create the namespace automatically.

  helm install loggie ./ -nloggie --create-namespace

If the loggie namespace already exists in your environment, you can omit -nloggie and --create-namespace. Of course, you can also use your own namespace.

Kubernetes version issue

  failed to install CRD crds/crds.yaml: unable to recognize "": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1"

If you encounter a similar problem during helm install, your Kubernetes version is too old to support CRDs in the apiextensions.k8s.io/v1 version. Loggie temporarily retains a v1beta1 version of the CRD, so delete the v1 version from the chart (rm loggie/crds/crds.yaml) and reinstall.

Check deployment status

After execution, use the helm command to check the deployment status:

  helm list -nloggie

The result should look like:

  NAME    NAMESPACE  REVISION  UPDATED                                  STATUS    CHART          APP VERSION
  loggie  loggie     1         2021-11-30 18:06:16.976334232 +0800 CST  deployed  loggie-v0.1.0  v0.1.0

At the same time, you can also use the kubectl command to check whether the Pod has been created.

  kubectl -nloggie get po

The result should look like:

  loggie-sxxwh   1/1   Running   0   5m21s   10.244.0.5   kind-control-plane   <none>   <none>

Deploy Loggie Aggregator

Deploying the Aggregator is basically the same as deploying the Agent. The Helm chart provides an aggregator configuration section; set enabled: true, as sketched below.
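
A minimal values.yaml change might look like the following, assuming the chart exposes the switch under an aggregator block as described:

  aggregator:
    enabled: true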

The helm chart provides a StatefulSet deployment method; you can also change it to a Deployment or another workload type according to your needs.

At the same time, you can add the following to values.yaml depending on your case:

  • Add nodeSelector or affinity, plus tolerations if the target nodes have taints, so that the Aggregator StatefulSet is scheduled only onto certain nodes.
  • Add a Service port to receive data. For example, to use the grpc source, the default port 6066 needs to be specified:

    servicePorts:
      - name: grpc
        port: 6066
        targetPort: 6066
  • Add a cluster field under discovery.kubernetes, which names the Aggregator cluster and distinguishes it from the Agent or other Loggie clusters, as shown below:

    config:
      loggie:
        discovery:
          enabled: true
          kubernetes:
            cluster: aggregator

Command reference:

  helm install loggie-aggregator ./ -nloggie-aggregator --create-namespace

Note

The Loggie Aggregator can also be deployed as a Deployment or a StatefulSet. Please refer to the DaemonSet instructions above and modify the helm chart yourself.