Changes in Rancher v2.5

The following changes were introduced to logging in Rancher v2.5:

  • The Banzai Cloud Logging operator now powers Rancher’s logging solution in place of the former, in-house solution.
  • Fluent Bit is now used to aggregate the logs, and Fluentd is used for filtering the messages and routing them to the outputs. Previously, only Fluentd was used.
  • Logging can be configured with a Kubernetes manifest, because logging now uses a Kubernetes operator with Custom Resource Definitions.
  • We now support filtering logs.
  • We now support writing logs to multiple outputs.
  • We now always collect Control Plane and etcd logs.

The following figure from the Banzai Cloud documentation shows the new logging architecture:

[Figure: How the Banzai Cloud Logging Operator Works with Fluentd and Fluent Bit]

Enabling Logging for Rancher Managed Clusters

You can enable logging for a Rancher managed cluster by going to the Apps page and installing the logging app.

  1. In the Rancher UI, go to the cluster where you want to install logging and click Cluster Explorer.
  2. Click Apps.
  3. Click the rancher-logging app.
  4. Scroll to the bottom of the Helm chart README and click Install.

Result: The logging app is deployed in the cattle-logging-system namespace.
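
If you prefer the command line, an equivalent install is sketched below; it assumes the Rancher chart repository is reachable at https://charts.rancher.io and that the chart names (rancher-logging-crd, rancher-logging) match the app shown in the UI:

  # Add the Rancher charts repository (repository URL is an assumption; adjust for your environment)
  helm repo add rancher-charts https://charts.rancher.io
  helm repo update

  # Install the CRD chart first, then the logging app itself
  helm install rancher-logging-crd rancher-charts/rancher-logging-crd \
    -n cattle-logging-system --create-namespace
  helm install rancher-logging rancher-charts/rancher-logging \
    -n cattle-logging-system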

Uninstall Logging

  1. From the Cluster Explorer, click Apps & Marketplace.
  2. Click Installed Apps.
  3. Go to the cattle-logging-system namespace and check the boxes for rancher-logging and rancher-logging-crd.
  4. Click Delete.
  5. Confirm Delete.

Result: rancher-logging is uninstalled.

Role-based Access Control

Rancher logging has two roles, logging-admin and logging-view.

  • logging-admin gives users full access to namespaced flows and outputs
  • logging-view allows users to view namespaced flows and outputs, and cluster flows and outputs

Why choose one role over the other? Edit access to cluster flow and cluster output resources is powerful. Any user with it has edit access for all logs in the cluster.
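
As an illustration only, a sketch of granting a user logging-admin in a single namespace is shown below; it assumes logging-admin is exposed as a ClusterRole by the logging chart, and the user and namespace names are placeholders:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: alice-logging-admin        # placeholder binding name
    namespace: devteam               # namespace whose flows and outputs the user may manage
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: logging-admin              # assumed ClusterRole name from the logging chart
  subjects:
    - kind: User
      apiGroup: rbac.authorization.k8s.io
      name: alice                    # placeholder user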

In Rancher, the cluster administrator role is the only role with full access to all rancher-logging resources. Cluster members are not able to edit or read any logging resources. Project owners and members have the following privileges:

  • Project owners can create namespaced flows and outputs in their projects’ namespaces, and can collect logs from anything in those namespaces.
  • Project members can only view the flows and outputs in their projects’ namespaces, and cannot collect any logs in those namespaces.

Both project owners and project members require at least one namespace in their project to use logging. If they do not, then they may not see the logging button in the top nav dropdown.

Configuring the Logging Application

To configure the logging application, go to the Cluster Explorer in the Rancher UI. In the upper left corner, click Cluster Explorer > Logging.

Overview of Logging Custom Resources

The following Custom Resource Definitions are used to configure logging: Flow, ClusterFlow, Output, and ClusterOutput.

According to the Banzai Cloud documentation,

You can define outputs (destinations where you want to send your log messages, for example, Elasticsearch, or an Amazon S3 bucket), and flows that use filters and selectors to route log messages to the appropriate outputs. You can also define cluster-wide outputs and flows, for example, to use a centralized output that namespaced users cannot modify.
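
For example, a namespaced flow that routes only selected, filtered logs might look like the sketch below; the namespace, label, and output names are placeholders, and grep is just one of the filters the operator supports:

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Flow
  metadata:
    name: example-flow               # placeholder name
    namespace: devteam               # placeholder namespace
  spec:
    match:
      - select:
          labels:
            app: coolapp             # only collect logs from pods carrying this label
    filters:
      - grep:
          regexp:
            - key: log
              pattern: /error/       # keep only records whose "log" field matches
    localOutputRefs:
      - devteam-splunk               # placeholder output in the same namespace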

Examples

Once logging is installed, you can use these examples to help craft your own logging pipeline.

Cluster Output to Elasticsearch

Let’s say you want to send all logs in your cluster to an Elasticsearch cluster. First, we create a cluster output.

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: ClusterOutput
  metadata:
    name: "example-es"
    namespace: "cattle-logging-system"
  spec:
    elasticsearch:
      host: elasticsearch.example.com
      port: 9200
      scheme: http

We have created this cluster output, with a minimal Elasticsearch configuration, in the same namespace as our operator: cattle-logging-system. Any time we create a cluster flow or cluster output, we have to put it in the cattle-logging-system namespace.

Now that we have configured where we want the logs to go, let’s configure all logs to go to that output.

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: ClusterFlow
  metadata:
    name: "all-logs"
    namespace: "cattle-logging-system"
  spec:
    globalOutputRefs:
      - "example-es"

We should now see our configured index with logs in it.
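
As a quick sanity check (a sketch that assumes the Elasticsearch endpoint above is reachable from your workstation and does not require authentication), you can list the indices and look for one that is receiving documents:

  # List indices; the index name depends on the output's index/logstash settings
  curl http://elasticsearch.example.com:9200/_cat/indices?v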

Output to Splunk

What if we have an application team that only wants logs from a specific namespace sent to a Splunk server? For this case, we can use namespaced outputs and flows.

Before we start, let’s set up that team’s application: coolapp.

  apiVersion: v1
  kind: Namespace
  metadata:
    name: devteam
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: coolapp
    namespace: devteam
    labels:
      app: coolapp
  spec:
    replicas: 2
    selector:
      matchLabels:
        app: coolapp
    template:
      metadata:
        labels:
          app: coolapp
      spec:
        containers:
        - name: generator
          image: paynejacob/loggenerator:latest

With coolapp running, we will follow a similar path as when we created a cluster output. However, unlike cluster outputs, we create our output in our application’s namespace.

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Output
  metadata:
    name: "devteam-splunk"
    namespace: "devteam"
  spec:
    splunkHec:
      hec_host: splunk.example.com
      hec_port: 8088
      protocol: http

Once again, let’s feed our output some logs.

  apiVersion: logging.banzaicloud.io/v1beta1
  kind: Flow
  metadata:
    name: "devteam-logs"
    namespace: "devteam"
  spec:
    localOutputRefs:
      - "devteam-splunk"

Unsupported Output

For the final example, we create an output to write logs to a destination that is not supported out of the box (e.g. syslog):

  apiVersion: v1
  kind: Secret
  metadata:
    name: syslog-config
    namespace: cattle-logging-system
  type: Opaque
  stringData:
    fluent-bit.conf: |
      [INPUT]
          Name              forward
          Port              24224
      [OUTPUT]
          Name              syslog
          InstanceName      syslog-output
          Match             *
          Addr              syslog.example.com
          Port              514
          Cluster           ranchers
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: fluentbit-syslog-forwarder
    namespace: cattle-logging-system
    labels:
      output: syslog
  spec:
    selector:
      matchLabels:
        output: syslog
    template:
      metadata:
        labels:
          output: syslog
      spec:
        containers:
        - name: fluentbit
          image: paynejacob/fluent-bit-out-syslog:latest
          ports:
          - containerPort: 24224
          volumeMounts:
          - mountPath: "/fluent-bit/etc/"
            name: configuration
        volumes:
        - name: configuration
          secret:
            secretName: syslog-config
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: syslog-forwarder
    namespace: cattle-logging-system
  spec:
    selector:
      output: syslog
    ports:
      - protocol: TCP
        port: 24224
        targetPort: 24224
  ---
  apiVersion: logging.banzaicloud.io/v1beta1
  kind: ClusterFlow
  metadata:
    name: all-logs
    namespace: cattle-logging-system
  spec:
    globalOutputRefs:
      - syslog
  ---
  apiVersion: logging.banzaicloud.io/v1beta1
  kind: ClusterOutput
  metadata:
    name: syslog
    namespace: cattle-logging-system
  spec:
    forward:
      servers:
        - host: "syslog-forwarder.cattle-logging-system"
      require_ack_response: false
      ignore_network_errors_at_startup: false

Let’s break down what is happening here. First, we create a deployment of a Fluent Bit container that has the additional syslog output plugin and accepts logs forwarded from another fluentd. Next, we create a cluster output configured as a forwarder to our deployment. The operator’s Fluentd will then forward all logs to our deployment, which writes them to the configured syslog destination.
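
To confirm the forwarder is receiving logs, a hedged check using the names from the manifests above could be:

  # Verify the forwarder pods are running
  kubectl -n cattle-logging-system get pods -l output=syslog

  # Tail the Fluent Bit forwarder's own logs to confirm traffic is arriving
  kubectl -n cattle-logging-system logs deploy/fluentbit-syslog-forwarder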

Note on syslog: Official syslog support is coming in Rancher v2.5.4. However, this example still provides an overview of using unsupported plugins.

Working with a Custom Docker Root Directory

Applies to v2.5.6+

If using a custom Docker root directory, you can set global.dockerRootDirectory in values.yaml. This will ensure that the Logging CRs created will use your specified path rather than the default Docker data-root location.
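
For example, a minimal values.yaml snippet might look like the following; the path shown is a placeholder for your actual custom data-root:

  global:
    dockerRootDirectory: /mnt/docker-root   # placeholder; use your custom Docker data-root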

Working with Taints and Tolerations

“Tainting” a Kubernetes node repels pods from running on that node. Unless a pod has a toleration for that node’s taint, it will run on other nodes in the cluster. Taints and tolerations can work in conjunction with the nodeSelector field within the PodSpec, which has the opposite effect of a taint: nodeSelector gives pods an affinity towards certain nodes. Both provide control over which node(s) a pod will run on.
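
For reference, taints are applied with kubectl; in this sketch, the node name and taint key/value are purely illustrative:

  # Repel pods that do not tolerate this taint from node "worker-1" (hypothetical node name)
  kubectl taint nodes worker-1 example.com/dedicated=logging:NoSchedule

  # Remove the taint again (note the trailing dash)
  kubectl taint nodes worker-1 example.com/dedicated=logging:NoSchedule-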

Default Implementation in Rancher’s Logging Stack

By default, Rancher taints all Linux nodes with cattle.io/os=linux, and does not taint Windows nodes. The logging stack pods have tolerations for this taint, which enables them to run on Linux nodes. Moreover, we can populate the nodeSelector to ensure that our pods only run on Linux nodes. Let’s look at an example pod YAML file with these settings:

  apiVersion: v1
  kind: Pod
  # metadata...
  spec:
    # containers...
    tolerations:
      - key: cattle.io/os
        operator: "Equal"
        value: "linux"
        effect: NoSchedule
    nodeSelector:
      kubernetes.io/os: linux

In the above example, we ensure that our pod only runs on Linux nodes, and we add a toleration for the taint we have on all of our Linux nodes. You can do the same with Rancher’s existing taints, or with your own custom ones.

Windows Support

Clusters with Windows workers support exporting logs from Linux nodes, but Windows node logs cannot currently be exported; only Linux node logs can be exported.

Adding NodeSelector Settings and Tolerations for Custom Taints

If you would like to add your own nodeSelector settings, or if you would like to add tolerations for additional taints, you can pass the following to the chart’s values.

  tolerations:
    # insert tolerations...
  nodeSelector:
    # insert nodeSelector...

These values will add both settings to the fluentd, fluentbit, and logging-operator containers. Essentially, these are global settings for all pods in the logging stack.
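
For instance, a sketch of chart values that tolerates a hypothetical custom taint and pins the logging pods to Linux nodes might look like this; the taint key and value are placeholders:

  tolerations:
    - key: example.com/dedicated     # hypothetical custom taint key
      operator: "Equal"
      value: "logging"
      effect: NoSchedule
  nodeSelector:
    kubernetes.io/os: linux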

However, if you would like to add tolerations for only the fluentbit container, you can add the following to the chart’s values.

  fluentbit_tolerations:
    # insert tolerations list for fluentbit containers only...

Troubleshooting

The cattle-logging Namespace Being Recreated

If your cluster previously deployed logging from the Cluster Manager UI, you may encounter an issue where its cattle-logging namespace is continually being recreated.

The solution is to delete all clusterloggings.management.cattle.io and projectloggings.management.cattle.io custom resources from the cluster specific namespace in the management cluster. The existence of these custom resources causes Rancher to create the cattle-logging namespace in the downstream cluster if it does not exist.

The cluster namespace matches the cluster ID, so we need to find the cluster ID for each cluster.

  1. In your web browser, navigate to your cluster(s) in either the Cluster Manager UI or the Cluster Explorer UI.
  2. Copy the <cluster-id> portion from one of the URLs below. The <cluster-id> portion is the cluster namespace name.
     # Cluster Management UI
     https://<your-url>/c/<cluster-id>/

     # Cluster Explorer UI (Dashboard)
     https://<your-url>/dashboard/c/<cluster-id>/

Now that we have the <cluster-id> namespace, we can delete the CRs that cause cattle-logging to be continually recreated. Warning: ensure that logging, the version installed from the Cluster Manager UI, is not currently in use.

  kubectl delete clusterloggings.management.cattle.io -n <cluster-id>
  kubectl delete projectloggings.management.cattle.io -n <cluster-id>
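
As an optional follow-up check (assuming kubectl is pointed at the Rancher management cluster), you can confirm that no logging custom resources remain in the cluster namespace:

  # Both commands should report that no resources are found once the CRs are deleted
  kubectl get clusterloggings.management.cattle.io -n <cluster-id>
  kubectl get projectloggings.management.cattle.io -n <cluster-id>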