Logging with Mixer and Fluentd

Mixer is deprecated. The functionality provided by Mixer is being moved into the Envoy proxies. Use of Mixer with Istio will only be supported through the 1.7 release of Istio.

This task shows how to configure Istio to create custom log entries and send them to a Fluentd daemon. Fluentd is an open source log collector that supports many data outputs and has a pluggable architecture. A popular logging backend is Elasticsearch, with Kibana as a viewer. At the end of this task, a new log stream will be enabled that sends logs to an example Fluentd / Elasticsearch / Kibana stack.

The Bookinfo sample application is used as the example application throughout this task.

Before you begin

  • Install Istio in your cluster and deploy an application. This task assumes that Mixer is set up in a default configuration (--configDefaultNamespace=istio-system). If you use a different value, update the configuration and commands in this task to match that value; you can verify the setting as shown below.
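
If you want to confirm the value Mixer is running with, you can inspect the arguments of the Mixer telemetry deployment. The deployment name istio-telemetry below is an assumption based on default installation profiles; adjust it if your installation names the deployment differently:

  # Hypothetical check for default installs: look for --configDefaultNamespace in the Mixer telemetry deployment.
  $ kubectl -n istio-system get deployment istio-telemetry -o yaml | grep configDefaultNamespace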

Setup Fluentd

In your cluster, you may already have a Fluentd DaemonSet running, such as the add-ons described in the Kubernetes and Fluentd documentation, or something specific to your cluster provider. This is likely configured to send logs to an Elasticsearch system or logging provider.

You may use these Fluentd daemons, or any other Fluentd daemon you have set up, as long as they are listening for forwarded logs and Istio’s Mixer is able to connect to them. In order for Mixer to connect to a running Fluentd daemon, you may need to add a Service for Fluentd (see the sketch after the following snippet). The Fluentd configuration to listen for forwarded logs is:

  <source>
    type forward
  </source>
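
For example, if your existing Fluentd pods run as a DaemonSet without a Service in front of them, a minimal Service along the following lines can expose them to Mixer. This is a sketch, not part of the example stack below: the namespace kube-system, the pod label k8s-app: fluentd-es, and the name fluentd-forward are assumptions to adjust to your own deployment; 24224 is Fluentd’s default forward port.

  apiVersion: v1
  kind: Service
  metadata:
    name: fluentd-forward    # hypothetical name; Mixer would then be pointed at fluentd-forward.kube-system:24224
    namespace: kube-system   # assumed namespace of the existing Fluentd DaemonSet
  spec:
    ports:
    - name: fluentd-tcp
      port: 24224
      protocol: TCP
      targetPort: 24224
    selector:
      k8s-app: fluentd-es    # assumed pod label; must match your Fluentd pods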

The full details of connecting Mixer to all possible Fluentd configurations are beyond the scope of this task.

Example Fluentd, Elasticsearch, Kibana Stack

For the purposes of this task, you may deploy the example stack provided. This stack includes Fluentd, Elasticsearch, and Kibana in a non-production-ready set of Services and Deployments, all in a new Namespace called logging.

Save the following as logging-stack.yaml.

  # Logging Namespace. All below are a part of this namespace.
  apiVersion: v1
  kind: Namespace
  metadata:
    name: logging
  ---
  # Elasticsearch Service
  apiVersion: v1
  kind: Service
  metadata:
    name: elasticsearch
    namespace: logging
    labels:
      app: elasticsearch
  spec:
    ports:
    - port: 9200
      protocol: TCP
      targetPort: db
    selector:
      app: elasticsearch
  ---
  # Elasticsearch Deployment
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: elasticsearch
    namespace: logging
    labels:
      app: elasticsearch
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: elasticsearch
    template:
      metadata:
        labels:
          app: elasticsearch
        annotations:
          sidecar.istio.io/inject: "false"
      spec:
        containers:
        - image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.1.1
          name: elasticsearch
          resources:
            # need more cpu upon initialization, therefore burstable class
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          env:
          - name: discovery.type
            value: single-node
          ports:
          - containerPort: 9200
            name: db
            protocol: TCP
          - containerPort: 9300
            name: transport
            protocol: TCP
          volumeMounts:
          - name: elasticsearch
            mountPath: /data
        volumes:
        - name: elasticsearch
          emptyDir: {}
  ---
  # Fluentd Service
  apiVersion: v1
  kind: Service
  metadata:
    name: fluentd-es
    namespace: logging
    labels:
      app: fluentd-es
  spec:
    ports:
    - name: fluentd-tcp
      port: 24224
      protocol: TCP
      targetPort: 24224
    - name: fluentd-udp
      port: 24224
      protocol: UDP
      targetPort: 24224
    selector:
      app: fluentd-es
  ---
  # Fluentd Deployment
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: fluentd-es
    namespace: logging
    labels:
      app: fluentd-es
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: fluentd-es
    template:
      metadata:
        labels:
          app: fluentd-es
        annotations:
          sidecar.istio.io/inject: "false"
      spec:
        containers:
        - name: fluentd-es
          image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
          env:
          - name: FLUENTD_ARGS
            value: --no-supervisor -q
          resources:
            limits:
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
          - name: config-volume
            mountPath: /etc/fluent/config.d
        terminationGracePeriodSeconds: 30
        volumes:
        - name: config-volume
          configMap:
            name: fluentd-es-config
  ---
  # Fluentd ConfigMap, contains config files.
  kind: ConfigMap
  apiVersion: v1
  data:
    forward.input.conf: |-
      # Takes the messages sent over TCP
      <source>
        type forward
      </source>
    output.conf: |-
      <match **>
        type elasticsearch
        log_level info
        include_tag_key true
        host elasticsearch
        port 9200
        logstash_format true
        # Set the chunk limits.
        buffer_chunk_limit 2M
        buffer_queue_limit 8
        flush_interval 5s
        # Never wait longer than 5 minutes between retries.
        max_retry_wait 30
        # Disable the limit on the number of retries (retry forever).
        disable_retry_limit
        # Use multiple threads for processing.
        num_threads 2
      </match>
  metadata:
    name: fluentd-es-config
    namespace: logging
  ---
  # Kibana Service
  apiVersion: v1
  kind: Service
  metadata:
    name: kibana
    namespace: logging
    labels:
      app: kibana
  spec:
    ports:
    - port: 5601
      protocol: TCP
      targetPort: ui
    selector:
      app: kibana
  ---
  # Kibana Deployment
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: kibana
    namespace: logging
    labels:
      app: kibana
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: kibana
    template:
      metadata:
        labels:
          app: kibana
        annotations:
          sidecar.istio.io/inject: "false"
      spec:
        containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana-oss:6.1.1
          resources:
            # need more cpu upon initialization, therefore burstable class
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          env:
          - name: ELASTICSEARCH_URL
            value: http://elasticsearch:9200
          ports:
          - containerPort: 5601
            name: ui
            protocol: TCP
  ---

Create the resources:

  $ kubectl apply -f logging-stack.yaml
  namespace "logging" created
  service "elasticsearch" created
  deployment "elasticsearch" created
  service "fluentd-es" created
  deployment "fluentd-es" created
  configmap "fluentd-es-config" created
  service "kibana" created
  deployment "kibana" created
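
Before continuing, you can check that the three Deployments come up; each pod in the logging namespace should eventually report a Running status:

  $ kubectl -n logging get pods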

Configure Istio

Now that there is a running Fluentd daemon, configure Istio with a new log type, and send those logs to the listening daemon. Apply a YAML file with configuration for the log stream that Istio will generate and collect automatically:

  $ kubectl apply -f samples/bookinfo/telemetry/fluentd-istio.yaml

If you use Istio 1.1.2 or prior, please use the following configuration instead:

  $ kubectl apply -f samples/bookinfo/telemetry/fluentd-istio-crd.yaml
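
The applied configuration declares three Mixer resources: a logentry instance that describes which attributes to log, a fluentd handler that holds the daemon’s address, and a rule that routes the instance to the handler (the pre-1.1.3 variant expresses the same resources using per-template CRDs). In outline it looks roughly like the following sketch; the exact set of variables in the instance, and names such as newlog and newlogtofluentd, follow the sample but are not significant:

  # logentry instance: which attributes to capture for each request
  apiVersion: config.istio.io/v1alpha2
  kind: instance
  metadata:
    name: newlog
    namespace: istio-system
  spec:
    compiledTemplate: logentry
    params:
      severity: '"info"'
      timestamp: request.time
      variables:
        source: source.labels["app"] | source.workload.name | "unknown"
        destination: destination.labels["app"] | destination.workload.name | "unknown"
        responseCode: response.code | 0
        responseSize: response.size | 0
        latency: response.duration | "0ms"
      monitored_resource_type: '"UNSPECIFIED"'
  ---
  # fluentd handler: where to forward the log entries
  apiVersion: config.istio.io/v1alpha2
  kind: handler
  metadata:
    name: handler
    namespace: istio-system
  spec:
    compiledAdapter: fluentd
    params:
      address: "fluentd-es.logging:24224"
  ---
  # rule: send every newlog instance to the fluentd handler
  apiVersion: config.istio.io/v1alpha2
  kind: rule
  metadata:
    name: newlogtofluentd
    namespace: istio-system
  spec:
    match: "true" # match for all requests
    actions:
    - handler: handler
      instances:
      - newlog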

Notice that the address: "fluentd-es.logging:24224" line in the handler configuration points to the Fluentd daemon we set up in the example stack.

View the new logs

  1. Send traffic to the sample application.

    For the Bookinfo sample, visit http://$GATEWAY_URL/productpage in your web browser or issue the following command:

    $ curl http://$GATEWAY_URL/productpage
  2. In a Kubernetes environment, set up port-forwarding for Kibana by executing the following command:

    $ kubectl -n logging port-forward $(kubectl -n logging get pod -l app=kibana -o jsonpath='{.items[0].metadata.name}') 5601:5601 &

    Leave the command running. Press Ctrl-C to exit when done accessing the Kibana UI.

  3. Navigate to the Kibana UI and click “Set up index patterns” in the top right.

  4. Use * as the index pattern, and click “Next step”.

  5. Select @timestamp as the Time Filter field name, and click “Create index pattern.”

  6. Now click “Discover” in the left menu, and start exploring the generated logs.
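
If you would rather verify from the command line than through the Kibana UI, you can also query Elasticsearch directly. Because the example Fluentd output configuration sets logstash_format true, entries are written to daily logstash-* indices; the commands below assume the example stack from this task and a local port-forward:

  $ kubectl -n logging port-forward svc/elasticsearch 9200:9200 &
  $ curl -s 'http://localhost:9200/_cat/indices?v'
  $ curl -s 'http://localhost:9200/logstash-*/_search?size=2&pretty'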

Cleanup

  • Remove the new telemetry configuration:

    $ kubectl delete -f samples/bookinfo/telemetry/fluentd-istio.yaml

    If you are using Istio 1.1.2 or prior:

    $ kubectl delete -f samples/bookinfo/telemetry/fluentd-istio-crd.yaml
  • Remove the example Fluentd, Elasticsearch, Kibana stack:

    $ kubectl delete -f logging-stack.yaml
  • Remove any kubectl port-forward processes that may still be running:

    $ killall kubectl
  • If you are not planning to explore any follow-on tasks, refer to the Bookinfo cleanup instructions to shut down the application.

See also

Mixer and the SPOF Myth

Improving availability and reducing latency.

Mixer Adapter Model

Provides an overview of Mixer’s plug-in architecture.

Classifying Metrics Based on Request or Response (Experimental)

This task shows you how to improve telemetry by grouping requests and responses by their type.

Collecting Logs with Mixer

This task shows you how to configure Istio’s Mixer to collect and customize logs.

Collecting Metrics With Mixer

This task shows you how to configure Istio’s Mixer to collect and customize metrics.

Collecting Metrics for TCP Services

This task shows you how to configure Istio to collect metrics for TCP services.