Quarkus - Centralized log management (Graylog, Logstash, Fluentd)

This guide explains how you can send your logs to a centralized log management system like Graylog, Logstash (inside the Elastic Stack or ELK - Elasticsearch, Logstash, Kibana) or Fluentd (inside EFK - Elasticsearch, Fluentd, Kibana).

There are a lot of different ways to centralize your logs (if you are using Kubernetes, the simplest way is to log to the console and ask your cluster administrator to integrate a central log manager inside your cluster). In this guide, we will show how to send them to an external tool using the quarkus-logging-gelf extension, which can use TCP or UDP to send logs in the Graylog Extended Log Format (GELF).

The quarkus-logging-gelf extension adds a GELF log handler to the underlying logging backend that Quarkus uses (jboss-logmanager). It is disabled by default; if you enable it but still use another handler (the console handler is enabled by default), your logs will be sent to both handlers.
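
If you only want your logs in the centralized system, you can switch the console handler off explicitly. A minimal application.properties sketch:

# The console handler is enabled by default; set this to false
# if you do not want logs written to the console as well.
quarkus.log.console.enable=false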

Example application

The following examples will all be based on the same example application that you can create with the following steps.

Create an application with the quarkus-logging-gelf extension. You can use the following Maven command to create it:

mvn io.quarkus:quarkus-maven-plugin:1.7.6.Final:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=gelf-logging \
    -DclassName="org.acme.quickstart.GelfLoggingResource" \
    -Dpath="/gelf-logging" \
    -Dextensions="logging-gelf"

If you already have your Quarkus project configured, you can add the logging-gelf extension to your project by running the following command in your project base directory:

./mvnw quarkus:add-extension -Dextensions="logging-gelf"

This will add the following to your pom.xml:

<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-logging-gelf</artifactId>
</dependency>

For demonstration purposes, we create an endpoint that does nothing but log a sentence. You don’t need to do this inside your application.

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.jboss.logging.Logger;

@Path("/gelf-logging")
@ApplicationScoped
public class GelfLoggingResource {
    private static final Logger LOG = Logger.getLogger(GelfLoggingResource.class);

    @GET
    public void log() {
        LOG.info("Some useful log message");
    }
}
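
Once the application is running, a simple request to the endpoint produces a log entry; for example (assuming the default Quarkus HTTP port 8080):

curl http://localhost:8080/gelf-logging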

Configure the GELF log handler to send logs to an external UDP endpoint on port 12201:

quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=localhost
quarkus.log.handler.gelf.port=12201
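
UDP is the default transport. If you prefer TCP, the underlying logstash-gelf handler interprets a tcp: prefix on the host as a request to use TCP; a hedged sketch (check the extension's configuration reference before relying on it):

# Assumption: the tcp: prefix switches the handler from UDP (default) to TCP.
quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=tcp:localhost
quarkus.log.handler.gelf.port=12201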

Send logs to Graylog

To send logs to Graylog, you first need to launch the components that compose the Graylog stack:

  • MongoDB

  • Elasticsearch

  • Graylog

You can do this via the following docker-compose file, which you can launch via docker-compose up -d:

version: '3.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.2
    ports:
      - "9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    networks:
      - graylog
  mongo:
    image: mongo:4.0
    networks:
      - graylog
  graylog:
    image: graylog/graylog:3.1
    ports:
      - "9000:9000"
      - "12201:12201/udp"
      - "1514:1514"
    environment:
      GRAYLOG_HTTP_EXTERNAL_URI: "http://127.0.0.1:9000/"
    networks:
      - graylog
    depends_on:
      - elasticsearch
      - mongo
networks:
  graylog:
    driver: bridge

Then, you need to create a UDP input in Graylog. You can do it from the Graylog web console (System → Inputs → Select GELF UDP), available at http://localhost:9000, or via the API.

This curl example creates a new Input of type GELF UDP; it uses the default Graylog login (admin/admin).

curl -H "Content-Type: application/json" -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "X-Requested-By: curl" -X POST -v -d \
'{"title":"udp input","configuration":{"recv_buffer_size":262144,"bind_address":"0.0.0.0","port":12201,"decompress_size_limit":8388608},"type":"org.graylog2.inputs.gelf.udp.GELFUDPInput","global":true}' \
http://localhost:9000/api/system/inputs
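
To verify that the input was created, you can list the existing inputs through the same API, reusing the default credentials:

curl -H "Authorization: Basic YWRtaW46YWRtaW4=" http://localhost:9000/api/system/inputs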

Launch your application; you should see your logs arriving in Graylog.

Send logs to Logstash / the Elastic Stack (ELK)

Logstash comes by default with an input plugin that understands the GELF format; we will first create a pipeline that enables this plugin.

Create the following file in $HOME/pipelines/gelf.conf:

input {
  gelf {
    port => 12201
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
  }
}

Finally, launch the components that compose the Elastic Stack:

  • Elasticsearch

  • Logstash

  • Kibana

You can do this via the following docker-compose file, which you can launch via docker-compose up -d:

# Launch Elasticsearch, Logstash and Kibana
version: '3.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.2
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    networks:
      - elk
  logstash:
    image: docker.elastic.co/logstash/logstash-oss:6.8.2
    volumes:
      - source: $HOME/pipelines
        target: /usr/share/logstash/pipeline
        type: bind
    ports:
      - "12201:12201/udp"
      - "5000:5000"
      - "9600:9600"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.8.2
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge

Launch your application; you should see your logs arriving in the Elastic Stack. You can use Kibana, available at http://localhost:5601/, to access them.

Send logs to Fluentd (EFK)

First, you need to create a Fluentd image with the needed plugins: elasticsearch and input-gelf. You can use the following Dockerfile that should be created inside a fluentd directory.

FROM fluent/fluentd:v1.3-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"]
RUN ["gem", "install", "fluent-plugin-input-gelf", "--version", "0.3.1"]

You can build the image or let docker-compose build it for you.
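
If you prefer to build it yourself, a plain docker build from the fluentd directory is enough (the image tag below is just an example):

docker build -t fluentd-gelf ./fluentd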

Then, you need to create a Fluentd configuration file at $HOME/fluentd/fluent.conf:

<source>
  @type gelf
  tag example.gelf
  bind 0.0.0.0
  port 12201
</source>

<match example.gelf>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>

Finally, launch the components that compose the EFK Stack:

  • Elasticsearch

  • Fluentd

  • Kibana

You can do this via the following docker-compose file, which you can launch via docker-compose up -d:

version: '3.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.2
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    networks:
      - efk
  fluentd:
    build: fluentd
    ports:
      - "12201:12201/udp"
    volumes:
      - source: $HOME/fluentd
        target: /fluentd/etc
        type: bind
    networks:
      - efk
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.8.2
    ports:
      - "5601:5601"
    networks:
      - efk
    depends_on:
      - elasticsearch
networks:
  efk:
    driver: bridge

Launch your application; you should see your logs arriving inside EFK. You can use Kibana, available at http://localhost:5601/, to access them.

Fluentd alternative: use Syslog

You can also send your logs to Fluentd using a Syslog input. Unlike the GELF input, the Syslog input will not render multiline logs in a single event; that is why we advise using the GELF input implemented by Quarkus.

First, you need to create a Fluentd image with the elasticsearch plugin. You can use the following Dockerfile that should be created inside a fluentd directory.

FROM fluent/fluentd:v1.3-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"]

Then, you need to create a Fluentd configuration file at $HOME/fluentd/fluent.conf:

<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  message_format rfc5424
  tag system
</source>

<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>

Then, launch the components that compose the EFK Stack:

  • Elasticsearch

  • Fluentd

  • Kibana

You can do this via the following docker-compose file, which you can launch via docker-compose up -d:

version: '3.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.2
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
    networks:
      - efk
  fluentd:
    build: fluentd
    ports:
      - "5140:5140/udp"
    volumes:
      - source: $HOME/fluentd
        target: /fluentd/etc
        type: bind
    networks:
      - efk
    depends_on:
      - elasticsearch
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.8.2
    ports:
      - "5601:5601"
    networks:
      - efk
    depends_on:
      - elasticsearch
networks:
  efk:
    driver: bridge

Finally, configure your application to send logs to EFK using Syslog:

quarkus.log.syslog.enable=true
quarkus.log.syslog.endpoint=localhost:5140
quarkus.log.syslog.protocol=udp
quarkus.log.syslog.app-name=quarkus
quarkus.log.syslog.hostname=quarkus-test

Launch your application; you should see your logs arriving inside EFK. You can use Kibana, available at http://localhost:5601/, to access them.

Configuration Reference

Configuration is done through the usual application.properties file.

This extension uses the logstash-gelf library, which allows more configuration options via system properties; you can access its documentation at https://logging.paluch.biz/.
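
Beyond the basic host and port settings, the extension exposes further options through application.properties. A hedged sketch of a slightly richer configuration (the additional-field property layout is an assumption; check the extension's configuration reference for the exact names and defaults):

quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=localhost
quarkus.log.handler.gelf.port=12201
# Assumed property layout: attach a static field to every GELF message.
quarkus.log.handler.gelf.additional-field."environment".value=dev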