Slow Log

Search Slow Log

The shard-level slow search log allows slow searches (query and fetch phases) to be logged to a dedicated log file.

Thresholds can be set for both the query phase and the fetch phase of execution. Here is a sample:

  index.search.slowlog.threshold.query.warn: 10s
  index.search.slowlog.threshold.query.info: 5s
  index.search.slowlog.threshold.query.debug: 2s
  index.search.slowlog.threshold.query.trace: 500ms
  index.search.slowlog.threshold.fetch.warn: 1s
  index.search.slowlog.threshold.fetch.info: 800ms
  index.search.slowlog.threshold.fetch.debug: 500ms
  index.search.slowlog.threshold.fetch.trace: 200ms
  index.search.slowlog.level: info

All of the above settings are dynamic and can be set for each index using the update indices settings API. For example:

  PUT /my-index-000001/_settings
  {
    "index.search.slowlog.threshold.query.warn": "10s",
    "index.search.slowlog.threshold.query.info": "5s",
    "index.search.slowlog.threshold.query.debug": "2s",
    "index.search.slowlog.threshold.query.trace": "500ms",
    "index.search.slowlog.threshold.fetch.warn": "1s",
    "index.search.slowlog.threshold.fetch.info": "800ms",
    "index.search.slowlog.threshold.fetch.debug": "500ms",
    "index.search.slowlog.threshold.fetch.trace": "200ms",
    "index.search.slowlog.level": "info"
  }

By default, none of the thresholds are enabled (they are set to -1). The levels (warn, info, debug, trace) control the logging level at which each breach is logged. Not all thresholds need to be configured (for example, only the warn threshold can be set). The benefit of having several levels is the ability to quickly “grep” for breaches of a specific threshold.
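Because the thresholds are dynamic, they can also be switched off again by setting them back to -1 through the same API. A minimal sketch, reusing the index name from the examples above:

  PUT /my-index-000001/_settings
  {
    "index.search.slowlog.threshold.query.warn": "-1",
    "index.search.slowlog.threshold.fetch.warn": "-1"
  }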

Logging is done at the shard level, meaning that it covers the execution of a search request within a specific shard. It does not encompass the whole search request, which may be broadcast to several shards for execution. One benefit of shard-level logging is that it ties the log entry to the actual execution on a specific machine, which request-level logging cannot do.

The log file is configured by default using the following configuration (found in log4j2.properties):

  appender.index_search_slowlog_rolling.type = RollingFile
  appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
  appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.log
  appender.index_search_slowlog_rolling.layout.type = PatternLayout
  appender.index_search_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
  appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%i.log.gz
  appender.index_search_slowlog_rolling.policies.type = Policies
  appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
  appender.index_search_slowlog_rolling.policies.size.size = 1GB
  appender.index_search_slowlog_rolling.strategy.type = DefaultRolloverStrategy
  appender.index_search_slowlog_rolling.strategy.max = 4
  logger.index_search_slowlog_rolling.name = index.search.slowlog
  logger.index_search_slowlog_rolling.level = trace
  logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
  logger.index_search_slowlog_rolling.additivity = false

Identifying search slow log origin

It is often useful to identify what triggered a slow-running query. If a call was initiated with an X-Opaque-ID header, then the user ID is included in the search slow logs as an additional id field (scroll to the right in the log line below).

  [2030-08-30T11:59:37,786][WARN ][i.s.s.query ] [node-0] [index6][0] took[78.4micros], took_millis[0], total_hits[0 hits], stats[], search_type[QUERY_THEN_FETCH], total_shards[1], source[{"query":{"match_all":{"boost":1.0}}}], id[MY_USER_ID],
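An entry like the one above could be produced by passing the header on the search request itself. A minimal sketch using curl; the host, index name, and header value are illustrative, not prescribed by the slow log:

  # illustrative host and index; the X-Opaque-ID value surfaces as the id field in the slow log
  curl -H "X-Opaque-ID: MY_USER_ID" -H "Content-Type: application/json" \
    -X GET "localhost:9200/index6/_search" \
    -d '{"query":{"match_all":{}}}'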

The user ID is also included in JSON logs.

  {
    "type": "index_search_slowlog",
    "timestamp": "2030-08-30T11:59:37,786+02:00",
    "level": "WARN",
    "component": "i.s.s.query",
    "cluster.name": "distribution_run",
    "node.name": "node-0",
    "message": "[index6][0]",
    "took": "78.4micros",
    "took_millis": "0",
    "total_hits": "0 hits",
    "stats": "[]",
    "search_type": "QUERY_THEN_FETCH",
    "total_shards": "1",
    "source": "{\"query\":{\"match_all\":{\"boost\":1.0}}}",
    "id": "MY_USER_ID",
    "cluster.uuid": "Aq-c-PAeQiK3tfBYtig9Bw",
    "node.id": "D7fUYfnfTLa2D7y-xw6tZg"
  }

Index Slow Log

The indexing slow log is similar in functionality to the search slow log. The log file name ends with _index_indexing_slowlog.log. The log and the thresholds are configured in the same way as the search slow log. Index slow log sample:

  index.indexing.slowlog.threshold.index.warn: 10s
  index.indexing.slowlog.threshold.index.info: 5s
  index.indexing.slowlog.threshold.index.debug: 2s
  index.indexing.slowlog.threshold.index.trace: 500ms
  index.indexing.slowlog.level: info
  index.indexing.slowlog.source: 1000

All of the above settings are dynamic and can be set for each index using the update indices settings API. For example:

  PUT /my-index-000001/_settings
  {
    "index.indexing.slowlog.threshold.index.warn": "10s",
    "index.indexing.slowlog.threshold.index.info": "5s",
    "index.indexing.slowlog.threshold.index.debug": "2s",
    "index.indexing.slowlog.threshold.index.trace": "500ms",
    "index.indexing.slowlog.level": "info",
    "index.indexing.slowlog.source": "1000"
  }

By default, Elasticsearch logs the first 1000 characters of the _source in the slow log. You can change that with index.indexing.slowlog.source. Setting it to false or 0 skips logging the source entirely, while setting it to true logs the entire source regardless of size. The original _source is reformatted by default to make sure that it fits on a single log line. If preserving the original document format is important, you can turn off reformatting by setting index.indexing.slowlog.reformat to false, which causes the source to be logged “as is” and can potentially span multiple log lines.
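For instance, to log the full source without reformatting, both settings can be updated together. A minimal sketch, reusing the index name from the examples above:

  PUT /my-index-000001/_settings
  {
    "index.indexing.slowlog.source": true,
    "index.indexing.slowlog.reformat": false
  }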

The index slow log file is configured by default in the log4j2.properties file:

  appender.index_indexing_slowlog_rolling.type = RollingFile
  appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
  appender.index_indexing_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog.log
  appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
  appender.index_indexing_slowlog_rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] [%node_name]%marker %.-10000m%n
  appender.index_indexing_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_indexing_slowlog-%i.log.gz
  appender.index_indexing_slowlog_rolling.policies.type = Policies
  appender.index_indexing_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
  appender.index_indexing_slowlog_rolling.policies.size.size = 1GB
  appender.index_indexing_slowlog_rolling.strategy.type = DefaultRolloverStrategy
  appender.index_indexing_slowlog_rolling.strategy.max = 4
  logger.index_indexing_slowlog.name = index.indexing.slowlog.index
  logger.index_indexing_slowlog.level = trace
  logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
  logger.index_indexing_slowlog.additivity = false