Scan queries

Apache Druid supports two query languages: Druid SQL and native queries. This document describes a query type in the native language. For information about when Druid SQL will use this query type, refer to the SQL documentation.

The Scan query returns raw Apache Druid rows in streaming mode.

In addition to straightforward usage where a Scan query is issued to the Broker, the Scan query can also be issued directly to Historical processes or streaming ingestion tasks. This can be useful if you want to retrieve large amounts of data in parallel.

An example Scan query object is shown below:

```json
{
  "queryType": "scan",
  "dataSource": "wikipedia",
  "resultFormat": "list",
  "columns": [],
  "intervals": [
    "2013-01-01/2013-01-02"
  ],
  "batchSize": 20480,
  "limit": 3
}
```

The following are the main parameters for Scan queries:

| property | description | required? |
|----------|-------------|-----------|
| queryType | This String should always be "scan"; this is the first thing Druid looks at to figure out how to interpret the query. | yes |
| dataSource | A String or Object defining the data source to query, very similar to a table in a relational database. See DataSource for more information. | yes |
| intervals | A JSON Object representing ISO-8601 Intervals. This defines the time ranges to run the query over. | yes |
| resultFormat | How the results are represented: list, compactedList, or valueVector. Currently only list and compactedList are supported. Default is list. | no |
| filter | See Filters. | no |
| columns | A String array of dimensions and metrics to scan. If left empty, all dimensions and metrics are returned. | no |
| batchSize | The maximum number of rows buffered before being returned to the client. Default is 20480. | no |
| limit | How many rows to return. If not specified, all rows will be returned. | no |
| offset | Skip this many rows when returning results. Skipped rows still need to be generated internally and then discarded, so raising offset to high values can cause queries to use additional resources. Together, limit and offset can be used to implement pagination; however, if the underlying datasource is modified between page fetches in ways that affect overall query results, the different pages will not necessarily align with each other. See the pagination example after this table. | no |
| order | The ordering of returned rows based on timestamp: "ascending", "descending", or "none" (default). Currently, "ascending" and "descending" are only supported for queries where the __time column is included in the columns field and the requirements outlined in the time ordering section are met. | no |
| legacy | Return results consistent with the legacy "scan-query" contrib extension. Defaults to the value set by druid.query.scan.legacy, which in turn defaults to false. See Legacy mode for details. | no |
| context | An additional JSON Object which can be used to specify certain flags (see the query context properties section below). | no |
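
As a minimal pagination sketch, the query below fetches the first page of 100 rows; the page size and the column choices (taken from the example dataset later in this document) are illustrative:

```json
{
  "queryType": "scan",
  "dataSource": "wikipedia",
  "resultFormat": "compactedList",
  "columns": ["page", "user"],
  "intervals": ["2013-01-01/2013-01-02"],
  "limit": 100,
  "offset": 0
}
```

To fetch subsequent pages, re-issue the same query with "offset": 100, then "offset": 200, and so on. Keep in mind the caveat above: if the datasource changes between fetches, pages may not align.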

Example results

The format of the result when resultFormat equals list:

```json
[ {
  "segmentId" : "wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9",
  "columns" : [
    "timestamp",
    "robot",
    "namespace",
    "anonymous",
    "unpatrolled",
    "page",
    "language",
    "newpage",
    "user",
    "count",
    "added",
    "delta",
    "variation",
    "deleted"
  ],
  "events" : [ {
    "timestamp" : "2013-01-01T00:00:00.000Z",
    "robot" : "1",
    "namespace" : "article",
    "anonymous" : "0",
    "unpatrolled" : "0",
    "page" : "11._korpus_(NOVJ)",
    "language" : "sl",
    "newpage" : "0",
    "user" : "EmausBot",
    "count" : 1.0,
    "added" : 39.0,
    "delta" : 39.0,
    "variation" : 39.0,
    "deleted" : 0.0
  }, {
    "timestamp" : "2013-01-01T00:00:00.000Z",
    "robot" : "0",
    "namespace" : "article",
    "anonymous" : "0",
    "unpatrolled" : "0",
    "page" : "112_U.S._580",
    "language" : "en",
    "newpage" : "1",
    "user" : "MZMcBride",
    "count" : 1.0,
    "added" : 70.0,
    "delta" : 70.0,
    "variation" : 70.0,
    "deleted" : 0.0
  }, {
    "timestamp" : "2013-01-01T00:00:00.000Z",
    "robot" : "0",
    "namespace" : "article",
    "anonymous" : "0",
    "unpatrolled" : "0",
    "page" : "113_U.S._243",
    "language" : "en",
    "newpage" : "1",
    "user" : "MZMcBride",
    "count" : 1.0,
    "added" : 77.0,
    "delta" : 77.0,
    "variation" : 77.0,
    "deleted" : 0.0
  } ]
} ]
```

The format of the result when resultFormat equals compactedList:

```json
[ {
  "segmentId" : "wikipedia_editstream_2012-12-29T00:00:00.000Z_2013-01-10T08:00:00.000Z_2013-01-10T08:13:47.830Z_v9",
  "columns" : [
    "timestamp", "robot", "namespace", "anonymous", "unpatrolled", "page", "language", "newpage", "user", "count", "added", "delta", "variation", "deleted"
  ],
  "events" : [
    ["2013-01-01T00:00:00.000Z", "1", "article", "0", "0", "11._korpus_(NOVJ)", "sl", "0", "EmausBot", 1.0, 39.0, 39.0, 39.0, 0.0],
    ["2013-01-01T00:00:00.000Z", "0", "article", "0", "0", "112_U.S._580", "en", "1", "MZMcBride", 1.0, 70.0, 70.0, 70.0, 0.0],
    ["2013-01-01T00:00:00.000Z", "0", "article", "0", "0", "113_U.S._243", "en", "1", "MZMcBride", 1.0, 77.0, 77.0, 77.0, 0.0]
  ]
} ]
```

Time ordering

The Scan query currently supports ordering based on timestamp for non-legacy queries. Note that using time ordering yields results that do not indicate which segment rows come from (segmentId will show up as null). In addition, time ordering is only supported when either the result set limit is less than druid.query.scan.maxRowsQueuedForOrdering rows or all segments scanned have fewer than druid.query.scan.maxSegmentPartitionsOrderedInMemory partitions, and it is not supported for queries issued directly to Historicals unless a list of segments is specified. The reasoning behind these limitations is that the implementation of time ordering uses two strategies that can consume too much heap memory if left unbounded. These strategies (listed below) are chosen on a per-Historical basis depending on the query result set limit and the number of segments being scanned.

  1. Priority Queue: Each segment on a Historical is opened sequentially. Every row is added to a bounded priority queue which is ordered by timestamp. For every row above the result set limit, the row with the earliest (if descending) or latest (if ascending) timestamp will be dequeued. After every row has been processed, the sorted contents of the priority queue are streamed back to the Broker(s) in batches. Attempting to load too many rows into memory runs the risk of Historical nodes running out of memory. The druid.query.scan.maxRowsQueuedForOrdering property protects from this by limiting the number of rows in the query result set when time ordering is used.

  2. N-Way Merge: For each segment, each partition is opened in parallel. Since each partition’s rows are already time-ordered, an n-way merge can be performed on the results from each partition. This approach doesn’t persist the entire result set in memory (like the Priority Queue does), since it streams back batches as they are returned from the merge function. However, attempting to query too many partitions could also result in high memory usage due to the need to open decompression and decoding buffers for each. The druid.query.scan.maxSegmentPartitionsOrderedInMemory limit protects against this by capping the number of partitions opened at any time when time ordering is used.

Both druid.query.scan.maxRowsQueuedForOrdering and druid.query.scan.maxSegmentPartitionsOrderedInMemory are configurable and can be tuned based on hardware specs and number of dimensions being queried. These config properties can also be overridden using the maxRowsQueuedForOrdering and maxSegmentPartitionsOrderedInMemory properties in the query context (see the Query Context Properties section).
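
To illustrate, the following sketch of a time-ordered query asks for the 10 earliest rows in the interval. The __time column is included in the columns field, as the requirements above demand, and segmentId will be null in the results because time ordering is in use:

```json
{
  "queryType": "scan",
  "dataSource": "wikipedia",
  "resultFormat": "compactedList",
  "columns": ["__time", "page", "user"],
  "intervals": ["2013-01-01/2013-01-02"],
  "order": "ascending",
  "limit": 10
}
```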

Legacy mode

The Scan query supports a legacy mode designed for protocol compatibility with the former scan-query contrib extension. In legacy mode you can expect the following behavior changes:

  • The __time column is returned as "timestamp" rather than "__time". This will take precedence over any other column you may have that is named "timestamp".
  • The __time column is included in the list of columns even if you do not specifically ask for it.
  • Timestamps are returned as ISO8601 time strings rather than integers (milliseconds since 1970-01-01 00:00:00 UTC).

Legacy mode can be triggered either by passing "legacy" : true in your query JSON, or by setting druid.query.scan.legacy = true on your Druid processes. If you were previously using the scan-query contrib extension, the best way to migrate is to activate legacy mode during a rolling upgrade, then switch it off after the upgrade is complete.
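
For example, a single query can opt into legacy behavior by adding the flag to its JSON (a sketch based on the query object shown earlier):

```json
{
  "queryType": "scan",
  "dataSource": "wikipedia",
  "intervals": ["2013-01-01/2013-01-02"],
  "legacy": true,
  "limit": 3
}
```

With this flag set, the results include the __time column under the name "timestamp", rendered as ISO8601 strings, per the behavior changes listed above.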

Configuration properties

| property | description | values | default |
|----------|-------------|--------|---------|
| druid.query.scan.maxRowsQueuedForOrdering | The maximum number of rows returned when time ordering is used. | An integer in [1, 2147483647] | 100000 |
| druid.query.scan.maxSegmentPartitionsOrderedInMemory | The maximum number of segments scanned per Historical when time ordering is used. | An integer in [1, 2147483647] | 50 |
| druid.query.scan.legacy | Whether legacy mode should be turned on for Scan queries. | true or false | false |
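
For instance, an operator raising both ordering limits cluster-wide might add lines like the following to each Historical's runtime.properties (the values here are illustrative, not recommendations):

```
druid.query.scan.maxRowsQueuedForOrdering=150000
druid.query.scan.maxSegmentPartitionsOrderedInMemory=100
```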

Query context properties

| property | description | values | default |
|----------|-------------|--------|---------|
| maxRowsQueuedForOrdering | The maximum number of rows returned when time ordering is used. Overrides the identically named config. | An integer in [1, 2147483647] | druid.query.scan.maxRowsQueuedForOrdering |
| maxSegmentPartitionsOrderedInMemory | The maximum number of segments scanned per Historical when time ordering is used. Overrides the identically named config. | An integer in [1, 2147483647] | druid.query.scan.maxSegmentPartitionsOrderedInMemory |

Sample query context JSON object:

```json
{
  "maxRowsQueuedForOrdering": 100001,
  "maxSegmentPartitionsOrderedInMemory": 100
}
```
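
Put together, a time-ordered query that overrides both limits through its context might look like the following sketch (the values are illustrative; note the limit stays below maxRowsQueuedForOrdering, as the time ordering section requires):

```json
{
  "queryType": "scan",
  "dataSource": "wikipedia",
  "columns": ["__time", "page"],
  "intervals": ["2013-01-01/2013-01-02"],
  "order": "descending",
  "limit": 100000,
  "context": {
    "maxRowsQueuedForOrdering": 100001,
    "maxSegmentPartitionsOrderedInMemory": 100
  }
}
```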