Query context

The query context is used to pass configuration parameters to a query. Query context parameters can be specified in the following ways:

  • For Druid SQL, context parameters are provided either in a JSON object named context to the HTTP POST API, or as properties to the JDBC connection.
  • For native queries, context parameters are provided in a JSON object named context.
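
For example, a minimal Druid SQL request body posted to the SQL endpoint carries its context parameters in a top-level `context` object. This is a sketch only; the `wikipedia` datasource and the filter value are assumed for illustration:

```json
{
  "query": "SELECT COUNT(*) AS edits FROM wikipedia WHERE channel = '#en.wikipedia'",
  "context": {
    "queryId": "sql-example-1",
    "timeout": 60000
  }
}
```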

Note that a parameter set in the query context overrides both its default value and, if set, the value of the corresponding runtime property druid.query.default.context.{property_key}.
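
As a sketch of this precedence, suppose the runtime property druid.query.default.context.timeout is set to 30000 (a value assumed for illustration). A native query that sets timeout in its context runs with that value instead; the datasource and interval below are likewise hypothetical:

```json
{
  "queryType": "groupBy",
  "dataSource": "wikipedia",
  "intervals": ["2016-06-27/2016-06-28"],
  "granularity": "all",
  "dimensions": ["channel"],
  "aggregations": [{ "type": "count", "name": "rows" }],
  "context": {
    "timeout": 60000
  }
}
```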

General parameters

Unless otherwise noted, the following parameters apply to all query types.

| Parameter | Default | Description |
|-----------|---------|-------------|
| timeout | druid.server.http.defaultQueryTimeout | Query timeout in millis, beyond which unfinished queries will be cancelled. A timeout of 0 means no timeout (up to the server-side maximum query timeout, druid.server.http.maxQueryTimeout). To set the default timeout and maximum timeout, see Broker configuration. |
| priority | 0 | Query priority. Queries with higher priority get precedence for computational resources. The default is the value of the runtime property druid.query.default.context.priority, if set and not null; otherwise 0. Setting priority in the query context overrides both. |
| lane | null | Query lane, used to control usage limits on classes of queries. See Broker configuration for more details. |
| queryId | auto-generated | Unique identifier given to this query. If a query ID is set or known, it can be used to cancel the query. |
| brokerService | null | Broker service to which this query should be routed. This parameter is honored only by a broker selector strategy of type manual. See Router strategies for more details. |
| useCache | true | Flag indicating whether to leverage the query cache for this query. When set to false, disables reading from the query cache for this query. When set to true, Apache Druid uses druid.broker.cache.useCache or druid.historical.cache.useCache to determine whether or not to read from the query cache. |
| populateCache | true | Flag indicating whether to save the results of the query to the query cache. Primarily used for debugging. When set to false, disables saving the results of this query to the query cache. When set to true, Druid uses druid.broker.cache.populateCache or druid.historical.cache.populateCache to determine whether or not to save the results of this query to the query cache. |
| useResultLevelCache | true | Flag indicating whether to leverage the result-level cache for this query. When set to false, disables reading from the result-level cache for this query. When set to true, Druid uses druid.broker.cache.useResultLevelCache to determine whether or not to read from the result-level query cache. |
| populateResultLevelCache | true | Flag indicating whether to save the results of the query to the result-level cache. Primarily used for debugging. When set to false, disables saving the results of this query to the result-level cache. When set to true, Druid uses druid.broker.cache.populateResultLevelCache to determine whether or not to save the results of this query to the result-level query cache. |
| bySegment | false | Native queries only. Return "by segment" results. Primarily used for debugging; setting it to true returns results associated with the data segment they came from. |
| finalize | N/A | Flag indicating whether to "finalize" aggregation results. Primarily used for debugging. For instance, the hyperUnique aggregator returns the full HyperLogLog sketch instead of the estimated cardinality when this flag is set to false. |
| maxScatterGatherBytes | druid.server.http.maxScatterGatherBytes | Maximum number of bytes gathered from data processes, such as Historicals and realtime processes, to execute a query. This parameter can be used to further reduce the maxScatterGatherBytes limit at query time. See Broker configuration for more details. |
| maxQueuedBytes | druid.broker.http.maxQueuedBytes | Maximum number of bytes queued per query before exerting backpressure on the channel to the data server. Similar to maxScatterGatherBytes, except that exceeding this limit triggers backpressure rather than query failure. Zero means disabled. |
| serializeDateTimeAsLong | false | If true, DateTime is serialized as long in the result returned by the Broker and in the data transported between the Broker and compute processes. |
| serializeDateTimeAsLongInner | false | If true, DateTime is serialized as long in the data transported between the Broker and compute processes. |
| enableParallelMerge | true | Enable parallel result merging on the Broker. Note that druid.processing.merge.useParallelMergePool must be enabled for this setting to take effect. See Broker configuration for more details. |
| parallelMergeParallelism | druid.processing.merge.pool.parallelism | Maximum number of parallel threads to use for parallel result merging on the Broker. See Broker configuration for more details. |
| parallelMergeInitialYieldRows | druid.processing.merge.task.initialYieldNumRows | Number of rows to yield per ForkJoinPool merge task for parallel result merging on the Broker, before forking off a new task to continue merging sequences. See Broker configuration for more details. |
| parallelMergeSmallBatchRows | druid.processing.merge.task.smallBatchNumRows | Size of result batches to operate on in ForkJoinPool merge tasks for parallel result merging on the Broker. See Broker configuration for more details. |
| useFilterCNF | false | If true, Druid will attempt to convert the query filter to Conjunctive Normal Form (CNF). During query processing, columns can be pre-filtered by intersecting the bitmap indexes of all values that match the eligible filters, often greatly reducing the raw number of rows which need to be scanned. However, this effect only applies to the top-level filter, or to individual clauses of a top-level 'and' filter, so filters in CNF have a better chance of utilizing a large number of bitmap indexes on string columns during pre-filtering. Use this setting with great caution: it can sometimes hurt performance, and computing the CNF of a filter can itself be expensive. We recommend hand-tuning your filters to produce an optimal form if possible, or at least verifying through experimentation that using this parameter actually improves your query performance with no ill effects. |
| secondaryPartitionPruning | true | Enable secondary partition pruning on the Broker. The Broker always prunes unnecessary segments from the input scan based on a filter on time intervals, but if the data is further partitioned with hash or range partitioning, this option enables additional pruning based on a filter on secondary partition dimensions. |
| enableJoinLeftTableScanDirect | false | This flag applies to queries which have joins. For joins where the left child is a simple scan with a filter, Druid by default runs the scan as a query and then joins the results to the right child on the Broker. Setting this flag to true overrides that behavior, and Druid attempts to push the join down to the data servers instead. Note that this flag can apply even to queries with no explicit join, since the SQL planner can internally translate a query into a join. |
| debug | false | Flag indicating whether to enable debugging outputs for the query. When set to false, no additional logs are produced (logging depends entirely on your logging level). When set to true, the stack trace of any exception produced by the query is logged. |
| setProcessingThreadNames | true | Whether processing thread names are set to queryType_dataSource_intervals while processing a query. This aids in interpreting thread dumps, and is on by default. Query overhead can be reduced slightly by setting this to false; the effect is tiny in most scenarios, but can be meaningful in high-QPS, low-per-segment-processing-time scenarios. |
| maxNumericInFilters | -1 | Max limit for the number of numeric values that can be compared for a string type dimension when the entire SQL WHERE clause of a query translates only to an OR of Bound filters. By default, Druid does not restrict the number of numeric Bound filters on string columns, although this situation may block other queries from running. Set this parameter to a smaller value to prevent Druid from running queries that have prohibitively long segment processing times. The optimal limit requires some trial and error; we recommend starting with 100. Users who submit a query that exceeds the limit of maxNumericInFilters should instead rewrite their queries to use strings in the WHERE clause instead of numbers, for example WHERE someString IN ('123', '456'). This value cannot exceed the system configuration druid.sql.planner.maxNumericInFilters, and is ignored if druid.sql.planner.maxNumericInFilters is not set explicitly. |
| inSubQueryThreshold | 2147483647 | Threshold for the minimum number of values in an IN clause required to convert the query to a JOIN operation on an inlined table rather than a predicate. A threshold of 0 forces use of an inline table in all cases; a threshold of Integer.MAX_VALUE forces use of OR in all cases. |
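
As a sketch of how several general parameters combine, the following context object (all values hypothetical) gives a query a longer timeout, a higher priority, a recognizable ID for later cancellation, and bypasses the segment-level cache in both directions:

```json
{
  "timeout": 120000,
  "priority": 75,
  "queryId": "nightly-report-2024-06-27",
  "useCache": false,
  "populateCache": false
}
```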

Druid SQL parameters

See SQL query context for query context parameters specific to Druid SQL queries.

Parameters by query type

Some query types offer context parameters specific to that query type.

TopN

| Parameter | Default | Description |
|-----------|---------|-------------|
| minTopNThreshold | 1000 | The top minTopNThreshold local results from each segment are returned for merging to determine the global topN. |
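
For instance, a topN query can raise minTopNThreshold when segment-local rankings diverge enough that the default of 1000 risks an inaccurate merged result. The datasource, dimension, and metric names below are assumed for illustration:

```json
{
  "queryType": "topN",
  "dataSource": "wikipedia",
  "intervals": ["2016-06-27/2016-06-28"],
  "granularity": "all",
  "dimension": "page",
  "metric": "edits",
  "threshold": 25,
  "aggregations": [{ "type": "longSum", "name": "edits", "fieldName": "count" }],
  "context": { "minTopNThreshold": 2000 }
}
```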

Timeseries

| Parameter | Default | Description |
|-----------|---------|-------------|
| skipEmptyBuckets | false | Disable timeseries zero-filling behavior, so only buckets with results will be returned. |
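
For example, with skipEmptyBuckets enabled, a minute-granularity timeseries over a sparse datasource (names here are hypothetical) returns only the minutes that contain rows, instead of zero-filling every bucket in the interval:

```json
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "intervals": ["2016-06-27/2016-06-28"],
  "granularity": "minute",
  "aggregations": [{ "type": "count", "name": "rows" }],
  "context": { "skipEmptyBuckets": true }
}
```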

Join filter

| Parameter | Default | Description |
|-----------|---------|-------------|
| enableJoinFilterPushDown | true | Controls whether a join query will attempt filter push down, which reduces the number of rows that have to be compared in a join operation. |
| enableJoinFilterRewrite | true | Controls whether filter clauses that reference non-base table columns will be rewritten into filters on base table columns. |
| enableJoinFilterRewriteValueColumnFilters | false | Controls whether Druid rewrites non-base table filters on non-key columns in the non-base table. Requires a scan of the non-base table. |
| enableRewriteJoinToFilter | true | Controls whether a join can be partially or fully pushed down to the base table as a filter at runtime. |
| joinFilterRewriteMaxSize | 10000 | The maximum size of the correlated value set used for filter rewrites. Set this limit to prevent excessive memory use. |
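
As a tuning sketch (the values are illustrative, not recommendations), a join query whose correlated value set overflows the default rewrite limit might raise joinFilterRewriteMaxSize while leaving the rewrite machinery enabled:

```json
{
  "enableJoinFilterPushDown": true,
  "enableJoinFilterRewrite": true,
  "joinFilterRewriteMaxSize": 20000
}
```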

GroupBy

See the list of GroupBy query context parameters available on the groupBy query page.

Vectorization parameters

The GroupBy and Timeseries query types can run in vectorized mode, which speeds up query execution by processing batches of rows at a time. Not all queries can be vectorized. In particular, vectorization currently has the following requirements:

  • All query-level filters must either be able to run on bitmap indexes or must offer vectorized row-matchers. These include “selector”, “bound”, “in”, “like”, “regex”, “search”, “and”, “or”, and “not”.
  • All filters in filtered aggregators must offer vectorized row-matchers.
  • All aggregators must offer vectorized implementations. These include “count”, “doubleSum”, “floatSum”, “longSum”, “longMin”, “longMax”, “doubleMin”, “doubleMax”, “floatMin”, “floatMax”, “longAny”, “doubleAny”, “floatAny”, “stringAny”, “hyperUnique”, “filtered”, “approxHistogram”, “approxHistogramFold”, and “fixedBucketsHistogram” (with numerical input).
  • All virtual columns must offer vectorized implementations. Currently for expression virtual columns, support for vectorization is decided on a per expression basis, depending on the type of input and the functions used by the expression. See the currently supported list in the expression documentation.
  • For GroupBy: All dimension specs must be “default” (no extraction functions or filtered dimension specs).
  • For GroupBy: No multi-value dimensions.
  • For Timeseries: No “descending” order.
  • Only immutable segments (not real-time).
  • Only table datasources (not joins, subqueries, lookups, or inline datasources).

Other query types (like TopN, Scan, Select, and Search) ignore the vectorize parameter and execute without vectorization, even when vectorize is set to "force".

| Parameter | Default | Description |
|-----------|---------|-------------|
| vectorize | true | Enables or disables vectorized query execution. Possible values are false (disabled), true (enabled if possible, disabled otherwise, on a per-segment basis), and force (enabled, and groupBy or timeseries queries that cannot be vectorized will fail). The "force" setting is meant to aid in testing, and is not generally useful in production (since real-time segments can never be processed with vectorized execution, any query on real-time data will fail). This overrides druid.query.default.context.vectorize if it's set. |
| vectorSize | 512 | Sets the row batching size for a particular query. This overrides druid.query.default.context.vectorSize if it's set. |
| vectorizeVirtualColumns | true | Enables or disables vectorized query processing of queries with virtual columns, layered on top of vectorize (vectorize must also be set to true for a query to utilize vectorization). Possible values are false (disabled), true (enabled if possible, disabled otherwise, on a per-segment basis), and force (enabled, and groupBy or timeseries queries with virtual columns that cannot be vectorized will fail). The "force" setting is meant to aid in testing, and is not generally useful in production. This overrides druid.query.default.context.vectorizeVirtualColumns if it's set. |
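
For example, when testing whether a groupBy or timeseries query can vectorize, a context like the following (the batch size is chosen arbitrarily for illustration) makes any non-vectorizable portion fail loudly rather than silently falling back to non-vectorized execution:

```json
{
  "vectorize": "force",
  "vectorizeVirtualColumns": "force",
  "vectorSize": 1024
}
```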