Router Process

The Apache Druid Router process can be used to route queries to different Broker processes. By default, the Router routes queries based on how Rules are set up. For example, if one month of recent data is loaded into a hot cluster, queries that fall within the recent month can be routed to a dedicated set of Brokers, while queries outside this range are routed to another set of Brokers. This setup provides query isolation such that queries for more important data are not impacted by queries for less important data.
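
Tiering of this kind is typically driven by retention (load) rules set on the Coordinator. The sketch below is illustrative only; the tier names and replicant counts are assumptions, not defaults:

  [
    { "type": "loadByPeriod", "period": "P1M", "tieredReplicants": { "hot": 2 } },
    { "type": "loadForever", "tieredReplicants": { "_default_tier": 2 } }
  ]

A matching druid.router.tierToBrokerMap (shown in the production example at the end of this page) then maps each tier to its dedicated Broker service.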

For query routing purposes, you should only ever need the Router process if you have a Druid cluster well into the terabyte range.

In addition to query routing, the Router also runs the web console, a management UI for datasources, segments, tasks, data processes (Historicals and MiddleManagers), and coordinator dynamic configuration. The user can also run SQL and native Druid queries within the console.
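
The console is served from the Router's own HTTP port; with the default port of 8888, it is reachable at a URL like the following (the hostname is a placeholder):

  http://ROUTER_HOST:8888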

Configuration

For Apache Druid Router Process Configuration, see Router Configuration.

For basic tuning guidance for the Router process, see Basic cluster tuning.

HTTP endpoints

For a list of API endpoints supported by the Router, see Router API.

Running

  org.apache.druid.cli.Main server router

Router as management proxy

The Router can be configured to forward requests to the active Coordinator or Overlord process. This may be useful for setting up a highly available cluster in situations where the HTTP redirect mechanism of the inactive -> active Coordinator/Overlord does not function correctly (servers are behind a load balancer, the hostname used in the redirect is only resolvable internally, etc.).

Enabling the management proxy

To enable this functionality, set the following in the Router’s runtime.properties:

  druid.router.managementProxy.enabled=true

Management proxy routing

The management proxy supports implicit and explicit routes. Implicit routes are those where the destination can be determined from the original request path based on Druid API path conventions. For the Coordinator the convention is /druid/coordinator/* and for the Overlord the convention is /druid/indexer/*. These are convenient because they mean that using the management proxy does not require modifying the API request other than issuing the request to the Router instead of the Coordinator or Overlord. Most Druid API requests can be routed implicitly.

Explicit routes are those where the request to the Router contains a path prefix indicating which process the request should be routed to. For the Coordinator this prefix is /proxy/coordinator and for the Overlord it is /proxy/overlord. This is required for API calls with an ambiguous destination. For example, the /status API is present on all Druid processes, so explicit routing needs to be used to indicate the proxy destination.

This is summarized in the table below:

Request Route          Destination   Rewritten Route         Example
/druid/coordinator/*   Coordinator   /druid/coordinator/*    router:8888/druid/coordinator/v1/datasources -> coordinator:8081/druid/coordinator/v1/datasources
/druid/indexer/*       Overlord      /druid/indexer/*        router:8888/druid/indexer/v1/task -> overlord:8090/druid/indexer/v1/task
/proxy/coordinator/*   Coordinator   /*                      router:8888/proxy/coordinator/status -> coordinator:8081/status
/proxy/overlord/*      Overlord      /*                      router:8888/proxy/overlord/druid/indexer/v1/isLeader -> overlord:8090/druid/indexer/v1/isLeader
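
For instance, assuming a Router reachable at router:8888 with the management proxy enabled, the two routing modes can be exercised with requests like the following (hostnames match the examples in the table above):

  # Implicit route: matched by the /druid/coordinator/* path convention and
  # forwarded to the active Coordinator unchanged
  curl http://router:8888/druid/coordinator/v1/datasources

  # Explicit route: the /proxy/overlord prefix is stripped, so the otherwise
  # ambiguous /status endpoint is served by the active Overlord
  curl http://router:8888/proxy/overlord/status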

Router strategies

The Router has a configurable list of strategies for how it selects which Brokers to route queries to. The order of the strategies matters because as soon as a strategy condition is matched, a Broker is selected.
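
Strategies are configured as an ordered JSON list through druid.router.strategies in the Router's runtime.properties; for example, the setting below tries timeBoundary first and then priority:

  druid.router.strategies=[{"type":"timeBoundary"},{"type":"priority"}]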

timeBoundary

  {
    "type": "timeBoundary"
  }

Including this strategy means all timeBoundary queries are always routed to the highest priority Broker.
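
For instance, a minimal native timeBoundary query like the one below (the datasource name is a placeholder) would always be sent to the highest priority Broker when this strategy is active:

  {
    "queryType": "timeBoundary",
    "dataSource": "example_datasource"
  }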

priority

  {
    "type": "priority",
    "minPriority": 0,
    "maxPriority": 1
  }

Queries with a priority set to less than minPriority are routed to the lowest priority Broker. Queries with priority set to greater than maxPriority are routed to the highest priority Broker. By default, minPriority is 0 and maxPriority is 1. Using these default values, if a query with priority 0 (the default query priority is 0) is sent, the query skips the priority selection logic.
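
As a sketch, a native query whose context sets a priority below the default minPriority of 0 would be routed to the lowest priority Broker; the datasource and interval below are placeholders:

  {
    "queryType": "timeseries",
    "dataSource": "example_datasource",
    "intervals": ["2024-01-01/2024-02-01"],
    "granularity": "all",
    "aggregations": [{ "type": "count", "name": "rows" }],
    "context": { "priority": -1 }
  }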

manual

This strategy reads the brokerService parameter from the query context and routes the query to that Broker service. If no valid brokerService is specified in the query context, the field defaultManualBrokerService is used to determine the target Broker service, provided its value is valid and non-null. A value is considered valid if it is present in druid.router.tierToBrokerMap. This strategy can route both native and SQL queries (when SQL routing is enabled).

Example: A strategy that routes queries to the Broker “druid:broker-hot” if no valid brokerService is found in the query context.

  {
    "type": "manual",
    "defaultManualBrokerService": "druid:broker-hot"
  }
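
With this strategy in place, a query can also pick its Broker explicitly through the query context. The SQL example below assumes SQL routing is enabled (see "Routing of SQL queries using strategies" below) and uses a placeholder datasource name:

  {
    "query": "SELECT COUNT(*) FROM example_datasource",
    "context": { "brokerService": "druid:broker-hot" }
  }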

JavaScript

Allows defining arbitrary routing rules using a JavaScript function. The function is passed the configuration and the query to be executed, and returns the tier it should be routed to, or null for the default tier.

Example: a function that sends queries containing three or more aggregators to the lowest priority Broker.

  {
    "type" : "javascript",
    "function" : "function (config, query) { if (query.getAggregatorSpecs && query.getAggregatorSpecs().size() >= 3) { var size = config.getTierToBrokerMap().values().size(); if (size > 0) { return config.getTierToBrokerMap().values().toArray()[size-1] } else { return config.getDefaultBrokerServiceName() } } else { return null } }"
  }

JavaScript-based functionality is disabled by default. Please refer to the Druid JavaScript programming guide for guidelines about using Druid’s JavaScript functionality, including instructions on how to enable it.
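
For reference, JavaScript functionality is typically enabled cluster-wide by setting the following in common.runtime.properties; see the linked guide for the security implications before doing so:

  druid.javascript.enabled=true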

Routing of SQL queries using strategies

To enable routing of SQL queries using strategies, set druid.router.sql.enable to true. The Broker service for a given SQL query is resolved using only the provided Router strategies. If none of the strategies resolves a Broker service, the Router uses the defaultBrokerServiceName. This behavior differs slightly from native queries, where the Router first tries to resolve the Broker service using strategies, then load rules, and finally falls back to the defaultBrokerServiceName if still unresolved. When druid.router.sql.enable is set to false (the default), the Router uses the defaultBrokerServiceName.
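
For example, to turn this on, add the following to the Router's runtime.properties:

  druid.router.sql.enable=true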

Setting druid.router.sql.enable does not affect either Avatica JDBC requests or native queries. Druid always routes native queries using the strategies and load rules as documented. Druid always routes Avatica JDBC requests based on connection ID.

Avatica query balancing

All Avatica JDBC requests with a given connection ID must be routed to the same Broker, since Druid Brokers do not share connection state with each other.

To accomplish this, Druid provides two built-in balancers that use rendezvous hashing and consistent hashing of a request’s connection ID respectively to assign requests to Brokers.

Note that when multiple Routers are used, all Routers should have identical balancer configuration to ensure that they make the same routing decisions.
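
For example, a JDBC client connecting through the Router (assuming it listens on the default port 8888) would use a URL like the one below; the Router hashes the connection ID to pick a Broker and keeps all requests for that connection pinned to it:

  jdbc:avatica:remote:url=http://router:8888/druid/v2/sql/avatica/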

Rendezvous hash balancer

This balancer uses Rendezvous Hashing on an Avatica request’s connection ID to assign the request to a Broker.

To use this balancer, specify the following property:

  druid.router.avatica.balancer.type=rendezvousHash

If no druid.router.avatica.balancer property is set, the Router defaults to using the rendezvous hash balancer.

Consistent hash balancer

This balancer uses Consistent Hashing on an Avatica request’s connection ID to assign the request to a Broker.

To use this balancer, specify the following property:

  druid.router.avatica.balancer.type=consistentHash

This is a non-default implementation that is provided for experimentation purposes. The consistent hasher has longer setup times on initialization and when the set of Brokers changes, but has a faster Broker assignment time than the rendezvous hasher when tested with 5 Brokers. Benchmarks for both implementations have been provided in ConsistentHasherBenchmark and RendezvousHasherBenchmark. The consistent hasher also requires locking, while the rendezvous hasher does not.

Example production configuration

In this example, we have two tiers in our production cluster: hot and _default_tier. Queries for the hot tier are routed through the broker-hot set of Brokers, and queries for the _default_tier are routed through the broker-cold set of Brokers. If any exceptions or network problems occur, queries are routed to the broker-cold set of brokers. In our example, we are running with a c3.2xlarge EC2 instance. We assume a common.runtime.properties already exists.

JVM settings:

  -server
  -Xmx13g
  -Xms13g
  -XX:NewSize=256m
  -XX:MaxNewSize=256m
  -XX:+UseConcMarkSweepGC
  -XX:+PrintGCDetails
  -XX:+PrintGCTimeStamps
  -XX:+UseLargePages
  -XX:+HeapDumpOnOutOfMemoryError
  -XX:HeapDumpPath=/mnt/galaxy/deploy/current/
  -Duser.timezone=UTC
  -Dfile.encoding=UTF-8
  -Djava.io.tmpdir=/mnt/tmp
  -Dcom.sun.management.jmxremote.port=17071
  -Dcom.sun.management.jmxremote.authenticate=false
  -Dcom.sun.management.jmxremote.ssl=false

Runtime.properties:

  druid.host=#{IP_ADDR}:8080
  druid.plaintextPort=8080
  druid.service=druid/router
  druid.router.defaultBrokerServiceName=druid:broker-cold
  druid.router.coordinatorServiceName=druid:coordinator
  druid.router.tierToBrokerMap={"hot":"druid:broker-hot","_default_tier":"druid:broker-cold"}
  druid.router.http.numConnections=50
  druid.router.http.readTimeout=PT5M
  # Number of threads used by the Router proxy http client
  druid.router.http.numMaxThreads=100
  druid.server.http.numThreads=100