OpenTSDB Emitter

To use this Apache Druid extension, make sure to include `opentsdb-emitter` in the extensions load list.

Introduction

This extension emits Druid metrics to OpenTSDB over HTTP using the Jersey client. Note that this emitter sends only service metric events to OpenTSDB (see Druid metrics for the list of metrics).
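As a rough illustration (a sketch, not the emitter's actual source), OpenTSDB's HTTP API accepts JSON datapoints with `metric`, `timestamp`, `value`, and `tags` fields, and an emitter batches events into a list of such objects. The helper name and the dot-separated metric naming below are assumptions, following the `namespacePrefix` example in the configuration table:

```python
import json
import time

# Hypothetical sketch of the datapoint shape OpenTSDB's HTTP API accepts;
# an emitter would batch many of these into one JSON array per request.
def to_opentsdb_datapoint(metric, value, tags, namespace_prefix=None):
    # Optionally prepend a namespace, e.g. "query.time" -> "druid.query.time".
    name = f"{namespace_prefix}.{metric}" if namespace_prefix else metric
    return {
        "metric": name,
        "timestamp": int(time.time()),  # seconds since epoch
        "value": value,
        "tags": tags,  # dimensions, e.g. {"dataSource": "wikipedia"}
    }

batch = [to_opentsdb_datapoint("query.time", 120,
                               {"dataSource": "wikipedia", "type": "timeseries"},
                               namespace_prefix="druid")]
print(json.dumps(batch))
```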

Configuration

All the configuration parameters for the OpenTSDB emitter are under druid.emitter.opentsdb.

|property|description|required?|default|
|--------|-----------|---------|-------|
|`druid.emitter.opentsdb.host`|The host of the OpenTSDB server.|yes|none|
|`druid.emitter.opentsdb.port`|The port of the OpenTSDB server.|yes|none|
|`druid.emitter.opentsdb.connectionTimeout`|Jersey client connection timeout (in milliseconds).|no|2000|
|`druid.emitter.opentsdb.readTimeout`|Jersey client read timeout (in milliseconds).|no|2000|
|`druid.emitter.opentsdb.flushThreshold`|Queue flushing threshold. Events are sent as one batch.|no|100|
|`druid.emitter.opentsdb.maxQueueSize`|Maximum size of the queue used to buffer events.|no|1000|
|`druid.emitter.opentsdb.consumeDelay`|Queue consuming delay (in milliseconds). The emitter uses a `ScheduledExecutorService` to schedule event consumption, so `consumeDelay` is the delay between the end of one execution and the start of the next. If your Druid processes produce metric events quickly, decrease `consumeDelay` or increase `maxQueueSize`.|no|10000|
|`druid.emitter.opentsdb.metricMapPath`|JSON file defining the desired metrics and dimensions for every Druid metric.|no|./src/main/resources/defaultMetrics.json|
|`druid.emitter.opentsdb.namespacePrefix`|Optional (string) prefix for metric names. For example, with `namespacePrefix` set to `druid`, the default metric name `query.count` is emitted as `druid.query.count`.|no|null|
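For example, a minimal runtime configuration might look like the sketch below; the host and port values are placeholders (4242 is OpenTSDB's conventional default port), and the optional properties are shown only for illustration:

```
druid.emitter=opentsdb
druid.emitter.opentsdb.host=localhost
druid.emitter.opentsdb.port=4242
druid.emitter.opentsdb.namespacePrefix=druid
```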

Druid to OpenTSDB Event Converter

The OpenTSDB emitter sends only the desired metrics and dimensions, which are defined in a JSON file. If the user does not specify their own JSON file, a default file is used. All metrics are expected to be configured in the JSON file; metrics which are not configured will be logged. Desired metrics and dimensions are organized using the following schema: `<druid metric name> : [ <dimension list> ]`
e.g.

```json
"query/time": [
  "dataSource",
  "type"
]
```
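A full metric map file following this schema is a single JSON object keyed by Druid metric name. The entries below are illustrative, not the contents of the bundled defaultMetrics.json; an empty dimension list means the metric is emitted without tags:

```json
{
  "query/time": [
    "dataSource",
    "type"
  ],
  "query/bytes": [
    "dataSource"
  ],
  "jvm/gc/count": []
}
```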

For most use-cases, the default configuration is sufficient.