Materialized View

To use this Apache Druid feature, make sure to load both the materialized-view-selection and materialized-view-maintenance extensions. In addition, this feature currently requires a Hadoop cluster.
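
For example, both extensions can be added to the extension load list in common.runtime.properties. This is an illustrative snippet only; a real deployment will typically load additional extensions alongside these two:

    druid.extensions.loadList=["materialized-view-selection", "materialized-view-maintenance"]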

This feature enables Druid to greatly improve query performance, especially when the query dataSource has a very large number of dimensions but the query only requires a few of them. The feature has two parts: materialized-view-maintenance and materialized-view-selection.

Materialized-view-maintenance

In materialized-view-maintenance, the dataSources that users ingest are called "base-dataSources". For each base-dataSource, we can submit derivativeDataSource supervisors to create and maintain other dataSources, called "derived-dataSources". The dimensions and metrics of a derived-dataSource are a subset of the base-dataSource's. The derivativeDataSource supervisor keeps the timeline of the derived-dataSource consistent with the base-dataSource, and each supervisor is responsible for one derived-dataSource.

A sample derivativeDataSource supervisor spec is shown below:

    {
      "type": "derivativeDataSource",
      "baseDataSource": "wikiticker",
      "dimensionsSpec": {
        "dimensions": [
          "isUnpatrolled",
          "metroCode",
          "namespace",
          "page",
          "regionIsoCode",
          "regionName",
          "user"
        ]
      },
      "metricsSpec": [
        {
          "name": "count",
          "type": "count"
        },
        {
          "name": "added",
          "type": "longSum",
          "fieldName": "added"
        }
      ],
      "tuningConfig": {
        "type": "hadoop"
      }
    }

Supervisor Configuration

Field | Description | Required
----- | ----------- | --------
type | The supervisor type. This should always be derivativeDataSource. | yes
baseDataSource | The name of the base dataSource. This dataSource's data must already be stored in Druid and is used as the input data. | yes
dimensionsSpec | Specifies the dimensions of the data. These dimensions must be a subset of the baseDataSource's dimensions. | yes
metricsSpec | A list of aggregators. These metrics must be a subset of the baseDataSource's metrics. See aggregations. | yes
tuningConfig | The tuningConfig must be a HadoopTuningConfig. See Hadoop tuning config. | yes
dataSource | The name of this derived dataSource. | no (default=baseDataSource-hashCode of supervisor)
hadoopDependencyCoordinates | A JSON array of Hadoop dependency coordinates that Druid will use; this property overrides the default Hadoop coordinates. Once specified, Druid looks for those Hadoop dependencies in the location specified by druid.extensions.hadoopDependenciesDir. | no
classpathPrefix | Classpath that will be prepended for the Peon process. | no
context | See below. | no

Context

Field | Description | Required
----- | ----------- | --------
maxTaskCount | The maximum number of tasks the supervisor can submit simultaneously. | no (default=1)
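
For illustration, the optional fields can be combined with the required ones as in the sketch below. The derived dataSource name wikiticker_top_pages and the maxTaskCount value of 2 are made-up examples rather than defaults, and the dimensions and metrics are a subset of the wikiticker spec shown earlier:

    {
      "type": "derivativeDataSource",
      "baseDataSource": "wikiticker",
      "dataSource": "wikiticker_top_pages",
      "dimensionsSpec": {
        "dimensions": [
          "page",
          "user"
        ]
      },
      "metricsSpec": [
        {
          "name": "added",
          "type": "longSum",
          "fieldName": "added"
        }
      ],
      "tuningConfig": {
        "type": "hadoop"
      },
      "context": {
        "maxTaskCount": 2
      }
    }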

Materialized-view-selection

In materialized-view-selection, we implement a new query type, view. When a view query is issued, Druid tries to optimize the query based on the query dataSource and intervals.

A sample view query spec is shown below:

    {
      "queryType": "view",
      "query": {
        "queryType": "groupBy",
        "dataSource": "wikiticker",
        "granularity": "all",
        "dimensions": [
          "user"
        ],
        "limitSpec": {
          "type": "default",
          "limit": 1,
          "columns": [
            {
              "dimension": "added",
              "direction": "descending",
              "dimensionOrder": "numeric"
            }
          ]
        },
        "aggregations": [
          {
            "type": "longSum",
            "name": "added",
            "fieldName": "added"
          }
        ],
        "intervals": [
          "2015-09-12/2015-09-13"
        ]
      }
    }

There are 2 parts in a view query:

Field | Description | Required
----- | ----------- | --------
queryType | The query type. This should always be view. | yes
query | The real query of this view query. It must be a groupBy, topN, or timeseries query. | yes
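
As an illustration of wrapping another supported query type, the following sketch shows a view query around a timeseries query on the same wikiticker dataSource; the granularity, aggregator, and interval are example values only:

    {
      "queryType": "view",
      "query": {
        "queryType": "timeseries",
        "dataSource": "wikiticker",
        "granularity": "hour",
        "aggregations": [
          {
            "type": "longSum",
            "name": "added",
            "fieldName": "added"
          }
        ],
        "intervals": [
          "2015-09-12/2015-09-13"
        ]
      }
    }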

Note that the Materialized View feature is currently designated as experimental. Please make sure the clock time of all processes is the same and increases monotonically. Otherwise, some unexpected errors may appear in query results.