Alibaba Cloud is the third-largest cloud infrastructure provider in the world. It provides its own object storage solution, OSS (Object Storage Service). This document describes how to use OSS as Druid deep storage.

Installation

Use the pull-deps tool shipped with Druid to install the aliyun-oss-extensions extension on Middle Manager and Historical nodes:

```shell
java -classpath "{YOUR_DRUID_DIR}/lib/*" org.apache.druid.cli.Main tools pull-deps -c org.apache.druid.extensions.contrib:aliyun-oss-extensions:{YOUR_DRUID_VERSION}
```

Enabling

After installation, add the aliyun-oss-extensions extension to druid.extensions.loadList in common.runtime.properties, then restart the Middle Manager and Historical nodes.
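For example, the entry in common.runtime.properties might look like this (the other extension name shown is a placeholder for whatever your cluster already loads):

```properties
druid.extensions.loadList=["druid-hdfs-storage", "aliyun-oss-extensions"]
```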

Configuration

First, add the following OSS configurations to common.runtime.properties:

|Property|Description|Required|
|--------|-----------|--------|
|`druid.oss.accessKey`|The AccessKey ID of the account used to access the OSS bucket|yes|
|`druid.oss.secretKey`|The AccessKey Secret of the account used to access the OSS bucket|yes|
|`druid.oss.endpoint`|The endpoint URL of your OSS storage. If your Druid cluster is hosted on Alibaba Cloud in the same region as your OSS bucket, it is recommended to use the internal network endpoint URL, so that any inbound and outbound traffic to the OSS bucket is free of charge.|yes|

To use OSS as deep storage, add the following configurations:

|Property|Description|Required|
|--------|-----------|--------|
|`druid.storage.type`|Global deep storage provider. Must be set to `oss` to make use of this extension.|yes|
|`druid.storage.oss.bucket`|Storage bucket name.|yes|
|`druid.storage.oss.prefix`|Folder where segments will be published. `druid/segments` is recommended.|no|

If you use OSS as deep storage for segment files, it is recommended to store indexing task logs in OSS as well. To do so, add the following configurations:

|Property|Description|Required|
|--------|-----------|--------|
|`druid.indexer.logs.type`|Where to store task logs. Must be set to `oss` to make use of this extension.|yes|
|`druid.indexer.logs.oss.bucket`|The bucket used to keep logs. It can be the same as `druid.storage.oss.bucket`.|yes|
|`druid.indexer.logs.oss.prefix`|Folder where log files will be published. `druid/logs` is recommended.|no|
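Putting the pieces together, a minimal sketch of the OSS-related part of common.runtime.properties might look like the following. The bucket name, credentials, and endpoint are placeholders; the endpoint shown is the internal endpoint format for the Hangzhou region, chosen here only as an example.

```properties
# OSS credentials and endpoint (placeholders)
druid.oss.accessKey=YOUR_ACCESS_KEY
druid.oss.secretKey=YOUR_SECRET_KEY
druid.oss.endpoint=oss-cn-hangzhou-internal.aliyuncs.com

# Deep storage for segments
druid.storage.type=oss
druid.storage.oss.bucket=YOUR_BUCKET_NAME
druid.storage.oss.prefix=druid/segments

# Task logs
druid.indexer.logs.type=oss
druid.indexer.logs.oss.bucket=YOUR_BUCKET_NAME
druid.indexer.logs.oss.prefix=druid/logs
```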

Reading data from OSS

Currently, the web console does not support ingestion from OSS, but you can read data from OSS by submitting an ingestion task whose spec uses the OSS input source.

The configuration options of the OSS input source are described below.

OSS Input Source

|Property|Description|Required|
|--------|-----------|--------|
|`type`|This should be `oss`.|yes|
|`uris`|JSON array of URIs where the OSS objects to be ingested are located, for example `oss://{your_bucket}/{source_file_path}`.|One of `uris`, `prefixes`, or `objects` must be set|
|`prefixes`|JSON array of URI prefixes for the locations of OSS objects to be ingested. Empty objects starting with one of the given prefixes will be skipped.|One of `uris`, `prefixes`, or `objects` must be set|
|`objects`|JSON array of OSS objects to be ingested.|One of `uris`, `prefixes`, or `objects` must be set|
|`properties`|Properties object for overriding the default OSS configuration. See below for more information.|no (defaults will be used if not given)|

OSS Object

|Property|Description|Default|Required|
|--------|-----------|-------|--------|
|`bucket`|Name of the OSS bucket|None|yes|
|`path`|The path where data is located|None|yes|

Properties Object

|Property|Description|Default|Required|
|--------|-----------|-------|--------|
|`accessKey`|The Password Provider or plain text string of this OSS input source's access key|None|yes|
|`secretKey`|The Password Provider or plain text string of this OSS input source's secret key|None|yes|
|`endpoint`|The endpoint of this OSS input source|None|no|

Reading from a file

Suppose the file rollup-data.json, found under Druid's quickstart/tutorial directory, has been uploaded to a folder named druid in the OSS bucket that Druid is configured to use. In this case, the uris property of the OSS input source can be used to read it, as shown:

```json
{
  "type" : "index_parallel",
  "spec" : {
    "dataSchema" : {
      "dataSource" : "rollup-tutorial-from-oss",
      "timestampSpec": {
        "column": "timestamp",
        "format": "iso"
      },
      "dimensionsSpec" : {
        "dimensions" : [
          "srcIP",
          "dstIP"
        ]
      },
      "metricsSpec" : [
        { "type" : "count", "name" : "count" },
        { "type" : "longSum", "name" : "packets", "fieldName" : "packets" },
        { "type" : "longSum", "name" : "bytes", "fieldName" : "bytes" }
      ],
      "granularitySpec" : {
        "type" : "uniform",
        "segmentGranularity" : "week",
        "queryGranularity" : "minute",
        "intervals" : ["2018-01-01/2018-01-03"],
        "rollup" : true
      }
    },
    "ioConfig" : {
      "type" : "index_parallel",
      "inputSource" : {
        "type" : "oss",
        "uris" : [
          "oss://{YOUR_BUCKET_NAME}/druid/rollup-data.json"
        ]
      },
      "inputFormat" : {
        "type" : "json"
      },
      "appendToExisting" : false
    },
    "tuningConfig" : {
      "type" : "index_parallel",
      "maxRowsPerSegment" : 5000000,
      "maxRowsInMemory" : 25000
    }
  }
}
```

Post the above ingestion task spec to http://{YOUR_ROUTER_IP}:8888/druid/indexer/v1/task, and the indexing service will create an ingestion task to ingest the data.
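Assuming the spec above has been saved to a local file named rollup-oss-spec.json (a name chosen for this example), it can be submitted with curl:

```shell
# Submit the ingestion task spec to the Router's task endpoint
curl -X POST -H 'Content-Type: application/json' \
  -d @rollup-oss-spec.json \
  http://{YOUR_ROUTER_IP}:8888/druid/indexer/v1/task
```

The response contains the ID of the created task, which you can use to track its status in the web console or via the task API.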

Reading files in folders

To read all the files in a folder, use the prefixes property to specify the folder names where Druid should look for input files, instead of listing file URIs one by one.

```json
...
"ioConfig" : {
  "type" : "index_parallel",
  "inputSource" : {
    "type" : "oss",
    "prefixes" : [
      "oss://{YOUR_BUCKET_NAME}/2020", "oss://{YOUR_BUCKET_NAME}/2021"
    ]
  },
  "inputFormat" : {
    "type" : "json"
  },
  "appendToExisting" : false
}
...
```

The spec above tells the ingestion task to read all files under the 2020 and 2021 folders.

Reading from other buckets

To read files from buckets other than the bucket Druid is configured to use, set the objects property of the OSS input source in the task spec, as below:

```json
...
"ioConfig" : {
  "type" : "index_parallel",
  "inputSource" : {
    "type" : "oss",
    "objects" : [
      {"bucket": "YOUR_BUCKET_NAME", "path": "druid/rollup-data.json"}
    ]
  },
  "inputFormat" : {
    "type" : "json"
  },
  "appendToExisting" : false
}
...
```

Reading with customized accessKey

If the default druid.oss.accessKey cannot access a bucket, use the properties object to supply credentials for that bucket, as below:

```json
...
"ioConfig" : {
  "type" : "index_parallel",
  "inputSource" : {
    "type" : "oss",
    "objects" : [
      {"bucket": "YOUR_BUCKET_NAME", "path": "druid/rollup-data.json"}
    ],
    "properties": {
      "endpoint": "YOUR_ENDPOINT_OF_BUCKET",
      "accessKey": "YOUR_ACCESS_KEY",
      "secretKey": "YOUR_SECRET_KEY"
    }
  },
  "inputFormat" : {
    "type" : "json"
  },
  "appendToExisting" : false
}
...
```

The properties object can be combined with any of the uris, prefixes, or objects properties described above.

Troubleshooting

When using OSS as deep storage or reading from OSS, most problems users encounter are related to OSS permissions. Refer to the official OSS permission troubleshooting documentation for solutions.