Manual Deployment with Docker

This document describes how to run an HStreamDB cluster with Docker.

WARNING

This tutorial only demonstrates the main steps for starting an HStreamDB cluster with Docker. The parameters are not configured with security in mind, so do not use them as-is in a production deployment!

Set up a ZooKeeper ensemble

HServer and HStore require ZooKeeper to store metadata, so we need to set up a ZooKeeper ensemble first.

You can find tutorials online on how to build a proper ZooKeeper ensemble. As an example, here we quickly start a single-node ZooKeeper via Docker.

  docker run --rm -d --name zookeeper --network host zookeeper
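
To quickly check that ZooKeeper is up, you can list the root znode with zkCli.sh, which ships with the official zookeeper image (the same pattern as the zkCli commands used later in this tutorial):

  docker exec zookeeper zkCli.sh ls /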

Create data folders on storage nodes

Storage nodes store data in shards, and typically each shard maps to a different physical disk. Assume your data disk is mounted on /mnt/data0:

  # creates the root folder for data
  sudo mkdir -p /data/logdevice/
  # writes the number of shards that this box will have
  echo 1 | sudo tee /data/logdevice/NSHARDS
  # creates symlink for shard 0
  sudo ln -s /mnt/data0 /data/logdevice/shard0
  # adds the user for the logdevice daemon
  sudo useradd logdevice
  # changes ownership for the data directory and the disk
  sudo chown -R logdevice /data/logdevice/
  sudo chown -R logdevice /mnt/data0/
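
If a storage node has more than one data disk, the same layout extends to multiple shards. Below is a sketch assuming a second disk mounted at the hypothetical path /mnt/data1:

  # two shards: one symlink per disk, and NSHARDS updated to match
  echo 2 | sudo tee /data/logdevice/NSHARDS
  sudo ln -s /mnt/data1 /data/logdevice/shard1
  sudo chown -R logdevice /mnt/data1/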

Create a configuration file

Here is a minimal configuration file example. Before using it, please modify it to suit your situation.

  {
    "server_settings": {
      "enable-nodes-configuration-manager": "true",
      "use-nodes-configuration-manager-nodes-configuration": "true",
      "enable-node-self-registration": "true",
      "enable-cluster-maintenance-state-machine": "true"
    },
    "client_settings": {
      "enable-nodes-configuration-manager": "true",
      "use-nodes-configuration-manager-nodes-configuration": "true",
      "admin-client-capabilities": "true"
    },
    "cluster": "logdevice",
    "internal_logs": {
      "config_log_deltas": {
        "replicate_across": { "node": 3 }
      },
      "config_log_snapshots": {
        "replicate_across": { "node": 3 }
      },
      "event_log_deltas": {
        "replicate_across": { "node": 3 }
      },
      "event_log_snapshots": {
        "replicate_across": { "node": 3 }
      },
      "maintenance_log_deltas": {
        "replicate_across": { "node": 3 }
      },
      "maintenance_log_snapshots": {
        "replicate_across": { "node": 3 }
      }
    },
    "metadata_logs": {
      "nodeset": [],
      "replicate_across": { "node": 3 }
    },
    "zookeeper": {
      "zookeeper_uri": "ip://10.100.2.11:2181",
      "timeout": "30s"
    }
  }
  • If you have a multi-node ZooKeeper ensemble, use the list of ZooKeeper ensemble nodes and ports to modify zookeeper_uri in the zookeeper section:

    "zookeeper": {
      "zookeeper_uri": "ip://10.100.2.11:2181,10.100.2.12:2181,10.100.2.13:2181",
      "timeout": "30s"
    }
  • Detailed explanations of all the attributes can be found in the Cluster configuration docs.
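
A malformed file will keep nodes from starting, so it is worth validating the JSON before distributing it. A quick sketch, assuming jq is available on the host:

  # jq parses the file and exits non-zero on any syntax error
  jq . ~/logdevice.conf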

Store the configuration file

You can store the configuration file in ZooKeeper, or store it on each storage node.

Store configuration file in ZooKeeper

Suppose you have a configuration file on one of your ZooKeeper nodes at the path ~/logdevice.conf. Save the configuration file to ZooKeeper by running the following command.

  docker exec zookeeper zkCli.sh create /logdevice.conf "`cat ~/logdevice.conf`"

You can verify the create operation by running:

  docker exec zookeeper zkCli.sh get /logdevice.conf
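
If you change the configuration later, you can overwrite the stored copy with zkCli's set command, following the same pattern:

  docker exec zookeeper zkCli.sh set /logdevice.conf "`cat ~/logdevice.conf`"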

Set up HStore cluster

For the configuration file stored in ZooKeeper, assume that the value of the zookeeper_uri field in the configuration file is "ip://10.100.2.11:2181" and the path to the configuration file in ZooKeeper is /logdevice.conf.

For the configuration file stored on each node, assume that your file path is /data/logdevice/logdevice.conf.
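
For the per-node option, one way to place the file is to copy it over SSH. A sketch, assuming SSH access and a hypothetical host named store-node-1:

  scp ~/logdevice.conf root@store-node-1:/data/logdevice/logdevice.conf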

Start admin server on a single node

  • Configuration file stored in ZooKeeper:

    docker run --rm -d --name storeAdmin --network host -v /data/logdevice:/data/logdevice \
      hstreamdb/hstream:v0.11.0 /usr/local/bin/ld-admin-server \
      --config-path zk:10.100.2.11:2181/logdevice.conf \
      --enable-maintenance-manager \
      --maintenance-log-snapshotting \
      --enable-safety-check-periodic-metadata-update
    • If you have a multi-node ZooKeeper ensemble, replace the --config-path parameter with: --config-path zk:10.100.2.11:2181,10.100.2.12:2181,10.100.2.13:2181/logdevice.conf
  • Configuration file stored on each node:

    Replace the --config-path parameter with --config-path /data/logdevice/logdevice.conf
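
To confirm the admin server came up, a plain Docker check works (nothing HStore-specific here):

  docker ps --filter name=storeAdmin
  docker logs storeAdmin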

Start logdeviced on every node

  • Configuration file stored in ZooKeeper:

    docker run --rm -d --name hstore --network host -v /data/logdevice:/data/logdevice \
      hstreamdb/hstream:v0.11.0 /usr/local/bin/logdeviced \
      --config-path zk:10.100.2.11:2181/logdevice.conf \
      --name store-0 \
      --address 192.168.0.3 \
      --local-log-store-path /data/logdevice
    • For each node, update --name to a different value and --address to the host IP address of that node (see the sketch after this list).
  • Configuration file stored on each node:

    Replace the --config-path parameter with --config-path /data/logdevice/logdevice.conf
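
For example, on a second storage node whose host IP is 192.168.0.4 (a hypothetical address), the command above becomes:

  docker run --rm -d --name hstore --network host -v /data/logdevice:/data/logdevice \
    hstreamdb/hstream:v0.11.0 /usr/local/bin/logdeviced \
    --config-path zk:10.100.2.11:2181/logdevice.conf \
    --name store-1 \
    --address 192.168.0.4 \
    --local-log-store-path /data/logdevice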

Bootstrap the cluster

After starting the admin server and logdeviced on every storage node, we can now bootstrap our cluster.

On the admin server node, run:

  docker exec storeAdmin hadmin store nodes-config bootstrap --metadata-replicate-across 'node:3'

And you should see something like this:

  Successfully bootstrapped the cluster, new nodes configuration version: 7
  Took 0.019s

You can check the cluster status by running:

  docker exec storeAdmin hadmin store status

And the result should be:

  +----+---------+----------+-------+-----------+---------+---------------+
  | ID | NAME    | PACKAGE  | STATE | UPTIME    | SEQ.    | HEALTH STATUS |
  +----+---------+----------+-------+-----------+---------+---------------+
  | 0  | store-0 | 99.99.99 | ALIVE | 2 min ago | ENABLED | HEALTHY       |
  | 1  | store-2 | 99.99.99 | ALIVE | 2 min ago | ENABLED | HEALTHY       |
  | 2  | store-1 | 99.99.99 | ALIVE | 2 min ago | ENABLED | HEALTHY       |
  +----+---------+----------+-------+-----------+---------+---------------+
  Took 7.745s

Now we have finished setting up the HStore cluster.

Set up HServer cluster

To start a single HServer instance, modify the following start command to fit your situation:

  docker run -d --name hstream-server --network host \
    hstreamdb/hstream:v0.11.0 /usr/local/bin/hstream-server \
    --bind-address $SERVER_HOST \
    --advertised-address $SERVER_HOST \
    --seed-nodes $SERVER_HOST \
    --metastore-uri zk://$ZK_ADDRESS \
    --store-config zk:$ZK_ADDRESS/logdevice.conf \
    --store-admin-host $ADMIN_HOST \
    --server-id 1
  • $SERVER_HOST: the host IP address of your server node, e.g. 192.168.0.1
  • --metastore-uri: the address of HMeta; it currently supports zk://$ZK_ADDRESS for ZooKeeper and rq://$RQ_ADDRESS for rqlite (experimental)
  • $ZK_ADDRESS: your ZooKeeper ensemble address list, e.g. 10.100.2.11:2181,10.100.2.12:2181,10.100.2.13:2181
  • --store-config: the path to your HStore configuration file; should match the value of the --config-path parameter used when starting the HStore cluster
  • --store-admin-host: the IP address of the HStore admin server node
  • --server-id: a unique identifier for each server instance

You can start multiple server instances on different nodes in the same way, as sketched below.
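
For example, a second instance on another node might look like the following; 192.168.0.1 and 192.168.0.2 are hypothetical host addresses here, and every instance should be given the same --seed-nodes list:

  docker run -d --name hstream-server --network host \
    hstreamdb/hstream:v0.11.0 /usr/local/bin/hstream-server \
    --bind-address 192.168.0.2 \
    --advertised-address 192.168.0.2 \
    --seed-nodes 192.168.0.1,192.168.0.2 \
    --metastore-uri zk://$ZK_ADDRESS \
    --store-config zk:$ZK_ADDRESS/logdevice.conf \
    --store-admin-host $ADMIN_HOST \
    --server-id 2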