Deployment with hdt

This document describes how to quickly start an HStreamDB cluster using the deployment tool hdt.

Prerequisites

  • Starting HStreamDB requires a Linux kernel version of at least 4.14. Check it with:

    ```shell
    uname -r
    ```
  • The local host needs to be able to connect to the remote servers via SSH.

  • Make sure each remote server has Docker installed.

  • Make sure the login user has sudo privileges, and configure passwordless sudo.

  • For nodes that will run HStore instances, mount the data disks under /mnt/data*/.

    • "*" matches incrementing numbers starting from zero.
    • Each disk should be mounted to its own directory. e.g. if there are two data disks /dev/vdb and /dev/vdc, then /dev/vdb should be mounted to /mnt/data0 and /dev/vdc to /mnt/data1.
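The kernel and disk prerequisites above can be combined into a small preflight check. This is a minimal sketch; the device names /dev/vdb and /dev/vdc are examples, substitute your own data disks:

```shell
#!/bin/sh
# Preflight sketch: check the kernel version and print the expected
# disk -> mount-point mapping. Device names below are examples only.
ver=$(uname -r)
major=${ver%%.*}
rest=${ver#*.}
minor=${rest%%.*}
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 14 ]; }; then
  echo "kernel $ver is recent enough (>= 4.14)"
else
  echo "kernel $ver is too old (need >= 4.14)"
fi

# Each data disk maps to /mnt/dataN, counting from zero.
i=0
for dev in /dev/vdb /dev/vdc; do
  echo "$dev should be mounted at /mnt/data$i"
  i=$((i + 1))
done
```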

Deploy hdt on the control machine

We’ll use the deployment tool hdt to help us set up the cluster. The binaries are available at: https://github.com/hstreamdb/deployment-tool/releases.

  1. Log in to the control machine and download the binaries.

  2. Generate configuration template with command:

    ```shell
    ./hdt init
    ```

    After running the init command, the current directory structure will look like:

    ```
    ├── hdt
    └── template
        ├── config.yaml
        ├── grafana
        │   ├── dashboards
        │   └── datasources
        ├── prometheus
        └── script
    ```

Update config.yaml

template/config.yaml contains the template for the configuration file. Refer to the description of the fields in the file and modify the template according to your actual needs.

As a simple example, we will deploy a cluster on 3 nodes, where each node runs an HServer instance, an HStore instance and a Meta-Store instance. For the hstream monitor stack, refer to the monitor components config.

The final configuration file may look like:

```yaml
global:
  user: "root"
  key_path: "~/.ssh/id_rsa"
  ssh_port: 22

hserver:
  - host: 172.24.47.175
  - host: 172.24.47.174
  - host: 172.24.47.173

hstore:
  - host: 172.24.47.175
    enable_admin: true
  - host: 172.24.47.174
  - host: 172.24.47.173

meta_store:
  - host: 172.24.47.175
  - host: 172.24.47.174
  - host: 172.24.47.173
```

Set up cluster

Set up the cluster with an SSH key pair:

```shell
./hdt start -c template/config.yaml -i ~/.ssh/id_rsa -u root
```

Set up the cluster with a password:

```shell
./hdt start -c template/config.yaml -p -u root
```

Then enter your password when prompted.

Use ./hdt start -h for more information.

Remove cluster

Removing the cluster will stop the cluster and remove ALL related data.

Remove the cluster with an SSH key pair:

```shell
./hdt remove -c template/config.yaml -i ~/.ssh/id_rsa -u root
```

Remove the cluster with a password:

```shell
./hdt remove -c template/config.yaml -p -u root
```

Then enter your password when prompted.

Detailed configuration items

This section describes in detail the meaning of each field in the configuration file. The configuration file is divided into three large sections: global configuration items, monitoring component configuration items and other component configuration items.

Global

```yaml
global:
  # # Username to login via SSH
  user: "root"
  # # The path of SSH identity file
  key_path: "~/.ssh/hstream-aliyun.pem"
  # # SSH service monitor port
  ssh_port: 22
  # # Replication factors of store metadata
  meta_replica: 3
  # # Local path to MetaStore config file
  meta_store_config_path: ""
  # # Local path to HStore config file
  hstore_config_path: ""
  # # The HStore config file can be loaded from a network filesystem; for example, the config
  # # file can be stored in the meta store and loaded via network request. Setting this option
  # # to true will force the store to load the config file from its local filesystem.
  disable_store_network_config_path: true
  # # Local path to HServer config file
  hserver_config_path: ""
  # # Global container configuration
  container_config:
    cpu_limit: 200
    memory_limit: 8G
    disable_restart: true
    remove_when_exit: true
```

The global section sets the default configuration values for all other configuration items. Here are some notes:

  • meta_replica sets the replication factor of HStreamDB metadata. This value should not exceed the number of hstore instances.
  • meta_store_config_path, hstore_config_path and hserver_config_path are the configuration file paths for meta_store, hstore and hserver on the control machine. If these paths are set, the configuration files will be synchronized to the specified location on the node where the respective instance is located, and the corresponding configuration items will be updated when the instance starts.
  • container_config lets you set resource limits for all containers.
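For example, setting hstore_config_path in the global section tells hdt to copy that file from the control machine to every hstore node. A minimal sketch; the path and file name are illustrative:

```yaml
global:
  # Path on the control machine (example); hdt syncs this file to each hstore node
  hstore_config_path: "/home/deploy/config/logdevice.conf"
```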

monitor

```yaml
monitor:
  # # Node exporter port
  node_exporter_port: 9100
  # # Node exporter image
  node_exporter_image: "prom/node-exporter"
  # # Cadvisor port
  cadvisor_port: 7000
  # # Cadvisor image
  cadvisor_image: "gcr.io/cadvisor/cadvisor:v0.39.3"
  # # List of nodes that won't be monitored.
  excluded_hosts: []
  # # Root directory for all monitor related config files.
  remote_config_path: "/home/deploy/monitor"
  # # Root directory for all monitor related data files.
  data_dir: "/home/deploy/data/monitor"
  # # Set up grafana without login
  grafana_disable_login: true
  # # Global container configuration for monitor stacks.
  container_config:
    cpu_limit: 200
    memory_limit: 8G
    disable_restart: true
    remove_when_exit: true
```

The monitor section sets configuration items related to cadvisor and node-exporter.

hserver

```yaml
hserver:
  # # The ip address of the HServer
  - host: 10.1.0.10
    # # HServer docker image
    image: "hstreamdb/hstream"
    # # HServer listen port
    port: 6570
    # # HServer internal port
    internal_port: 6571
    # # HServer configuration
    server_config:
      # # HServer log level, valid values: [critical|error|warning|notify|info|debug]
      server_log_level: info
      # # HStore log level, valid values: [critical|error|warning|notify|info|debug|spew]
      store_log_level: info
      # # Specific server compression algorithm, valid values: [none|lz4|lz4hc]
      compression: lz4
    # # Root directory of HServer config files
    remote_config_path: "/home/deploy/hserver"
    # # Root directory of HServer data files
    data_dir: "/home/deploy/data/hserver"
    # # HServer container configuration
    container_config:
      cpu_limit: 200
      memory_limit: 8G
      disable_restart: true
      remove_when_exit: true
```

The hserver section sets configuration items for hserver.

hstore

```yaml
hstore:
  - host: 10.1.0.10
    # # HStore docker image
    image: "hstreamdb/hstream"
    # # HStore admin port
    admin_port: 6440
    # # Root directory of HStore config files
    remote_config_path: "/home/deploy/hstore"
    # # Root directory of HStore data files
    data_dir: "/home/deploy/data/store"
    # # Total used disks
    disk: 1
    # # Total shards
    shards: 2
    # # The role of the HStore instance.
    role: "Both" # [Storage|Sequencer|Both]
    # # When enable_admin is turned on, the instance can receive and process admin requests
    enable_admin: true
    # # HStore container configuration
    container_config:
      cpu_limit: 200
      memory_limit: 8G
      disable_restart: true
      remove_when_exit: true
```

The hstore section sets configuration items for hstore.

  • admin_port: the HStore admin service will listen on this port.
  • disk and shards: set the total number of disks used and the total number of shards. For example, disk: 2 and shards: 4 mean the hstore will persist data on two disks, and each disk will contain 2 shards.
  • role: an HStore instance can act as a Storage node, a Sequencer, or both; the default is both.
  • enable_admin: run the HStore instance with an embedded admin server.
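The disk/shard arithmetic above can be sketched with a little shell arithmetic; the counts below are the example values from the text:

```shell
#!/bin/sh
# Sketch: with `disk` data disks and `shards` total shards,
# each disk ends up holding shards/disk shards.
disk=2
shards=4
per_disk=$((shards / disk))
echo "each of the $disk disks holds $per_disk shards"
```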

meta-store

```yaml
meta_store:
  - host: 10.1.0.10
    # # Meta-store docker image
    image: "zookeeper:3.6"
    # # Meta-store port, currently only works for rqlite. zk will
    # # monitor on 4001
    port: 4001
    # # Raft port used by rqlite
    raft_port: 4002
    # # Root directory of Meta-Store config files
    remote_config_path: "/home/deploy/metastore"
    # # Root directory of Meta-store data files
    data_dir: "/home/deploy/data/metastore"
    # # Meta-store container configuration
    container_config:
      cpu_limit: 200
      memory_limit: 8G
      disable_restart: true
      remove_when_exit: true
```

The meta-store section sets configuration items for the meta-store.

  • port and raft_port: these are used by rqlite.

monitor stack components

```yaml
http_server:
  - host: 10.1.0.15
    # # Http_server docker image
    image: "hstreamdb/http-server"
    # # Http_server service monitor port
    port: 8081
    # # Root directory of http_server config files
    remote_config_path: "/home/deploy/http-server"
    # # Root directory of http_server data files
    data_dir: "/home/deploy/data/http-server"
    container_config:
      cpu_limit: 200
      memory_limit: 8G
      disable_restart: true
      remove_when_exit: true

prometheus:
  - host: 10.1.0.15
    # # Prometheus docker image
    image: "prom/prometheus"
    # # Prometheus service monitor port
    port: 9090
    # # Root directory of Prometheus config files
    remote_config_path: "/home/deploy/prometheus"
    # # Root directory of Prometheus data files
    data_dir: "/home/deploy/data/prometheus"
    # # Prometheus container configuration
    container_config:
      cpu_limit: 200
      memory_limit: 8G
      disable_restart: true
      remove_when_exit: true

grafana:
  - host: 10.1.0.15
    # # Grafana docker image
    image: "grafana/grafana-oss:main"
    # # Grafana service monitor port
    port: 3000
    # # Root directory of Grafana config files
    remote_config_path: "/home/deploy/grafana"
    # # Root directory of Grafana data files
    data_dir: "/home/deploy/data/grafana"
    # # Grafana container configuration
    container_config:
      cpu_limit: 200
      memory_limit: 8G
      disable_restart: true
      remove_when_exit: true

alertmanager:
  # # The ip address of the Alertmanager Server.
  - host: 10.0.1.15
    # # Alertmanager docker image
    image: "prom/alertmanager"
    # # Alertmanager service monitor port
    port: 9093
    # # Root directory of Alertmanager config files
    remote_config_path: "/home/deploy/alertmanager"
    # # Root directory of Alertmanager data files
    data_dir: "/home/deploy/data/alertmanager"
    # # Alertmanager container configuration
    container_config:
      cpu_limit: 200
      memory_limit: 8G
      disable_restart: true
      remove_when_exit: true

hstream_exporter:
  - host: 10.1.0.15
    # # hstream_exporter docker image
    image: "hstreamdb/hstream-exporter"
    # # hstream_exporter service monitor port
    port: 9200
    # # Root directory of hstream_exporter config files
    remote_config_path: "/home/deploy/hstream-exporter"
    # # Root directory of hstream_exporter data files
    data_dir: "/home/deploy/data/hstream-exporter"
    container_config:
      cpu_limit: 200
      memory_limit: 8G
      disable_restart: true
      remove_when_exit: true
```

Currently, the HStreamDB monitor stack contains the following components: node-exporter, cadvisor, http-server, prometheus, grafana, alertmanager and hstream-exporter. The global configuration of the monitor stack is available in the monitor field.