Deploy YDB On-Premises

This document describes how to deploy a multi-tenant YDB cluster on multiple servers.

Before you start

Prerequisites

Make sure you have SSH access to all servers. This is necessary to install artifacts and run the YDB binary file.
Your network configuration must allow TCP connections on the following ports (by default):

  • 2135, 2136: gRPC for client-cluster interaction.
  • 19001, 19002: Interconnect for intra-cluster node interaction.
  • 8765, 8766: The HTTP interface for cluster monitoring.
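
If a host firewall is enabled on the servers, these ports must be opened. Below is a minimal sketch, assuming firewalld is used; adapt the commands to your firewall of choice:

     sudo firewall-cmd --permanent --add-port=2135-2136/tcp   # gRPC
     sudo firewall-cmd --permanent --add-port=19001-19002/tcp # Interconnect
     sudo firewall-cmd --permanent --add-port=8765-8766/tcp   # HTTP monitoring
     sudo firewall-cmd --reload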

Select the servers and disks to be used for data storage:

  • Use the block-4-2 fault tolerance model for cluster deployment in one availability zone (AZ). To survive the loss of 2 nodes, use at least 8 nodes.
  • Use the mirror-3-dc fault tolerance model for cluster deployment in three availability zones (AZ). To survive the loss of 1 AZ and 1 node in another AZ, use at least 9 nodes. The number of nodes in each AZ should be the same.

Run each static node on a separate server.

Create a system user and a group to run YDB under

On each server where YDB will be running, execute:

     sudo groupadd ydb
     sudo useradd ydb -g ydb


To make sure the YDB server has access to the block store disks, add the user that will run the YDB process to the disk group:

     sudo usermod -aG disk ydb
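
You can optionally verify that the user was created and added to both groups, for example:

     id ydb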


Prepare and format disks on each server

Warning

We don’t recommend using disks that are used by other processes (including the OS) for data storage.

  1. Create a partition on the selected disk

Alert

Be careful! The following step will delete all partitions on the specified disks.
Make sure that the disks you specify contain no other data!

     sudo parted /dev/nvme0n1 mklabel gpt -s
     sudo parted -a optimal /dev/nvme0n1 mkpart primary 0% 100%
     sudo parted /dev/nvme0n1 name 1 ydb_disk_01
     sudo partx --update /dev/nvme0n1


As a result, a disk labeled as /dev/disk/by-partlabel/ydb_disk_01 will appear in the system.

If you plan to use more than one disk on each server, specify a label that is unique for each of them instead of ydb_disk_01. You’ll need to use these disks later in the configuration files.
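
For example, a second disk could be prepared the same way with its own label (here /dev/nvme1n1 and ydb_disk_02 are hypothetical; substitute your own device name and label):

     sudo parted /dev/nvme1n1 mklabel gpt -s
     sudo parted -a optimal /dev/nvme1n1 mkpart primary 0% 100%
     sudo parted /dev/nvme1n1 name 1 ydb_disk_02
     sudo partx --update /dev/nvme1n1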

Download an archive with the ydbd executable file and the libraries necessary for working with YDB:

     curl https://binaries.ydb.tech/ydbd-main-linux-amd64.tar.gz | tar -xz


Create directories to run:

     sudo mkdir -p /opt/ydb
     sudo chown ydb:ydb /opt/ydb
     sudo mkdir /opt/ydb/bin
     sudo mkdir /opt/ydb/cfg
     sudo mkdir /opt/ydb/lib


  1. Copy the binary file and libraries to the appropriate directories:

     sudo cp -i ydbd-main-linux-amd64/bin/ydbd /opt/ydb/bin/
     sudo cp -i ydbd-main-linux-amd64/lib/libaio.so /opt/ydb/lib/
     sudo cp -i ydbd-main-linux-amd64/lib/libiconv.so /opt/ydb/lib/
     sudo cp -i ydbd-main-linux-amd64/lib/libidn.so /opt/ydb/lib/


  1. Format the disk with the built-in command:

     sudo LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib /opt/ydb/bin/ydbd admin bs disk obliterate /dev/disk/by-partlabel/ydb_disk_01


Perform this operation for each disk that will be used for data storage.
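
If all data disks on a server share the ydb_disk_ label prefix used above, a short loop can format them in one pass (a sketch under that naming assumption):

     for disk in /dev/disk/by-partlabel/ydb_disk_*; do
         sudo LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib /opt/ydb/bin/ydbd admin bs disk obliterate "$disk"
     done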

Prepare the configuration files:

Unprotected mode

In this mode, traffic between cluster nodes, and between the client and the cluster, uses unencrypted connections. Use this mode for testing purposes.

Prepare a configuration file for YDB

Download a sample config for the appropriate failure model of your cluster:

  • block-4-2: For a single-datacenter cluster.
  • mirror-3dc: For a cross-datacenter cluster consisting of 9 nodes.
  • mirror-3dc-3-nodes: For a cross-datacenter cluster consisting of 3 nodes.

  1. In the host_configs section, specify all disks and their types on each cluster node:

     host_configs:
     - drive:
       - path: /dev/disk/by-partlabel/ydb_disk_01
         type: SSD
       host_config_id: 1


     Possible disk types:

     • ROT (rotational): HDD.
     • SSD: SSD or NVMe.
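
     For example, a node with one NVMe drive and one HDD could be described as follows (the partition labels and host_config_id value here are hypothetical):

     host_configs:
     - drive:
       - path: /dev/disk/by-partlabel/ydb_disk_ssd_01
         type: SSD
       - path: /dev/disk/by-partlabel/ydb_disk_hdd_01
         type: ROT
       host_config_id: 2
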
  2. In the hosts section, specify the FQDN of each node, its host configuration, and its location (data_center and rack):

     hosts:
     - host: node1.ydb.tech
       host_config_id: 1
       walle_location:
         body: 1
         data_center: 'zone-a'
         rack: '1'
     - host: node2.ydb.tech
       host_config_id: 1
       walle_location:
         body: 2
         data_center: 'zone-b'
         rack: '1'
     - host: node3.ydb.tech
       host_config_id: 1
       walle_location:
         body: 3
         data_center: 'zone-c'
         rack: '1'


Save the YDB configuration file as /opt/ydb/cfg/config.yaml

Protected mode

In this mode, traffic between cluster nodes, and between the client and the cluster, is encrypted using the TLS protocol.

Create TLS certificates using OpenSSL

Note

You can use existing TLS certificates. It’s important that certificates support both server and client authentication (extendedKeyUsage = serverAuth,clientAuth).

Create a CA key

Create a directory named secure to store the CA key and one named certs for certificates and node keys:

     mkdir secure
     mkdir certs


Create a ca.cnf configuration file with the following content:

     [ ca ]
     default_ca = CA_default
     [ CA_default ]
     default_days = 365
     database = index.txt
     serial = serial.txt
     default_md = sha256
     copy_extensions = copy
     unique_subject = no
     [ req ]
     prompt=no
     distinguished_name = distinguished_name
     x509_extensions = extensions
     [ distinguished_name ]
     organizationName = YDB
     commonName = YDB CA
     [ extensions ]
     keyUsage = critical,digitalSignature,nonRepudiation,keyEncipherment,keyCertSign
     basicConstraints = critical,CA:true,pathlen:1
     [ signing_policy ]
     organizationName = supplied
     commonName = optional
     [ signing_node_req ]
     keyUsage = critical,digitalSignature,keyEncipherment
     extendedKeyUsage = serverAuth,clientAuth
     # Used to sign client certificates.
     [ signing_client_req ]
     keyUsage = critical,digitalSignature,keyEncipherment
     extendedKeyUsage = clientAuth


Create a CA key by running the command:

     openssl genrsa -out secure/ca.key 2048


Save this key separately; you'll need it to issue certificates. If it's lost, you'll have to reissue all certificates.

Create a private Certificate Authority (CA) certificate by running the command:

     openssl req -new -x509 -config ca.cnf -key secure/ca.key -out certs/ca.crt -days 365 -batch
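
The openssl ca command used later to sign node certificates reads the certificate database and serial number files referenced in ca.cnf (index.txt and serial.txt) from the current directory. If they don't exist yet in your environment, you can initialize them, for example:

     touch index.txt
     echo 01 > serial.txt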


Create keys and certificates for cluster nodes

Create a node.cnf configuration file with the following content:

     # OpenSSL node configuration file
     [ req ]
     prompt=no
     distinguished_name = distinguished_name
     req_extensions = extensions
     [ distinguished_name ]
     organizationName = YDB
     [ extensions ]
     subjectAltName = DNS:<node>.<domain>


Create a certificate key by running the command:

     openssl genrsa -out certs/node.key 2048


Create a Certificate Signing Request (CSR) by running the command:

     openssl req -new -sha256 -config node.cnf -key certs/node.key -out node.csr -batch


Create a node certificate with the following command:

     openssl ca -config ca.cnf -keyfile secure/ca.key -cert certs/ca.crt -policy signing_policy \
         -extensions signing_node_req -out certs/node.crt -outdir certs/ -in node.csr -batch


Create similar certificate-key pairs for each node.
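
As a sketch, the per-node steps above can be scripted; the loop below assumes three nodes named node1.ydb.tech, node2.ydb.tech, and node3.ydb.tech, and stores each node's files under its FQDN:

     for node in node1.ydb.tech node2.ydb.tech node3.ydb.tech; do
         # Per-node OpenSSL config with the node's FQDN in subjectAltName
         sed "s/<node>.<domain>/${node}/" node.cnf > "${node}.cnf"
         openssl genrsa -out "certs/${node}.key" 2048
         openssl req -new -sha256 -config "${node}.cnf" -key "certs/${node}.key" -out "${node}.csr" -batch
         openssl ca -config ca.cnf -keyfile secure/ca.key -cert certs/ca.crt -policy signing_policy \
             -extensions signing_node_req -out "certs/${node}.crt" -outdir certs/ -in "${node}.csr" -batch
     done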

Create directories for certificates on each node

     sudo mkdir -p /opt/ydb/certs
     sudo chown ydb:ydb /opt/ydb/certs
     sudo chmod 0750 /opt/ydb/certs


Copy the node certificates and keys

     sudo -u ydb cp certs/ca.crt certs/node.crt certs/node.key /opt/ydb/certs/
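
If the certificates were generated on a single administrative host, they can be delivered to each node over SSH; a sketch for one node, assuming the per-node file naming from the loop above and SSH access as the ydb user:

     scp certs/ca.crt             ydb@node1.ydb.tech:/opt/ydb/certs/ca.crt
     scp certs/node1.ydb.tech.crt ydb@node1.ydb.tech:/opt/ydb/certs/node.crt
     scp certs/node1.ydb.tech.key ydb@node1.ydb.tech:/opt/ydb/certs/node.key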


Prepare the YDB configuration file as described above for the unprotected mode (steps 1 and 2: the host_configs and hosts sections), and then:

  3. In the interconnect_config and grpc_config sections, specify the paths to the node certificate, node key, and CA certificate:

     interconnect_config:
         start_tcp: true
         encryption_mode: OPTIONAL
         path_to_certificate_file: "/opt/ydb/certs/node.crt"
         path_to_private_key_file: "/opt/ydb/certs/node.key"
         path_to_ca_file: "/opt/ydb/certs/ca.crt"

     grpc_config:
         cert: "/opt/ydb/certs/node.crt"
         key: "/opt/ydb/certs/node.key"
         ca: "/opt/ydb/certs/ca.crt"


Save the configuration file as /opt/ydb/cfg/config.yaml

Start static nodes

Manual

  1. Run YDB storage on each node:

     sudo su - ydb
     cd /opt/ydb
     export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib
     /opt/ydb/bin/ydbd server --log-level 3 --syslog --tcp --yaml-config /opt/ydb/cfg/config.yaml \
         --grpc-port 2135 --ic-port 19001 --mon-port 8765 --node static



Using systemd

  1. On each node, create a configuration file named /etc/systemd/system/ydbd-storage.service with the following content:

     [Unit]
     Description=YDB storage node
     After=network-online.target rc-local.service
     Wants=network-online.target
     StartLimitInterval=10
     StartLimitBurst=15

     [Service]
     Restart=always
     RestartSec=1
     User=ydb
     PermissionsStartOnly=true
     StandardOutput=syslog
     StandardError=syslog
     SyslogIdentifier=ydbd
     SyslogFacility=daemon
     SyslogLevel=err
     Environment=LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib
     ExecStart=/opt/ydb/bin/ydbd server --log-level 3 --syslog --tcp --yaml-config /opt/ydb/cfg/config.yaml --grpc-port 2135 --ic-port 19001 --mon-port 8765 --node static
     LimitNOFILE=65536
     LimitCORE=0
     LimitMEMLOCK=3221225472

     [Install]
     WantedBy=multi-user.target


  1. Run YDB storage on each node:

     sudo systemctl start ydbd-storage
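
You can optionally check that the service is running, for example:

     sudo systemctl status ydbd-storage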


Initialize a cluster

On one of the cluster nodes, run the command:

     LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib /opt/ydb/bin/ydbd admin blobstorage config init --yaml-file /opt/ydb/cfg/config.yaml ; echo $?


The command should complete with a return code of 0.

Create the first database

To work with tables, you need to create at least one database and run a process serving this database (a dynamic node).

     LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib /opt/ydb/bin/ydbd admin database /Root/testdb create ssd:1


Start the DB dynamic node

Manual

  1. Start the YDB dynamic node for the /Root/testdb database:

     sudo su - ydb
     cd /opt/ydb
     export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib
     /opt/ydb/bin/ydbd server --grpc-port 2136 --ic-port 19002 --mon-port 8766 --yaml-config /opt/ydb/cfg/config.yaml \
         --tenant /Root/testdb --node-broker <node1.domain>:2135 --node-broker <node2.domain>:2135 --node-broker <node3.domain>:2135

     Here <node1.domain>, <node2.domain>, and <node3.domain> are the FQDNs of the servers running the static nodes.


Run additional dynamic nodes on other servers to ensure database availability.

Using systemd

  1. Create a configuration file named /etc/systemd/system/ydbd-testdb.service with the following content:

     [Unit]
     Description=YDB testdb dynamic node
     After=network-online.target rc-local.service
     Wants=network-online.target
     StartLimitInterval=10
     StartLimitBurst=15

     [Service]
     Restart=always
     RestartSec=1
     User=ydb
     PermissionsStartOnly=true
     StandardOutput=syslog
     StandardError=syslog
     SyslogIdentifier=ydbd
     SyslogFacility=daemon
     SyslogLevel=err
     Environment=LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ydb/lib
     ExecStart=/opt/ydb/bin/ydbd server --grpc-port 2136 --ic-port 19002 --mon-port 8766 --yaml-config /opt/ydb/cfg/config.yaml --tenant /Root/testdb --node-broker <node1.domain>:2135 --node-broker <node2.domain>:2135 --node-broker <node3.domain>:2135
     LimitNOFILE=65536
     LimitCORE=0
     LimitMEMLOCK=32212254720

     [Install]
     WantedBy=multi-user.target


  1. Start the YDB dynamic node for the /Root/testdb database:

     sudo systemctl start ydbd-testdb


  1. Run additional dynamic nodes on other servers to ensure database availability.

Test the created database

  1. Install the YDB CLI as described in Installing the YDB CLI.
  2. Create a test_table:

     ydb -e grpc://<node1.domain>:2136 -d /Root/testdb scripting yql \
         --script 'CREATE TABLE `testdir/test_table` (id Uint64, title Utf8, PRIMARY KEY (id));'
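
To verify the table works end to end, you can write and read a row through the same CLI (the endpoint placeholder matches the command above; the values are arbitrary):

     ydb -e grpc://<node1.domain>:2136 -d /Root/testdb scripting yql \
         --script 'UPSERT INTO `testdir/test_table` (id, title) VALUES (1, CAST("example" AS Utf8)); SELECT * FROM `testdir/test_table`;'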


Where <node1.domain> is the FQDN of a server running one of the dynamic nodes that serve the /Root/testdb database.