MatrixOne Distributed Cluster Deployment

This document describes how to deploy a distributed MatrixOne database from scratch, based on a private Kubernetes cluster that separates compute and storage resources in a cloud-native manner.

Main Steps

  1. Deploy Kubernetes cluster
  2. Deploy object storage MinIO
  3. Create and connect MatrixOne cluster

Key Concepts

As this document involves many Kubernetes-related terms, we briefly explain the important ones here to help you follow the deployment process. For more detail on Kubernetes, see the Kubernetes Documentation.

  • Pod

A Pod is the smallest resource-management unit in Kubernetes and the smallest resource object for running containerized applications. A Pod represents a process (or a set of closely related processes) running in the cluster. In simple terms, a group of applications that together provide a specific function can be packaged as one Pod, containing one or more containers that cooperate to serve requests.

  • Storage Class

Storage Class, abbreviated as SC, describes the characteristics and performance of a class of storage resources. Based on an SC's description, you can understand the properties of the underlying storage and request storage that matches your application's requirements. Administrators define SCs to group storage resources into named categories, much like configuration profiles for storage devices.

  • PersistentVolume

PersistentVolume, abbreviated as PV, represents a piece of storage in the cluster as a resource, configured with key information such as storage capacity, access mode, reclaim policy, and backend storage type.

  • PersistentVolumeClaim

PersistentVolumeClaim, or PVC, is a user's request for storage resources, specifying information such as the requested storage size, access mode, PV selection criteria, and storage class.
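
To make these concepts concrete, here is a minimal, illustrative sketch (the names demo-pvc and demo-pod are hypothetical): a PVC requests storage from a StorageClass, and a Pod mounts the PVC as a volume. The matching PV is provisioned automatically by the StorageClass's provisioner.

```yaml
# Illustrative only: a PVC requesting 1Gi from the local-path StorageClass
# (installed later in this guide), and a Pod mounting that PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc
```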

1. Deploying a Kubernetes Cluster

As MatrixOne’s distributed deployment relies on a Kubernetes cluster, we need to have one in place. This article will guide you through setting up a Kubernetes cluster using Kuboard-Spray.

Preparing the Cluster Environment

To prepare the cluster environment, you need to do the following:

  • Have three VirtualBox virtual machines
  • Use Ubuntu 20.04 as the operating system (by default, it does not allow root account remote login, so you need to modify the configuration file for sshd in advance to enable remote login for root). Two machines will be used for deploying Kubernetes and other dependencies for MatrixOne, while the third will act as a jump host to set up the Kubernetes cluster.
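
The sshd change mentioned above can be scripted. The sketch below applies the edit to a temporary copy of the config so it is safe to run anywhere; on the real machines you would target /etc/ssh/sshd_config and then restart sshd:

```shell
# Enable root SSH login: flip PermitRootLogin to "yes".
# Demonstrated on a temporary copy; on a real host, edit /etc/ssh/sshd_config
# and then run: systemctl restart sshd
cfg=$(mktemp)
printf '#PermitRootLogin prohibit-password\nPasswordAuthentication yes\n' > "$cfg"
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' "$cfg"
grep '^PermitRootLogin' "$cfg"   # prints: PermitRootLogin yes
```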

The specific distribution of the machines is shown below:

host          IP             mem  cpu  disk  role
kuboardspray  192.168.56.9   2G   1C   50G   jump server
master0       192.168.56.10  4G   2C   50G   master, etcd
node0         192.168.56.11  4G   2C   50G   worker

Deploying Kuboard Spray on a Jump Server

Kuboard Spray is a tool used for visualizing the deployment of Kubernetes clusters. It uses Docker to quickly launch a web application that can visualize the deployment of a Kubernetes cluster. Once the Kubernetes cluster environment has been deployed, the Docker application can be stopped.

Preparing the Jump Server Environment

  • Installing Docker

Since Docker will be used, the environment must have Docker installed. Use the following command to install and start Docker on the jump server:

  sudo apt-get update && sudo apt-get install -y docker.io

Once the environment is prepared, Kuboard Spray can be deployed.

Deploying Kuboard Spray

Execute the following command to install Kuboard Spray:

  docker run -d \
    --privileged \
    --restart=unless-stopped \
    --name=kuboard-spray \
    -p 80:80/tcp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/kuboard-spray-data:/data \
    eipwork/kuboard-spray:v1.2.2-amd64

If the image pull fails due to network issues, use the backup address below:

  docker run -d \
    --privileged \
    --restart=unless-stopped \
    --name=kuboard-spray \
    -p 80:80/tcp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/kuboard-spray-data:/data \
    swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard-spray:latest-amd64

After executing the command, open the Kuboard-Spray web interface by entering http://192.168.56.9 (the jump server's IP address) in a web browser, then log in with the username admin and the default password Kuboard123, as shown below:

MatrixOne distributed cluster deployment - Figure 1

After logging in, the Kubernetes cluster deployment can be started.

Visual Deployment of Kubernetes Cluster

After logging into the Kuboard-Spray interface, you can begin visually deploying a Kubernetes cluster.

The installation interface first downloads the resource package for the corresponding Kubernetes version; the Kubernetes cluster itself is then installed offline from that package.

  1. Click Resource Package Management and select the appropriate version of the Kubernetes resource package to download:

    Download the spray-v2.19.0c_Kubernetes-v1.24.10_v2.9-amd64 version.

    MatrixOne distributed cluster deployment - Figure 2

  2. Click Import > Load Resource Package, select the appropriate download source, and wait for the resource package to finish downloading.

    MatrixOne distributed cluster deployment - Figure 3

  3. This will pull the related image dependencies:

    MatrixOne distributed cluster deployment - Figure 4

  4. After the image resource package is successfully pulled, return to the Kuboard-Spray web interface. You can see that the corresponding version of the resource package has been imported.

    MatrixOne distributed cluster deployment - Figure 5

Installing a Kubernetes Cluster

This chapter will guide you through the installation of a Kubernetes cluster.

  1. Select Cluster Management and choose Add Cluster Installation Plan:

    MatrixOne distributed cluster deployment - Figure 6

  2. In the pop-up dialog box, define the name of the cluster, select the version of the resource package that was just imported, and click OK, as shown in the following figure:

    MatrixOne distributed cluster deployment - Figure 7

Cluster Planning

Based on the roles defined above, the Kubernetes cluster is deployed as one master node (which also runs etcd) plus one worker node.

After defining the cluster name and selecting the resource package version, click OK, and then proceed to the cluster planning stage.

  1. Select the corresponding node roles and names:

    MatrixOne distributed cluster deployment - Figure 8

    • Master node: select the etcd and control-plane roles, and fill in the name master0.
    • Worker node: select only the worker role, and fill in the name node0.
  2. After filling in the roles and node names for each node, please fill in the corresponding connection information on the right, as shown in the following figure:

    MatrixOne distributed cluster deployment - Figure 9

  3. After filling in all the roles, click Save. You can now prepare to install the Kubernetes cluster.

Starting the Installation

After completing all roles and saving in the previous step, click Execute to start installing the Kubernetes cluster.

  1. Click OK as shown in the figure below to start installing the Kubernetes cluster:

    MatrixOne distributed cluster deployment - Figure 10

  2. During installation, Kuboard-Spray executes an Ansible script on each node to install the Kubernetes cluster. The overall installation time varies with machine configuration and network speed; it generally takes 5 to 10 minutes.

    Note: If an error occurs, check the logs to confirm whether the Kuboard-Spray version is mismatched; if it is, replace it with a compatible version.

  3. After the installation is complete, execute kubectl get node on the master node of the Kubernetes cluster:

    MatrixOne distributed cluster deployment - Figure 11

  4. The command result shown in the figure above indicates that the Kubernetes cluster has been successfully installed.
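
For reference, on a healthy cluster built as planned above, kubectl get node reports both nodes as Ready; the output would resemble the following (illustrative values):

```
NAME      STATUS   ROLES           AGE   VERSION
master0   Ready    control-plane   10m   v1.24.10
node0     Ready    <none>          10m   v1.24.10
```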

2. Deploying Helm

Installing the matrixone-operator depends on Helm, so Helm needs to be installed first.

Note: All operations in this section are performed on the master0 node.

  1. Download the Helm installation package:

     wget https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz

    If the download is slow due to network issues, you can download the latest binary installation package from the official website and upload it to the server.

  2. Extract and install:

     tar -zxf helm-v3.10.2-linux-amd64.tar.gz
     mv linux-amd64/helm /usr/local/bin/helm
  3. Verify the version to check if it is installed:

     [root@k8s01 home]# helm version
     version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.18.8"}

    The version information shown above indicates that the installation is complete.

3. CSI Deployment

CSI (Container Storage Interface) plugins provide storage services for Kubernetes; here, one backs the storage for MinIO and MatrixOne. This section guides you through using the local-path-provisioner plugin.

Note: All the commands in this section should be executed on the master0 node.

  1. Install CSI using the following command line:

     wget https://github.com/rancher/local-path-provisioner/archive/refs/tags/v0.0.23.zip
     unzip v0.0.23.zip
     cd local-path-provisioner-0.0.23/deploy/chart/local-path-provisioner
     helm install --set nodePathMap[0].paths[0]="/opt/local-path-provisioner",nodePathMap[0].node=DEFAULT_PATH_FOR_NON_LISTED_NODES --create-namespace --namespace local-path-storage local-path-storage ./
  2. After a successful installation, the command line should display as follows:

     root@master0:~# kubectl get pod -n local-path-storage
     NAME                                                        READY   STATUS    RESTARTS   AGE
     local-path-storage-local-path-provisioner-57bf67f7c-lcb88   1/1     Running   0          89s

    Note: After installation, this StorageClass provides storage under the /opt/local-path-provisioner directory on the worker node. You can modify this to another path.

  3. Set the default storageClass:

     kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  4. After setting the default, the command line should display as follows:

     root@master0:~# kubectl get storageclass
     NAME                   PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
     local-path (default)   cluster.local/local-path-storage-local-path-provisioner   Delete          WaitForFirstConsumer   true                   115s
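
The long --set flags in step 1 can alternatively be kept in a values file. This is a sketch assuming the chart's nodePathMap structure; pass it with helm install -f values.yaml in place of the --set flags:

```yaml
# values.yaml: equivalent of the --set flags used in step 1 (illustrative).
nodePathMap:
  - node: DEFAULT_PATH_FOR_NON_LISTED_NODES
    paths:
      - /opt/local-path-provisioner
```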

4. MinIO Deployment

MinIO is used to provide object storage for MatrixOne. This section will guide you through the deployment of a single-node MinIO.

Note: All the commands in this section should be executed on the master0 node.

Installation and Startup

  1. The command line for installing and starting MinIO is as follows:

     helm repo add minio https://charts.min.io/
     helm install --create-namespace --namespace mostorage --set resources.requests.memory=512Mi --set replicas=1 --set persistence.size=10G --set mode=standalone --set rootUser=rootuser,rootPassword=rootpass123 --set consoleService.type=NodePort minio minio/minio

    Note

    • --set resources.requests.memory=512Mi sets MinIO's minimum memory request
    • --set persistence.size=10G sets MinIO's storage size to 10 GB
    • --set rootUser=rootuser,rootPassword=rootpass123 sets the rootUser and rootPassword; these values are required later when creating the Kubernetes secret, so choose something you will remember.
  2. After a successful installation and start, the command line should display as follows:

    MatrixOne distributed cluster deployment - Figure 12

    Then, execute the following command to forward MinIO's port 9000 (replace <minio-pod-name> with the actual pod name shown by kubectl get pod -n mostorage):

     nohup kubectl port-forward --address 0.0.0.0 <minio-pod-name> -n mostorage 9000:9000 &
  3. After starting, you can log in to the MinIO console using the IP address of any machine in the Kubernetes cluster and port 32001. As shown in the following figure, the account and password are the rootUser and rootPassword set in the previous step, i.e., --set rootUser=rootuser,rootPassword=rootpass123:

    MatrixOne distributed cluster deployment - Figure 13

  4. After logging in, you need to create object storage related information:

    a. Fill in the Bucket Name with minio-mo under Bucket > Create Bucket. After filling it in, click the Create Bucket button at the bottom right.

    MatrixOne distributed cluster deployment - Figure 14

    b. In the current minio-mo bucket, click Choose or create a new path, and fill in the name test in the New Folder Path field. After filling it in, click Create to complete the creation.

    MatrixOne distributed cluster deployment - Figure 15
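
As with the CSI chart, the MinIO --set flags from the installation command above can be collected into a values file (a sketch based on those flags, not an exhaustive configuration):

```yaml
# values.yaml: equivalent of the --set flags passed to `helm install minio`.
mode: standalone
replicas: 1
rootUser: rootuser
rootPassword: rootpass123
persistence:
  size: 10G
resources:
  requests:
    memory: 512Mi
consoleService:
  type: NodePort
```

It would then be installed with: helm install --create-namespace --namespace mostorage -f values.yaml minio minio/minio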

5. Deploying a MatrixOne Cluster

This section will guide you through the process of deploying a MatrixOne cluster.

Note: All steps in this section are performed on the master0 node.

Installing the matrixone-operator

Use the following command to install the matrixone-operator:

  wget https://github.com/matrixorigin/matrixone-operator/releases/download/0.7.0-alpha.1/matrixone-operator-0.7.0-alpha.1.tgz
  tar -xvf matrixone-operator-0.7.0-alpha.1.tgz
  cd matrixone-operator/
  helm install --create-namespace --namespace mo-hn matrixone-operator ./ --dependency-update

After the installation is successful, use the following command to confirm again:

  root@master0:~# kubectl get pod -n mo-hn
  NAME                                  READY   STATUS    RESTARTS   AGE
  matrixone-operator-66b896bbdd-qdfrp   1/1     Running   0          2m28s

As shown above, the operator Pod is running normally.

Create a MatrixOne cluster

Customize the yaml file of the MatrixOne cluster; the example is as follows:

  1. Write the following mo.yaml file:

     apiVersion: core.matrixorigin.io/v1alpha1
     kind: MatrixOneCluster
     metadata:
       name: mo
       namespace: mo-hn
     spec:
       dn:
         config: |
           [dn.Txn.Storage]
           backend = "TAE"
           log-backend = "logservice"
           [dn.Ckp]
           flush-interval = "60s"
           min-count = 100
           scan-interval = "5s"
           incremental-interval = "60s"
           global-interval = "100000s"
           [log]
           level = "error"
           format = "json"
           max-size = 512
         replicas: 1
       logService:
         replicas: 3
         sharedStorage:
           s3:
             type: minio
             path: minio-mo
             endpoint: http://minio.mostorage:9000
             secretRef:
               name: minio
         pvcRetentionPolicy: Retain
         volume:
           size: 1Gi
         config: |
           [log]
           level = "error"
           format = "json"
           max-size = 512
       tp:
         serviceType: NodePort
         config: |
           [cn.Engine]
           type = "distributed-tae"
           [log]
           level = "debug"
           format = "json"
           max-size = 512
         replicas: 1
       version: nightly-556de418
       imageRepository: matrixorigin/matrixone
       imagePullPolicy: Always
  2. Create the Kubernetes Secret that grants MatrixOne access to MinIO:

     kubectl -n mo-hn create secret generic minio --from-literal=AWS_ACCESS_KEY_ID=rootuser --from-literal=AWS_SECRET_ACCESS_KEY=rootpass123

    The user name and password use the rootUser and rootPassword set when creating the MinIO cluster.

  3. Deploy the MatrixOne cluster using the following command line:

     kubectl apply -f mo.yaml
  4. Wait about 10 minutes. If a Pod restarts during this time, continue waiting; the deployment has succeeded once the output looks like the following:

     root@k8s-master0:~# kubectl get pods -n mo-hn
     NAME                                  READY   STATUS    RESTARTS      AGE
     matrixone-operator-66b896bbdd-qdfrp   1/1     Running   1 (99m ago)   10h
     mo-dn-0                               1/1     Running   0             46m
     mo-log-0                              1/1     Running   0             47m
     mo-log-1                              1/1     Running   0             47m
     mo-log-2                              1/1     Running   0             47m
     mo-tp-cn-0                            1/1     Running   1 (45m ago)   46m
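
Kubernetes stores the values of the secret created in step 2 base64-encoded. The sketch below shows the encoding round-trip; the kubectl inspection command is left as a comment because it requires the running cluster:

```shell
# Inspecting the secret on the cluster (requires the cluster, shown as a comment):
#   kubectl -n mo-hn get secret minio -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
# The base64 round-trip itself is runnable anywhere:
encoded=$(printf '%s' 'rootuser' | base64)
echo "$encoded"                      # prints: cm9vdHVzZXI=
printf '%s' "$encoded" | base64 -d   # prints: rootuser
```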

6. Connect to MatrixOne cluster

Since the Pod IP of the CN that provides external access is not a node IP, you need to map the port of the corresponding service to a reachable address. This chapter guides you through using kubectl port-forward to connect to the MatrixOne cluster.

  • Only allow local access:
    nohup kubectl port-forward -n mo-hn svc/mo-tp-cn 6001:6001 &
  • Allow access from a specific machine or all machines:
    nohup kubectl port-forward -n mo-hn --address 0.0.0.0 svc/mo-tp-cn 6001:6001 &

After choosing one of the two port-forwarding options above, use the MySQL client to connect to MatrixOne:

  mysql -h $(kubectl get svc/mo-tp-cn -n mo-hn -o jsonpath='{.spec.clusterIP}') -P 6001 -udump -p111
  mysql: [Warning] Using a password on the command line interface can be insecure.
  Welcome to the MySQL monitor.  Commands end with ; or \g.
  Your MySQL connection id is 1004
  Server version: 638358 MatrixOne
  Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
  Oracle is a registered trademark of Oracle Corporation and/or its
  affiliates. Other names may be trademarks of their respective
  owners.
  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  mysql>

Once the mysql> prompt appears, the distributed MatrixOne cluster has been set up and connected successfully.