MatrixOne Distributed Cluster Deployment

This document describes how to deploy a distributed MatrixOne database from scratch on a private Kubernetes cluster, separating compute and storage resources in a cloud-native manner.

Main Steps

  1. Deploy Kubernetes cluster
  2. Deploy object storage MinIO
  3. Create and connect MatrixOne cluster

Key Concepts

As this document involves many Kubernetes-related terms, we provide brief explanations of the important ones to help you follow the deployment process. For more details about Kubernetes, see the Kubernetes Documentation.

  • Pod

A Pod is the smallest resource-management unit in Kubernetes and the smallest object for running containerized applications; it represents a process running in the cluster. In simple terms, a group of applications that together provide a specific function can be packaged as a Pod, which contains one or more containers that cooperate to serve requests.

  • Storage Class

A Storage Class, abbreviated as SC, describes the characteristics and performance of a kind of storage resource. Based on an SC's description, you can understand what a given type of storage offers and request storage that matches an application's requirements. Administrators define storage resources as categories, much like profiles describing the underlying storage devices.

  • CSI

Kubernetes provides the Container Storage Interface (CSI). Custom CSI plugins can be developed against this interface to support specific storage backends while keeping storage decoupled from Kubernetes itself.

  • PersistentVolume

A PersistentVolume, abbreviated as PV, represents a piece of storage as a resource; its definition includes key information such as storage capacity, access mode, storage type, reclaim policy, and the backing storage backend.

  • PersistentVolumeClaim

A PersistentVolumeClaim, or PVC, is a user's request for storage; it specifies information such as the requested storage size, access mode, PV selection criteria, and storage class (a minimal example appears after this list).

  • Service

Also called SVC, a Service exposes a group of Pods, selected by labels, as a service that can be accessed from outside. Each Service can be understood as a microservice.

  • Operator

A Kubernetes Operator is a way to package, deploy, and manage a Kubernetes application, that is, an application deployed on Kubernetes and managed using the Kubernetes API (Application Programming Interface) and the kubectl tool.
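
To make the storage concepts above concrete, the snippet below sketches a minimal PVC that requests storage from a Storage Class. The names and sizes are illustrative only; once a Pod consumes this claim, the CSI plugin behind the Storage Class provisions a PV and binds it to the PVC.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: demo-cache               # hypothetical claim name
  spec:
    accessModes:
      - ReadWriteOnce              # access mode: mounted read-write by a single node
    storageClassName: local-path   # the Storage Class asked to provision the PV
    resources:
      requests:
        storage: 1Gi               # requested capacity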

Deployment Architecture

Dependent components

MatrixOne distributed system depends on the following components:

  • Kubernetes: As the resource management platform for the entire MatrixOne cluster, components such as LogService, CN, and TN all run in Pods managed by Kubernetes. In the event of a failure, Kubernetes removes the failed Pod and starts a new one to replace it.

  • MinIO: Provides object storage services for the entire MatrixOne cluster; all MatrixOne data is stored in the object storage provided by MinIO.

Additionally, for container management and orchestration on Kubernetes, we need the following plugins:

  • Helm: Helm is a package management tool for managing Kubernetes applications, similar to APT for Ubuntu and YUM for CentOS. It is used to manage pre-configured installation package resources called Charts.

  • local-path-provisioner: A plugin that implements the CSI (Container Storage Interface) interface in Kubernetes. It is responsible for creating persistent volumes (PVs) for the MatrixOne component Pods and for MinIO, providing persistent data storage.

Overall structure

The overall deployment architecture is shown in the following figure:

MatrixOne distributed cluster deployment - Figure 1

The overall architecture consists of the following components:

  • The bottom layer is three server nodes: the first, host1, is the jump server used to install Kubernetes; the second is the Kubernetes master node (master); and the third is a Kubernetes worker node (node).

  • Above that sits the installed Kubernetes and Docker environment, which constitutes the cloud-native platform layer.

  • On top of that is a Kubernetes plugin layer managed through Helm, including the local-path-storage plugin implementing the CSI interface, MinIO, and the MatrixOne Operator.

  • The topmost layer is several Pods and Services generated by these component configurations.

Pod and storage architecture of MatrixOne

MatrixOne creates a series of Kubernetes objects according to the rules defined by the Operator; these objects are grouped by component into resource groups, namely CNSet, TNSet, and LogSet.

  • Service: Each resource group exposes its functionality externally through a Service. The Service owns the externally visible endpoint, so the service remains available even when a Pod crashes or is replaced. External applications connect to the port exposed by the Service, and the Service forwards connections to the corresponding Pods through its internal forwarding rules.

  • Pod: A containerized instance of MatrixOne components in which MatrixOne’s core kernel code runs.

  • PVC: Each Pod declares the storage resources it needs through a PVC (Persistent Volume Claim). In our architecture, CN and TN each request storage to use as a cache, and LogService requires corresponding S3 resources; these requirements are all declared through PVCs.

  • PV: A PV (Persistent Volume) is an abstract representation of the underlying storage medium and can be regarded as a storage unit. After a PVC is created, the software implementing the CSI interface creates a PV and binds it to that PVC, fulfilling the claimed resources.

MatrixOne distributed cluster deployment - Figure 2
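
Once the cluster described in this guide is running, these objects can be inspected with kubectl. A minimal sketch, assuming the MatrixOne namespace mo-hn that is created later in this document:

  # List the Services, Pods, and PVCs created for the MatrixOne cluster
  kubectl get svc,pod,pvc -n mo-hn
  # PVs are cluster-scoped rather than namespaced
  kubectl get pv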

1. Deploying a Kubernetes Cluster

As MatrixOne’s distributed deployment relies on a Kubernetes cluster, we need to have one in place. This article will guide you through setting up a Kubernetes cluster using Kuboard-Spray.

Preparing the Cluster Environment

To prepare the cluster environment, you need to do the following:

  • Have three virtual machines

  • Use CentOS 7.9 as the operating system (it allows root remote login by default). Two machines will be used to deploy Kubernetes and MatrixOne's other dependencies, while the third serves as the jump host for setting up the Kubernetes cluster.

  • Internet access: all three servers need to pull images from the Internet.

The specific distribution of the machines is shown below:

Host         | Intranet IP   | Extranet IP     | Memory | CPU | Disk | Role
-------------|---------------|-----------------|--------|-----|------|-------------
kuboardspray | 10.206.0.6    | 1.13.2.100      | 2G     | 2C  | 50G  | jump server
master0      | 10.206.134.8  | 118.195.255.252 | 8G     | 2C  | 50G  | master, etcd
node0        | 10.206.134.14 | 1.13.13.199     | 8G     | 2C  | 50G  | worker

Deploying Kuboard Spray on a Jump Server

Kuboard Spray is a tool used for visualizing the deployment of Kubernetes clusters. It uses Docker to quickly launch a web application that can visualize the deployment of a Kubernetes cluster. Once the Kubernetes cluster environment has been deployed, the Docker application can be stopped.

Preparing the Jump Server Environment
  1. Install Docker: A Docker environment is required. Install and start Docker on the jump server with the following commands:

    curl -sSL https://get.docker.io/ | sh
    # If your network access is restricted, you can use the following domestic mirror instead
    curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
  2. Start Docker:

    [root@VM-0-6-centos ~]# systemctl start docker
    [root@VM-0-6-centos ~]# systemctl status docker
    docker.service - Docker Application Container Engine
    Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
    Active: active (running) since Sun 2023-05-07 11:48:06 CST; 15s ago
    Docs: https://docs.docker.com
    Main PID: 5845 (dockerd)
    Tasks: 8
    Memory: 27.8M
    CGroup: /system.slice/docker.service
    └─5845 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    May 07 11:48:06 VM-0-6-centos systemd[1]: Starting Docker Application Container Engine...
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.391166236+08:00" level=info msg="Starting up"
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.421736631+08:00" level=info msg="Loading containers: start."
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.531022702+08:00" level=info msg="Loading containers: done."
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.544715135+08:00" level=info msg="Docker daemon" commit=94d3ad6 graphdriver=overlay2 version=23.0.5
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.544798391+08:00" level=info msg="Daemon has completed initialization"
    May 07 11:48:06 VM-0-6-centos systemd[1]: Started Docker Application Container Engine.
    May 07 11:48:06 VM-0-6-centos dockerd[5845]: time="2023-05-07T11:48:06.569274215+08:00" level=info msg="API listen on /run/docker.sock"

Once the environment is prepared, Kuboard Spray can be deployed.

Deploying Kuboard Spray

Execute the following command to install Kuboard Spray:

  docker run -d \
    --privileged \
    --restart=unless-stopped \
    --name=kuboard-spray \
    -p 80:80/tcp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/kuboard-spray-data:/data \
    eipwork/kuboard-spray:latest-amd64

If the image pull fails due to network issues, use the backup address below:

  docker run -d \
    --privileged \
    --restart=unless-stopped \
    --name=kuboard-spray \
    -p 80:80/tcp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/kuboard-spray-data:/data \
    swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard-spray:latest-amd64

After executing the command, open the Kuboard Spray web interface by entering http://1.13.2.100 (jump server IP address) in a web browser, then log in to the Kuboard Spray interface using the username admin and the default password Kuboard123, as shown below:

MatrixOne distributed cluster deployment - Figure 3

After logging in, the Kubernetes cluster deployment can be started.

Visual Deployment of Kubernetes Cluster

After logging into the Kuboard-Spray interface, you can begin visually deploying a Kubernetes cluster.

The installation interface first downloads the resource package corresponding to the desired Kubernetes version online; the cluster itself is then installed offline from that package.

  1. Click Resource Package Management and select the appropriate version of the Kubernetes resource package to download:

    Download the spray-v2.18.0b-2_k8s-v1.23.17_v1.24-amd64 version

    MatrixOne distributed cluster deployment - Figure 4

  2. Click Import > Load Resource Package, select the appropriate download source, and wait for the resource package to finish downloading.

    Note

    We recommend choosing Docker as the container engine for your K8s cluster. Once Docker is selected as the container engine for K8s, Kuboard-Spray will automatically utilize Docker to run various components of the K8s cluster, including containers on both Master and Worker nodes.

    MatrixOne distributed cluster deployment - Figure 5

  3. This will pull the related image dependencies:

    MatrixOne distributed cluster deployment - Figure 6

  4. After the image resource package is successfully pulled, return to the Kuboard-Spray web interface. You can see that the corresponding version of the resource package has been imported.

    MatrixOne distributed cluster deployment - Figure 7

Installing a Kubernetes Cluster

This chapter will guide you through the installation of a Kubernetes cluster.

  1. Select Cluster Management and choose Add Cluster Installation Plan:

    MatrixOne distributed cluster deployment - Figure 8

  2. In the pop-up dialog box, define the name of the cluster, select the version of the resource package that was just imported, and click OK, as shown in the following figure:

    MatrixOne distributed cluster deployment - Figure 9

Cluster Planning

Based on the predefined roles, the Kubernetes cluster is deployed with a pattern of 1 master + 1 worker + 1 etcd.

After defining the cluster name and selecting the resource package version, click OK, and then proceed to the cluster planning stage.

  1. Select the corresponding node roles and names:

    MatrixOne distributed cluster deployment - Figure 10

    • Master node: Select the etcd and control node and name it master0. (If you want the master node to participate in the work, you can select the worker node simultaneously. This method can improve resource utilization but will reduce the high availability of Kubernetes.)
    • Worker node: Select only the worker node and name it node0.
  2. After filling in the roles and node names for each node, please fill in the corresponding connection information on the right, as shown in the following figure:

    MatrixOne distributed cluster deployment - Figure 11

    MatrixOne distributed cluster deployment - Figure 12

  3. After filling in all the roles, click Save. You can now prepare to install the Kubernetes cluster.

Installing Kubernetes Cluster

After completing all roles and saving in the previous step, click Execute to start installing the Kubernetes cluster.

  1. Click OK as shown in the figure below to start installing the Kubernetes cluster:

    MatrixOne distributed cluster deployment - Figure 13

  2. When installing the Kubernetes cluster, the ansible script will be executed on the corresponding node to install the Kubernetes cluster. The overall installation time will vary depending on the machine configuration and network. Generally, it takes 5 to 10 minutes.

    Note: If an error occurs, you can check the log to confirm whether the version of Kuboard-Spray is mismatched. If the version is mismatched, please replace it with a suitable version.

  3. After the installation is complete, execute kubectl get node on the master node of the Kubernetes cluster:

    [root@master0 ~]# kubectl get node
    NAME      STATUS   ROLES                  AGE   VERSION
    master0   Ready    control-plane,master   52m   v1.23.17
    node0     Ready    <none>                 52m   v1.23.17
  4. The output above indicates that the Kubernetes cluster has been successfully installed.

  5. Adjust the DNS routing table on each node in the Kubernetes cluster. Execute the following command on every machine, look for a nameserver entry containing 169.254.25.10, and delete that record. (This record may affect communication efficiency between Pods; if it does not exist, no change is needed.) A command-line alternative is sketched at the end of this step.

    vim /etc/resolv.conf

    MatrixOne distributed cluster deployment - Figure 14
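
If you prefer not to edit the file by hand, the record can also be removed with a one-line command. This is a hedged sketch that assumes a standard resolver configuration; back up the file first:

  # Back up resolv.conf, then delete any nameserver line pointing to 169.254.25.10
  cp /etc/resolv.conf /etc/resolv.conf.bak
  sed -i '/^nameserver 169\.254\.25\.10/d' /etc/resolv.conf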

2. Deploying Helm

Helm is a package management tool for managing Kubernetes applications. Similar to APT for Ubuntu and YUM for CentOS, Helm provides a convenient way to install, upgrade, and manage Kubernetes applications. It simplifies the application deployment and management process using charts (preconfigured installation package resources).

Before installing MinIO, we need to install Helm first, because the MinIO installation depends on it. Here are the steps to install Helm:

Note: All operations in this section are performed on the master0 node.

  1. Download the Helm installation package:

    wget https://get.helm.sh/helm-v3.10.2-linux-amd64.tar.gz
    # If access to the external network is restricted, you can use the following domestic mirror instead
    wget https://mirrors.huaweicloud.com/helm/v3.10.2/helm-v3.10.2-linux-amd64.tar.gz
  2. Extract and install:

    tar -zxf helm-v3.10.2-linux-amd64.tar.gz
    mv linux-amd64/helm /usr/local/bin/helm
  3. Verify the version to check if it is installed:

    [root@k8s01 home]# helm version
    version.BuildInfo{Version:"v3.10.2", GitCommit:"50f003e5ee8704ec937a756c646870227d7c8b58", GitTreeState:"clean", GoVersion:"go1.18.8"}

    The version information shown above indicates that the installation is complete.
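
As an optional sanity check, you can already register the chart repository that the MinIO section below uses and search it; these are standard Helm commands:

  helm repo add minio https://charts.min.io/
  helm repo update
  helm search repo minio/minio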

3. CSI Deployment

A CSI plugin provides storage services for MinIO and MatrixOne in Kubernetes. This section guides you through installing and using the local-path-provisioner plugin.

Note: All the commands in this section should be executed on the master0 node.

  1. Install CSI using the following command line:

    wget https://github.com/rancher/local-path-provisioner/archive/refs/tags/v0.0.23.zip
    unzip v0.0.23.zip
    cd local-path-provisioner-0.0.23/deploy/chart/local-path-provisioner
    helm install --set nodePathMap[0].paths[0]="/opt/local-path-provisioner",nodePathMap[0].node=DEFAULT_PATH_FOR_NON_LISTED_NODES --create-namespace --namespace local-path-storage local-path-storage ./
  2. After a successful installation, the command line should display as follows:

    root@master0:~# kubectl get pod -n local-path-storage
    NAME READY STATUS RESTARTS AGE
    local-path-storage-local-path-provisioner-57bf67f7c-lcb88 1/1 Running 0 89s

    Note: After installation, this storageClass will provide storage services in the “/opt/local-path-provisioner” directory on the worker node. You can modify it to another path.

  3. Set the default storageClass:

    kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
  4. After setting the default, the command line should display as follows:

    root@master0:~# kubectl get storageclass
    NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
    local-path (default) cluster.local/local-path-storage-local-path-provisioner Delete WaitForFirstConsumer true 115s
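
Optionally, you can verify that the default storage class provisions volumes before moving on. Write the following throwaway manifest to a file such as pvc-test.yaml (the names and the busybox image are illustrative only); because the binding mode is WaitForFirstConsumer, the PVC becomes Bound only after the Pod that mounts it is scheduled:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: local-path-test
  spec:
    accessModes: ["ReadWriteOnce"]
    resources:
      requests:
        storage: 128Mi
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: local-path-test
  spec:
    containers:
      - name: test
        image: busybox
        command: ["sh", "-c", "echo ok > /data/ok && sleep 3600"]
        volumeMounts:
          - name: data
            mountPath: /data
    volumes:
      - name: data
        persistentVolumeClaim:
          claimName: local-path-test

Apply it, check that the PVC binds, and clean up afterwards:

  kubectl apply -f pvc-test.yaml
  kubectl get pvc local-path-test   # should change from Pending to Bound once the Pod is scheduled
  kubectl delete -f pvc-test.yaml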

4. MinIO Deployment

MinIO is used to provide object storage for MatrixOne. This section will guide you through the deployment of a single-node MinIO.

Note: All the commands in this section should be executed on the master0 node.

Installation and Startup

  1. The command line for installing and starting MinIO is as follows:

    helm repo add minio https://charts.min.io/
    mkdir minio_ins && cd minio_ins
    helm fetch minio/minio
    ls -lth
    tar -zxvf minio-5.0.9.tgz # The chart version may differ; use the version actually downloaded
    cd ./minio/
    kubectl create ns mostorage
    helm install minio \
      --namespace mostorage \
      --set resources.requests.memory=512Mi \
      --set replicas=1 \
      --set persistence.size=10G \
      --set mode=standalone \
      --set rootUser=rootuser,rootPassword=rootpass123 \
      --set consoleService.type=NodePort \
      --set image.repository=minio/minio \
      --set image.tag=latest \
      --set mcImage.repository=minio/mc \
      --set mcImage.tag=latest \
      -f values.yaml minio/minio

    Note

    • --set resources.requests.memory=512Mi sets the minimum memory consumption of MinIO
    • --set persistence.size=10G sets the storage size of MinIO to 10G
    • --set rootUser=rootuser,rootPassword=rootpass123 sets the rootUser and rootPassword, which are needed later when creating the Secret for the Kubernetes cluster, so choose values you will remember
    • If the command has to be re-run because of network or other issues, uninstall the release first:

      helm uninstall minio --namespace mostorage
  2. After a successful installation and start, the command line should display as follows:

    NAME: minio
    LAST DEPLOYED: Sun May 7 14:17:18 2023
    NAMESPACE: mostorage
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None
    NOTES:
    MinIO can be accessed via port 9000 on the following DNS name from within your cluster:
    minio.mostorage.svc.cluster.local
    To access MinIO from localhost, run the following commands:
    1. export POD_NAME=$(kubectl get pods --namespace mostorage -l "release=minio" -o jsonpath="{.items[0].metadata.name}")
    2. kubectl port-forward $POD_NAME 9000 --namespace mostorage
    Read more about port forwarding here: http://kubernetes.io/docs/user-guide/kubectl/kubectl_port-forward/
    You can now access MinIO server on http://localhost:9000. Follow the below steps to connect to MinIO server with mc client:
    1. Download the MinIO mc client - https://min.io/docs/minio/linux/reference/minio-mc.html#quickstart
    2. export MC_HOST_minio-local=http://$(kubectl get secret --namespace mostorage minio -o jsonpath="{.data.rootUser}" | base64 --decode):$(kubectl get secret --namespace mostorage minio -o jsonpath="{.data.rootPassword}" | base64 --decode)@localhost:9000
    3. mc ls minio-local

    So far, MinIO has been installed successfully. During the subsequent installation of MatrixOne, MatrixOne communicates with MinIO directly through the Kubernetes Service (SVC), so no additional configuration is needed.

    However, if you want to connect to MinIO from localhost, you can execute the following commands to set the POD_NAME variable and forward port 9000:

    export POD_NAME=$(kubectl get pods --namespace mostorage -l "release=minio" -o jsonpath="{.items[0].metadata.name}")
    nohup kubectl port-forward --address 0.0.0.0 $POD_NAME -n mostorage 9000:9000 &
  3. After startup, open http://118.195.255.252:32001/ (the master node's extranet IP and the NodePort assigned to the MinIO console) to log in to the MinIO console and create the object storage information. As shown in the figure below, the account and password are the rootUser and rootPassword set via --set rootUser=rootuser,rootPassword=rootpass123 in the steps above:

    MatrixOne distributed cluster deployment - Figure 15

  4. After logging in, you need to create object storage related information:

    Fill in the Bucket Name with minio-mo under Bucket > Create Bucket, then click the Create Bucket button at the bottom right (an mc command-line alternative is sketched after the figure).

    MatrixOne distributed cluster deployment - Figure 16
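
If you prefer the command line to the console, the bucket can also be created with the MinIO mc client. A minimal sketch, assuming the port-forward from the previous step is running on localhost:9000 and using the credentials set above:

  # Register the local endpoint under the alias minio-local, then create and list the bucket
  mc alias set minio-local http://localhost:9000 rootuser rootpass123
  mc mb minio-local/minio-mo
  mc ls minio-local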

5. Deploying a MatrixOne Cluster

This section will guide you through the process of deploying a MatrixOne cluster.

Note: All steps in this section are performed on the master0 node.

Installing the MatrixOne-Operator

MatrixOne Operator is a standalone software tool for deploying and managing MatrixOne clusters on Kubernetes. You can install the latest Operator Release installation package from the project’s Release List.

Follow the steps below to install MatrixOne Operator on master0. We will create a separate namespace, matrixone-operator for the Operator.

  1. Download the latest MatrixOne Operator installation package:

    wget https://github.com/matrixorigin/matrixone-operator/releases/download/chart-0.8.0-alpha.7/matrixone-operator-0.8.0-alpha.7.tgz
  2. Unzip the installation package:

    tar -xvf matrixone-operator-0.8.0-alpha.7.tgz
    cd /root/matrixone-operator/
  3. Define the namespace variable:

    1. NS="matrixone-operator"
  4. Use Helm to install MatrixOne Operator and create a namespace:

    helm install --create-namespace --namespace ${NS} matrixone-operator ./ --dependency-update
  5. After the installation is successful, use the following command to confirm the installation status:

    kubectl get pod -n matrixone-operator

    Ensure all pods have a running status in the above command output.

    [root@master0 matrixone-operator]# kubectl get pod -n matrixone-operator
    NAME READY STATUS RESTARTS AGE
    matrixone-operator-f8496ff5c-fp6zm 1/1 Running 0 3m26s

As shown in the output above, the Operator Pod is in a normal state.
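
Optionally, confirm that the Operator registered its custom resource definitions; this is a hedged check, as the exact CRD names can vary between Operator versions:

  kubectl get crd | grep matrixorigin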

Create a MatrixOne cluster

  1. First, create the namespace of MatrixOne:

    1. NS="mo-hn"
    2. kubectl create ns ${NS}
  2. Customize the yaml file of the MatrixOne cluster, and write the following mo.yaml file:

    apiVersion: core.matrixorigin.io/v1alpha1
    kind: MatrixOneCluster
    metadata:
      name: mo
      namespace: mo-hn
    spec:
      # 1. Configuration for tn
      tn:
        cacheVolume: # Disk cache for tn
          size: 5Gi # Modify according to actual disk size and requirements
          storageClassName: local-path # If not specified, the default storage class of the system will be used
        resources:
          requests:
            cpu: 100m # 1000m = 1c
            memory: 500Mi # 1024Mi = 1Gi
          limits: # Limits should not be lower than requests and should not exceed the capacity of a single node; usually limits and requests are set to be consistent
            cpu: 200m
            memory: 1Gi
        config: | # Configuration for tn
          [dn.Txn.Storage]
          backend = "TAE"
          log-backend = "logservice"
          [dn.Ckp]
          flush-interval = "60s"
          min-count = 100
          scan-interval = "5s"
          incremental-interval = "60s"
          global-interval = "100000s"
          [log]
          level = "error"
          format = "json"
          max-size = 512
        replicas: 1 # The number of TN replicas; cannot be modified. The current version only supports a setting of 1.
      # 2. Configuration for logservice
      logService:
        replicas: 3 # Number of logservice replicas
        resources:
          requests:
            cpu: 100m # 1000m = 1c
            memory: 500Mi # 1024Mi = 1Gi
          limits: # Limits should not be lower than requests and should not exceed the capacity of a single node; usually limits and requests are set to be consistent
            cpu: 200m
            memory: 1Gi
        sharedStorage: # Configuration for logservice to connect to S3 storage
          s3:
            type: minio # The type of S3 storage to connect to is MinIO
            path: minio-mo # The MinIO bucket used by MO, created earlier through the console or the mc command
            endpoint: http://minio.mostorage:9000 # The svc address and port of the MinIO service
            secretRef: # Configuration for accessing MinIO; the secret name is minio
              name: minio
        pvcRetentionPolicy: Retain # Lifecycle policy of the PVC after the cluster is destroyed; Retain means keep, Delete means delete
        volume:
          size: 1Gi # Configuration for the size of S3 object storage; modify according to actual disk size and requirements
        config: | # Configuration for logservice
          [log]
          level = "error"
          format = "json"
          max-size = 512
      # 3. Configuration for cn
      tp:
        cacheVolume: # Disk cache for cn
          size: 5Gi # Modify according to actual disk size and requirements
          storageClassName: local-path # If not specified, the default storage class of the system will be used
        resources:
          requests:
            cpu: 100m # 1000m = 1c
            memory: 500Mi # 1024Mi = 1Gi
          limits: # Limits should not be lower than requests and should not exceed the capacity of a single node; usually limits and requests are set to be consistent
            cpu: 200m
            memory: 2Gi
        serviceType: NodePort # cn needs to provide an external access entry, so its svc is set to NodePort
        nodePort: 31429 # NodePort port setting
        config: | # Configuration for cn
          [cn.Engine]
          type = "distributed-tae"
          [log]
          level = "debug"
          format = "json"
          max-size = 512
        replicas: 1
      version: nightly-54b5e8c # The version of the MO image; see https://hub.docker.com/r/matrixorigin/matrixone/tags. cn, tn, and logservice are normally packaged in the same image, so one field can specify the version for all of them; it can also be set separately in each section, but use a unified image version unless there are special circumstances.
      imageRepository: matrixorigin/matrixone # Image repository address. If the image was pulled locally and re-tagged, adjust this item accordingly.
      imagePullPolicy: IfNotPresent # Image pull policy, consistent with the values configurable in Kubernetes.
  3. Execute the following command to create the Secret used to access MinIO in the namespace mo-hn:

    kubectl -n mo-hn create secret generic minio --from-literal=AWS_ACCESS_KEY_ID=rootuser --from-literal=AWS_SECRET_ACCESS_KEY=rootpass123

    The username and password use the rootUser and rootPassword set when creating the MinIO cluster.

  4. Execute the following command to deploy the MatrixOne cluster:

    kubectl apply -f mo.yaml
  5. Please wait patiently for about 10 minutes. If a Pod restarts, keep waiting. The deployment has succeeded once you see the following output:

    [root@master0 mo]# kubectl get pods -n mo-hn
    NAME READY STATUS RESTARTS AGE
    mo-tn-0 1/1 Running 0 74s
    mo-log-0 1/1 Running 1 (25s ago) 2m2s
    mo-log-1 1/1 Running 1 (24s ago) 2m2s
    mo-log-2 1/1 Running 1 (22s ago) 2m2s
    mo-tp-cn-0 1/1 Running 0 50s
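
You can also list the Services the Operator created for the cluster; the CN Service should expose the NodePort configured in mo.yaml (31429 in this example):

  kubectl get svc -n mo-hn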

6. Connect to MatrixOne cluster

To connect to the MatrixOne cluster, you need to forward the port of the corresponding Service to a local port. Here is how to connect to the MatrixOne cluster using kubectl port-forward:

  • Allow local access only:

    nohup kubectl port-forward -nmo-hn svc/mo-tp-cn 6001:6001 &

  • Allow a specific machine or all machines to access:

    nohup kubectl port-forward -nmo-hn --address 0.0.0.0 svc/mo-tp-cn 6001:6001 &

After setting up either of the port-forwarding options above, you can use the MySQL client to connect to MatrixOne:

  # Connect to the MySQL server using the 'mysql' command line tool
  # The cluster IP of the Service can be obtained with: kubectl get svc/mo-tp-cn -n mo-hn -o jsonpath='{.spec.clusterIP}'
  # The '-h' parameter specifies the hostname or IP address of the MySQL service
  # The '-P' parameter specifies the port number of the MySQL service, here 6001
  # '-uroot' means log in as the root user
  # '-p111' means the initial password is 111
  mysql -h $(kubectl get svc/mo-tp-cn -n mo-hn -o jsonpath='{.spec.clusterIP}') -P 6001 -uroot -p111
  mysql: [Warning] Using a password on the command line interface can be insecure.
  Welcome to the MySQL monitor. Commands end with ; or \g.
  Your MySQL connection id is 163
  Server version: 8.0.30-MatrixOne-v1.1.0 MatrixOne
  Copyright (c) 2000, 2023, Oracle and/or its affiliates.
  Oracle is a registered trademark of Oracle Corporation and/or its
  affiliates. Other names may be trademarks of their respective
  owners.
  Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
  mysql>

Once the mysql> prompt appears, the distributed MatrixOne cluster has been set up and connected successfully.
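
As a quick smoke test, you can run a few SQL statements over the same connection; the database and table names below are purely illustrative:

  mysql -h $(kubectl get svc/mo-tp-cn -n mo-hn -o jsonpath='{.spec.clusterIP}') -P 6001 -uroot -p111 \
    -e "CREATE DATABASE IF NOT EXISTS smoke; CREATE TABLE smoke.t (id INT, msg VARCHAR(20)); INSERT INTO smoke.t VALUES (1, 'hello'); SELECT * FROM smoke.t;"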

Info

The login account in the above code snippet is the initial account; please change the initial password after logging in to MatrixOne; see Password Management.