Deploy TiDB on GCP GKE

This document describes how to deploy a GCP Google Kubernetes Engine (GKE) cluster and deploy a TiDB cluster on GCP GKE.

To deploy TiDB Operator and the TiDB cluster in a self-managed Kubernetes environment, refer to Deploy TiDB Operator and Deploy TiDB in General Kubernetes.

Prerequisites

Before deploying a TiDB cluster on GCP GKE, make sure the following requirements are satisfied:

  • Install Helm 3: used for deploying TiDB Operator.

  • Install gcloud: a command-line tool used for creating and managing GCP services.

  • Complete the operations in the Before you begin section of GKE Quickstart.

    This guide includes the following steps:

    • Enable Kubernetes APIs
    • Configure sufficient quota
  • Instance types: for better performance, the following machine types are recommended:
    • PD nodes: n2-standard-4
    • TiDB nodes: n2-standard-8
    • TiKV or TiFlash nodes: n2-highmem-8
  • Storage: for TiKV or TiFlash, it is recommended to use the pd-ssd disk type.
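
As a quick sanity check before you continue, you can verify that the command-line tools are available. This is only a sketch; kubectl, which is used throughout this document, can also be installed as a gcloud component if it is not already present:

    helm version
    gcloud version
    kubectl version --client
    # If kubectl is missing, you can install it as a gcloud component:
    # gcloud components install kubectl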

Configure the GCP service

Configure your GCP project and default region:

    gcloud config set core/project <gcp-project>
    gcloud config set compute/region <gcp-region>
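
To confirm that the settings took effect, you can print the active gcloud configuration; the project and region set above should appear under the core and compute sections:

    gcloud config list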

Create a GKE cluster and node pool

  1. Create a GKE cluster and a default node pool:

      gcloud container clusters create tidb --region us-east1 --machine-type n1-standard-4 --num-nodes=1
    • The command above creates a regional cluster.
    • The --num-nodes=1 option indicates that one node is created in each zone. So if there are three zones in the region, there are three nodes in total, which ensures high availability.
    • It is recommended to use regional clusters in production environments. For other types of clusters, refer to Types of GKE clusters.
    • The command above creates a cluster in the default network. If you want to specify a network, use the --network and --subnetwork options. For more information, refer to Creating a regional cluster.
  2. Create separate node pools for PD, TiKV, and TiDB:

      gcloud container node-pools create pd --cluster tidb --machine-type n2-standard-4 --num-nodes=1 \
          --node-labels=dedicated=pd --node-taints=dedicated=pd:NoSchedule
      gcloud container node-pools create tikv --cluster tidb --machine-type n2-highmem-8 --num-nodes=1 \
          --node-labels=dedicated=tikv --node-taints=dedicated=tikv:NoSchedule
      gcloud container node-pools create tidb --cluster tidb --machine-type n2-standard-8 --num-nodes=1 \
          --node-labels=dedicated=tidb --node-taints=dedicated=tidb:NoSchedule

    The process might take a few minutes.
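
The kubectl commands in the rest of this document assume that your kubeconfig points to the new cluster. If it does not, you can fetch the credentials and confirm that the dedicated node labels are in place (a sketch, assuming the cluster name and region used above):

    gcloud container clusters get-credentials tidb --region us-east1
    # List the nodes together with their "dedicated" label to confirm the node pools:
    kubectl get nodes -L dedicated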

Configure StorageClass

After the GKE cluster is created, the cluster contains three StorageClasses of different disk types.

  • standard: pd-standard disk type (default)
  • standard-rwo: pd-balanced disk type
  • premium-rwo: pd-ssd disk type (recommended)

To improve I/O write performance, it is recommended to configure nodelalloc and noatime in the mountOptions field of the StorageClass resource. For details, see TiDB Environment and System Configuration Check.

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    # ...
    mountOptions:
      - nodelalloc,noatime

Note

Configuring nodelalloc and noatime is not supported for the default disk type pd-standard.
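
For example, the following is a minimal sketch of a custom StorageClass based on the pd-ssd disk type with the recommended mount options. The name pd-custom is only an example, and the provisioner and parameters assume the GKE persistent disk CSI driver; save it to a file and apply it with kubectl apply -f:

    # Example custom StorageClass (the name and parameters are illustrative).
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: pd-custom
    provisioner: pd.csi.storage.gke.io
    parameters:
      type: pd-ssd
    volumeBindingMode: WaitForFirstConsumer
    allowVolumeExpansion: true
    mountOptions:
      - nodelalloc,noatime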

Use local storage

For the production environment, use zonal persistent disks.

If you need to simulate bare-metal performance, some GCP instance types provide additional local SSD volumes. You can choose such instances for the TiKV node pool to achieve higher IOPS and lower latency.

Note

You cannot dynamically change StorageClass for a running TiDB cluster. For testing purposes, create a new TiDB cluster with the desired StorageClass.

GKE upgrade might cause node reconstruction. In such cases, data in the local storage might be lost. To avoid data loss, you need to back up TiKV data before node reconstruction. It is thus not recommended to use local disks in the production environment.

  1. Create a node pool with local storage for TiKV:

      gcloud container node-pools create tikv --cluster tidb --machine-type n2-highmem-8 --num-nodes=1 --local-ssd-count 1 \
          --node-labels dedicated=tikv --node-taints dedicated=tikv:NoSchedule

    If the TiKV node pool already exists, you can either delete the old pool and then create a new one, or change the pool name to avoid conflict.

  2. Deploy the local volume provisioner.

    You need to use the local-volume-provisioner to discover and manage the local storage. Running the following command deploys the provisioner and creates a local-storage StorageClass:

      kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.3.2/manifests/gke/local-ssd-provision/local-ssd-provision.yaml
  3. Use the local storage.

    After the steps above, the local volume provisioner can discover all the local NVMe SSD disks in the cluster.

    Modify tikv.storageClassName in the tidb-cluster.yaml file to local-storage.
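
    Before updating the cluster definition, you can quickly check that the provisioner worked, for example:

      # The local-storage StorageClass should exist, and one PV should appear per discovered local SSD.
      kubectl get storageclass local-storage
      kubectl get pv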

Deploy TiDB Operator

To deploy TiDB Operator on GKE, refer to Deploy TiDB Operator.

Deploy a TiDB cluster and the monitoring component

This section describes how to deploy a TiDB cluster and its monitoring component on GCP GKE.

Create namespace

To create a namespace to deploy the TiDB cluster, run the following command:

    kubectl create namespace tidb-cluster

Note

A namespace is a virtual cluster backed by the same physical cluster. This document takes the tidb-cluster namespace as an example. If you want to use another namespace, modify the corresponding -n or --namespace arguments.

Deploy

First, download the sample TidbCluster and TidbMonitor configuration files:

    curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-cluster.yaml && \
    curl -O https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/gcp/tidb-monitor.yaml

To further customize and configure the CR before applying it, refer to Configure a TiDB Cluster.

To deploy the TidbCluster and TidbMonitor CR in the GKE cluster, run the following command:

    kubectl create -f tidb-cluster.yaml -n tidb-cluster && \
    kubectl create -f tidb-monitor.yaml -n tidb-cluster

After the YAML files above are applied to the Kubernetes cluster, TiDB Operator creates the desired TiDB cluster and its monitoring component according to the files.

Note

If you need to deploy a TiDB cluster on ARM64 machines, refer to Deploy a TiDB Cluster on ARM64 Machines.

View the cluster status

To view the status of the starting TiDB cluster, run the following command:

    kubectl get pods -n tidb-cluster

When all the Pods are in the Running or Ready state, the TiDB cluster is successfully started. For example:

    NAME                              READY   STATUS    RESTARTS   AGE
    tidb-discovery-5cb8474d89-n8cxk   1/1     Running   0          47h
    tidb-monitor-6fbcc68669-dsjlc     3/3     Running   0          47h
    tidb-pd-0                         1/1     Running   0          47h
    tidb-pd-1                         1/1     Running   0          46h
    tidb-pd-2                         1/1     Running   0          46h
    tidb-tidb-0                       2/2     Running   0          47h
    tidb-tidb-1                       2/2     Running   0          46h
    tidb-tikv-0                       1/1     Running   0          47h
    tidb-tikv-1                       1/1     Running   0          47h
    tidb-tikv-2                       1/1     Running   0          47h
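
You can also check the TidbCluster object itself. tc is the short name of the tidbcluster custom resource, and the columns shown depend on your TiDB Operator version:

    kubectl get tc basic -n tidb-cluster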

Access the TiDB database

After you deploy a TiDB cluster, you can access the TiDB database via the MySQL client.

Prepare a bastion host

The LoadBalancer created for your TiDB cluster is an internal LoadBalancer. You can create a bastion host in the cluster VPC to access the database.

    gcloud compute instances create bastion \
        --machine-type=n1-standard-4 \
        --image-project=centos-cloud \
        --image-family=centos-7 \
        --zone=${your-region}-a

Note

${your-region}-a is zone a of the region where the cluster is located, such as us-central1-a. You can also create the bastion host in another zone in the same region.

Install the MySQL client and connect

After the bastion host is created, you can connect to the bastion host via SSH and access the TiDB cluster via the MySQL client.

  1. Connect to the bastion host via SSH:

      gcloud compute ssh tidb@bastion
  2. Install the MySQL client:

      sudo yum install mysql -y
  3. Connect the client to the TiDB cluster:

      mysql --comments -h ${tidb-nlb-dnsname} -P 4000 -u root

    ${tidb-nlb-dnsname} is the LoadBalancer IP of the TiDB service. You can view the IP in the EXTERNAL-IP field of the output of kubectl get svc basic-tidb -n tidb-cluster.

    For example:

      $ mysql --comments -h 10.128.15.243 -P 4000 -u root
      Welcome to the MariaDB monitor.  Commands end with ; or \g.
      Your MySQL connection id is 7823
      Server version: 5.7.25-TiDB-v4.0.4 TiDB Server (Apache License 2.0) Community Edition, MySQL 5.7 compatible
      Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
      Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
      MySQL [(none)]> show status;
      +--------------------+--------------------------------------+
      | Variable_name      | Value                                |
      +--------------------+--------------------------------------+
      | Ssl_cipher         |                                      |
      | Ssl_cipher_list    |                                      |
      | Ssl_verify_mode    | 0                                    |
      | Ssl_version        |                                      |
      | ddl_schema_version | 22                                   |
      | server_id          | 717420dc-0eeb-4d4a-951d-0d393aff295a |
      +--------------------+--------------------------------------+
      6 rows in set (0.01 sec)

Note

  • The default authentication plugin of MySQL 8.0 is updated from mysql_native_password to caching_sha2_password. Therefore, if you use a MySQL client from MySQL 8.0 to access the TiDB service (TiDB version < v4.0.7), and if the user account has a password, you need to explicitly specify the --default-auth=mysql_native_password parameter (see the example below).
  • By default, TiDB (starting from v4.0.2) periodically shares usage details with PingCAP to help understand how to improve the product. For details about what is shared and how to disable the sharing, see Telemetry.
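
For example, a connection command for the case described in the first bullet above might look like the following. The flag is a standard MySQL client option, and ${tidb-nlb-dnsname} is the same LoadBalancer IP as before:

    mysql --comments -h ${tidb-nlb-dnsname} -P 4000 -u root -p --default-auth=mysql_native_password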

Access the Grafana monitoring dashboard

Obtain the LoadBalancer IP of Grafana:

    kubectl -n tidb-cluster get svc basic-grafana

For example:

    $ kubectl -n tidb-cluster get svc basic-grafana
    NAME            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
    basic-grafana   LoadBalancer   10.15.255.169   34.123.168.114   3000:30657/TCP   35m

In the output above, the EXTERNAL-IP column is the LoadBalancer IP.

You can access the ${grafana-lb}:3000 address using your web browser to view monitoring metrics. Replace ${grafana-lb} with the LoadBalancer IP.

Note

The default Grafana username and password are both admin.

Upgrade

To upgrade the TiDB cluster, execute the following command:

    kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec":{"version":"${version}"}}'

The upgrade process does not finish immediately. You can watch the upgrade progress by executing kubectl get pods -n tidb-cluster --watch.
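
For example, an illustrative run that upgrades to a specific version and then watches the rolling update (the version string below is only an example; pick the version you actually need):

    kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec":{"version":"v6.5.0"}}'
    kubectl get pods -n tidb-cluster --watch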

Scale out

Before scaling out the cluster, you need to scale out the corresponding node pool so that the new instances have enough resources for operation.

This section describes how to scale out the GKE node pool and TiDB components.

Scale out GKE node pool

The following example shows how to scale out the tikv node pool of the tidb cluster to 6 nodes:

    gcloud container clusters resize tidb --node-pool tikv --num-nodes 2

Note

In the regional cluster, the nodes are created in 3 zones. Therefore, after scaling out, the number of nodes is 2 * 3 = 6.

Scale out TiDB components

After that, execute kubectl edit tc basic -n tidb-cluster and modify each component’s replicas to the desired number of replicas. The scaling-out process is then completed.
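
If you prefer a non-interactive change over kubectl edit, the same fields can be set with kubectl patch. A sketch, with example replica counts that you should adjust to your needs:

    kubectl patch tc basic -n tidb-cluster --type merge -p '{"spec":{"tikv":{"replicas":6},"tidb":{"replicas":3}}}'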

For more information on managing node pools, refer to GKE Node pools.

Deploy TiFlash and TiCDC

TiFlash is the columnar storage extension of TiKV.

TiCDC is a tool for replicating the incremental data of TiDB by pulling TiKV change logs.

The two components are not required in the deployment. This section shows a quick start example.

Create new node pools

  • Create a node pool for TiFlash:

      gcloud container node-pools create tiflash --cluster tidb --machine-type n1-highmem-8 --num-nodes=1 \
          --node-labels dedicated=tiflash --node-taints dedicated=tiflash:NoSchedule
  • Create a node pool for TiCDC:

      gcloud container node-pools create ticdc --cluster tidb --machine-type n1-standard-4 --num-nodes=1 \
          --node-labels dedicated=ticdc --node-taints dedicated=ticdc:NoSchedule

Configure and deploy

  • To deploy TiFlash, configure spec.tiflash in tidb-cluster.yaml. For example:

      spec:
        ...
        tiflash:
          baseImage: pingcap/tiflash
          maxFailoverCount: 0
          replicas: 1
          storageClaims:
          - resources:
              requests:
                storage: 100Gi
          nodeSelector:
            dedicated: tiflash
          tolerations:
          - effect: NoSchedule
            key: dedicated
            operator: Equal
            value: tiflash

    To configure other parameters, refer to Configure a TiDB Cluster.

    Warning

    TiDB Operator automatically mounts PVs in the order of the configuration in the storageClaims list. Therefore, if you need to add disks for TiFlash, make sure that you add the disks only to the end of the original configuration in the list. In addition, you must not alter the order of the original configuration.

  • To deploy TiCDC, configure spec.ticdc in tidb-cluster.yaml. For example:

      spec:
        ...
        ticdc:
          baseImage: pingcap/ticdc
          replicas: 1
          nodeSelector:
            dedicated: ticdc
          tolerations:
          - effect: NoSchedule
            key: dedicated
            operator: Equal
            value: ticdc

    Modify replicas according to your needs.

Finally, execute kubectl -n tidb-cluster apply -f tidb-cluster.yaml to update the TiDB cluster configuration.

For detailed CR configuration, refer to API references and Configure a TiDB Cluster.

Deploy TiDB Enterprise Edition

To deploy TiDB/PD/TiKV/TiFlash/TiCDC Enterprise Edition, configure spec.[tidb|pd|tikv|tiflash|ticdc].baseImage in tidb-cluster.yaml as the enterprise image. The enterprise image format is pingcap/[tidb|pd|tikv|tiflash|ticdc]-enterprise.

For example:

    spec:
      ...
      pd:
        baseImage: pingcap/pd-enterprise
      ...
      tikv:
        baseImage: pingcap/tikv-enterprise