Maintain Different TiDB Clusters Separately Using Multiple Sets of TiDB Operator

You can use a single set of TiDB Operator to manage multiple TiDB clusters. However, in the following scenarios, you can deploy multiple sets of TiDB Operator, each managing different TiDB clusters:

  • You need to perform a canary upgrade on TiDB Operator so that the potential issues of the new version do not affect your application.
  • Multiple TiDB clusters exist in your organization, and each cluster belongs to different teams. Each team needs to manage their own cluster.

This document describes how to deploy multiple sets of TiDB Operator to manage different TiDB clusters.

When you use TiDB Operator, tidb-scheduler is not mandatory. Refer to tidb-scheduler and default-scheduler to confirm whether you need to deploy tidb-scheduler.

Note

  • Currently, you can only deploy multiple sets of tidb-controller-manager and tidb-scheduler. Deploying multiple sets of AdvancedStatefulSet controller and tidb-admission-webhook is not supported.
  • If you have deployed multiple sets of TiDB Operator and only some of them enable Advanced StatefulSet, the same TidbCluster Custom Resource (CR) cannot be switched among these TiDB Operator.
  • This feature is supported since v1.1.10.

Deploy multiple sets of TiDB Operator

  1. Deploy the first set of TiDB Operator.

    Refer to Deploy TiDB Operator - Customize TiDB Operator to deploy the first set of TiDB Operator. Add the following configuration to the values.yaml file:

    controllerManager:
      selector:
      - user=dev
  2. Deploy the first TiDB cluster.

    1. Refer to Configure the TiDB Cluster - Configure TiDB deployment to configure the TidbCluster CR, and configure labels to match the selector set in the last step. For example:

      apiVersion: pingcap.com/v1alpha1
      kind: TidbCluster
      metadata:
        name: basic1
        labels:
          user: dev
      spec:
        ...

      If labels is not set when you deploy the TiDB cluster, you can configure labels by running the following command:

      kubectl -n ${namespace} label tidbcluster ${cluster_name} user=dev
    2. Refer to Deploy TiDB in General Kubernetes to deploy the TiDB cluster. Confirm that each component in the cluster is started normally.

  3. Deploy the second set of TiDB Operator.

    Refer to Deploy TiDB Operator to deploy the second set of TiDB Operator. Add the following configuration to the values.yaml file, and deploy the second TiDB Operator (without tidb-scheduler) in a different namespace (such as tidb-admin-qa) with a different Helm release name (such as helm install tidb-operator-qa ...):

    controllerManager:
      selector:
      - user=qa
    appendReleaseSuffix: true
    scheduler:
      # If you do not need tidb-scheduler, set this value to false.
      create: false
    advancedStatefulset:
      create: false
    admissionWebhook:
      create: false

    Note

    • It is recommended to deploy the new TiDB Operator in a separate namespace.
    • Set appendReleaseSuffix to true.
    • If you configure scheduler.create: true, a tidb-scheduler named {{ .scheduler.schedulerName }}-{{.Release.Name}} is created. To use this tidb-scheduler, you need to configure spec.schedulerName in the TidbCluster CR to the name of this scheduler.
    • You need to set advancedStatefulset.create: false and admissionWebhook.create: false, because deploying multiple sets of AdvancedStatefulSet controller and tidb-admission-webhook is not supported.
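    For instance, if you instead set scheduler.create: true for this second TiDB Operator, kept the chart's default scheduler name (tidb-scheduler), and used the Helm release name tidb-operator-qa, the scheduler would be named tidb-scheduler-tidb-operator-qa. A hypothetical sketch of the TidbCluster CR referencing it (the scheduler name below is derived from these assumptions, not taken from this document):

```yaml
# Hypothetical sketch: assumes scheduler.create: true, the default
# schedulerName (tidb-scheduler), and the Helm release name tidb-operator-qa,
# so the scheduler is named tidb-scheduler-tidb-operator-qa.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic2
  labels:
    user: qa
spec:
  schedulerName: tidb-scheduler-tidb-operator-qa
  ...
```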
  4. Deploy the second TiDB cluster.

    1. Refer to Configure the TiDB Cluster to configure the TidbCluster CR, and configure labels to match the selector set in the last step. For example:

      apiVersion: pingcap.com/v1alpha1
      kind: TidbCluster
      metadata:
        name: basic2
        labels:
          user: qa
      spec:
        ...

      If labels is not set when you deploy the TiDB cluster, you can configure labels by running the following command:

      kubectl -n ${namespace} label tidbcluster ${cluster_name} user=qa
    2. Refer to Deploy TiDB in General Kubernetes to deploy the TiDB cluster. Confirm that each component in the cluster is started normally.

  5. View the logs of the two sets of TiDB Operator, and confirm that each TiDB Operator manages the TiDB cluster that matches the corresponding selectors.

    For example:

    View the log of tidb-controller-manager of the first TiDB Operator:

    kubectl -n tidb-admin logs tidb-controller-manager-55b887bdc9-lzdwv

    Output

    ...
    I0113 02:50:13.195779 1 main.go:69] FLAG: --selector="user=dev"
    ...
    I0113 02:50:32.409378 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully
    I0113 02:50:32.773635 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully
    I0113 02:51:00.294241 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-1/basic1] updated successfully

    View the log of tidb-controller-manager of the second TiDB Operator:

    kubectl -n tidb-admin-qa logs tidb-controller-manager-qa-5dfcd7f9-vll4c

    Output

    ...
    I0113 02:50:13.195779 1 main.go:69] FLAG: --selector="user=qa"
    ...
    I0113 03:38:43.859387 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-2/basic2] updated successfully
    I0113 03:38:45.060028 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-2/basic2] updated successfully
    I0113 03:38:46.261045 1 tidbcluster_control.go:69] TidbCluster: [tidb-cluster-2/basic2] updated successfully

    By comparing the logs of the two sets of TiDB Operator, you can confirm that the first TiDB Operator only manages the tidb-cluster-1/basic1 cluster, and the second TiDB Operator only manages the tidb-cluster-2/basic2 cluster.

If you want to deploy a third or more sets of TiDB Operator, repeat step 3, step 4, and step 5.
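Each additional TiDB Operator follows the same values.yaml pattern as step 3. As an illustration, a hypothetical third set might look like the following (the user=ops label, the tidb-admin-ops namespace, and the tidb-operator-ops release name are placeholders, not values from this document):

```yaml
# Hypothetical values.yaml sketch for a third TiDB Operator.
# The selector value (user=ops) is a placeholder; use a label that
# matches the TidbCluster CRs this set should manage.
controllerManager:
  selector:
  - user=ops
appendReleaseSuffix: true
scheduler:
  create: false
advancedStatefulset:
  create: false
admissionWebhook:
  create: false
```

You would then install it with a distinct Helm release name in its own namespace (for example, helm install tidb-operator-ops ... in tidb-admin-ops) and label the corresponding TidbCluster CRs with user=ops.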

In the values.yaml file in the tidb-operator chart, the following parameters are related to the deployment of multiple sets of TiDB Operator:

  • appendReleaseSuffix

    If this parameter is set to true, when you deploy TiDB Operator, the Helm chart automatically adds a suffix (-{{ .Release.Name }}) to the name of resources related to tidb-controller-manager and tidb-scheduler.

    For example, if you execute helm install canary pingcap/tidb-operator ..., the name of the tidb-controller-manager deployment is tidb-controller-manager-canary.

    If you need to deploy multiple sets of TiDB Operator, set this parameter to true.

    Default value: false.

  • controllerManager.create

    Controls whether to create tidb-controller-manager.

    Default value: true.

  • controllerManager.selector

    Sets the -selector parameter for tidb-controller-manager. This parameter filters the CRs managed by this tidb-controller-manager according to the CR labels. If multiple selectors are configured, they are combined with a logical AND.

    Default value: [] (tidb-controller-manager controls all CRs).

    Example:

    selector:
    - canary-release=v1
    - k1==v1
    - k2!=v2
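    To illustrate the AND semantics, a TidbCluster CR matched by all three example selectors above would need labels such as the following sketch (the name and the k2 value are placeholders):

```yaml
# Sketch: labels satisfying canary-release=v1 AND k1==v1 AND k2!=v2.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: example
  labels:
    canary-release: v1
    k1: v1
    k2: v3   # any value other than v2 satisfies k2!=v2
spec:
  ...
```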
  • scheduler.create

    Controls whether to create tidb-scheduler.

    Default value: true.