Enable TLS between TiDB Components

This document describes how to enable Transport Layer Security (TLS) between components of the TiDB cluster in Kubernetes, which is supported since TiDB Operator v1.1.

To enable TLS between TiDB components, perform the following steps:

  1. Generate certificates for each component of the TiDB cluster to be created:

    • A set of server-side certificates for each of the PD, TiKV, TiDB, Pump, Drainer, TiFlash, TiKV Importer, and TiDB Lightning components, saved as Kubernetes Secret objects named ${cluster_name}-${component_name}-cluster-secret.

    • A set of shared client-side certificates for the various clients of each component, saved as a Kubernetes Secret object named ${cluster_name}-cluster-client-secret.

      Note:

      The Secret objects you create must follow the above naming convention. Otherwise, the deployment of the TiDB components will fail.

  2. Deploy the cluster, and set .spec.tlsCluster.enabled to true.

    Note:

    After the cluster is created, do not modify this field; otherwise, the cluster will fail to upgrade.

  3. Configure pd-ctl and tikv-ctl to connect to the cluster.

Note:

  • TiDB 4.0.5 (or later versions) and TiDB Operator 1.1.4 (or later versions) support enabling TLS for TiFlash.
  • TiDB 4.0.3 (or later versions) and TiDB Operator 1.1.3 (or later versions) support enabling TLS for TiCDC.

Certificates can be issued in multiple ways. This document describes two methods, using cfssl and using cert-manager. You can choose either of them to issue certificates for the TiDB cluster.

If you need to renew the existing TLS certificate, refer to Renew and Replace the TLS Certificate.

Generate certificates for components of the TiDB cluster

This section describes how to issue certificates using two methods: cfssl and cert-manager.

Using cfssl

  1. Download cfssl and initialize the certificate issuer:

    mkdir -p ~/bin
    curl -s -L -o ~/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    curl -s -L -o ~/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    chmod +x ~/bin/{cfssl,cfssljson}
    export PATH=$PATH:~/bin
    mkdir -p cfssl
    cd cfssl
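
    If you want to confirm that cfssl is available on your PATH before continuing, you can optionally run:

    cfssl version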
  2. Generate the ca-config.json configuration file:

    cat << EOF > ca-config.json
    {
        "signing": {
            "default": {
                "expiry": "8760h"
            },
            "profiles": {
                "internal": {
                    "expiry": "8760h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth",
                        "client auth"
                    ]
                },
                "client": {
                    "expiry": "8760h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "client auth"
                    ]
                }
            }
        }
    }
    EOF
  3. Generate the ca-csr.json configuration file:

    cat << EOF > ca-csr.json
    {
        "CN": "TiDB",
        "CA": {
            "expiry": "87600h"
        },
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "US",
                "L": "CA",
                "O": "PingCAP",
                "ST": "Beijing",
                "OU": "TiDB"
            }
        ]
    }
    EOF
  4. Generate CA by the configured option:

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
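
    Optionally, you can inspect the generated CA certificate to confirm its subject and validity period before issuing the component certificates:

    cfssl certinfo -cert ca.pem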
  5. Generate the server-side certificates:

    In this step, a set of server-side certificates is created for each component of the TiDB cluster.

    • PD

      First, generate the default pd-server.json file:

      cfssl print-defaults csr > pd-server.json

      Then, edit this file to change the CN and hosts attributes:

      ...
      "CN": "TiDB",
      "hosts": [
        "127.0.0.1",
        "::1",
        "${cluster_name}-pd",
        "${cluster_name}-pd.${namespace}",
        "${cluster_name}-pd.${namespace}.svc",
        "${cluster_name}-pd-peer",
        "${cluster_name}-pd-peer.${namespace}",
        "${cluster_name}-pd-peer.${namespace}.svc",
        "*.${cluster_name}-pd-peer",
        "*.${cluster_name}-pd-peer.${namespace}",
        "*.${cluster_name}-pd-peer.${namespace}.svc"
      ],
      ...

      ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. You can also add your customized hosts.

      Finally, generate the PD server-side certificate:

      cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal pd-server.json | cfssljson -bare pd-server
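
      Optionally, you can check that the Subject Alternative Names in the issued certificate match the hosts you configured; the same check applies to the certificates of the other components below:

      openssl x509 -noout -text -in pd-server.pem | grep -A 1 "Subject Alternative Name"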
    • TiKV

      First, generate the default tikv-server.json file:

      cfssl print-defaults csr > tikv-server.json

      Then, edit this file to change the CN and hosts attributes:

      ...
      "CN": "TiDB",
      "hosts": [
        "127.0.0.1",
        "::1",
        "${cluster_name}-tikv",
        "${cluster_name}-tikv.${namespace}",
        "${cluster_name}-tikv.${namespace}.svc",
        "${cluster_name}-tikv-peer",
        "${cluster_name}-tikv-peer.${namespace}",
        "${cluster_name}-tikv-peer.${namespace}.svc",
        "*.${cluster_name}-tikv-peer",
        "*.${cluster_name}-tikv-peer.${namespace}",
        "*.${cluster_name}-tikv-peer.${namespace}.svc"
      ],
      ...

      ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. You can also add your customized hosts.

      Finally, generate the TiKV server-side certificate:

      cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tikv-server.json | cfssljson -bare tikv-server
    • TiDB

      First, create the default tidb-server.json file:

      cfssl print-defaults csr > tidb-server.json

      Then, edit this file to change the CN and hosts attributes:

      ...
      "CN": "TiDB",
      "hosts": [
        "127.0.0.1",
        "::1",
        "${cluster_name}-tidb",
        "${cluster_name}-tidb.${namespace}",
        "${cluster_name}-tidb.${namespace}.svc",
        "${cluster_name}-tidb-peer",
        "${cluster_name}-tidb-peer.${namespace}",
        "${cluster_name}-tidb-peer.${namespace}.svc",
        "*.${cluster_name}-tidb-peer",
        "*.${cluster_name}-tidb-peer.${namespace}",
        "*.${cluster_name}-tidb-peer.${namespace}.svc"
      ],
      ...

      ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. You can also add your customized hosts.

      Finally, generate the TiDB server-side certificate:

      cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tidb-server.json | cfssljson -bare tidb-server
    • Pump

      First, create the default pump-server.json file:

      cfssl print-defaults csr > pump-server.json

      Then, edit this file to change the CN and hosts attributes:

      ...
      "CN": "TiDB",
      "hosts": [
        "127.0.0.1",
        "::1",
        "*.${cluster_name}-pump",
        "*.${cluster_name}-pump.${namespace}",
        "*.${cluster_name}-pump.${namespace}.svc"
      ],
      ...

      ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. You can also add your customized hosts.

      Finally, generate the Pump server-side certificate:

      cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal pump-server.json | cfssljson -bare pump-server
    • Drainer

      First, generate the default drainer-server.json file:

      cfssl print-defaults csr > drainer-server.json

      Then, edit this file to change the CN and hosts attributes:

      ...
      "CN": "TiDB",
      "hosts": [
        "127.0.0.1",
        "::1",
        "<for hosts list, see the following instructions>"
      ],
      ...

      Drainer is deployed using Helm. The hosts field varies with the configuration of the values.yaml file.

      If you have set the drainerName attribute when deploying Drainer as follows:

      ...
      # Changes the names of the statefulset and Pod.
      # The default value is clusterName-ReleaseName-drainer.
      # Changing the name of an existing, running Drainer is not supported.
      drainerName: my-drainer
      ...

      Then you can set the hosts attribute as described below:

      ...
      "CN": "TiDB",
      "hosts": [
        "127.0.0.1",
        "::1",
        "*.${drainer_name}",
        "*.${drainer_name}.${namespace}",
        "*.${drainer_name}.${namespace}.svc"
      ],
      ...

      If you have not set the drainerName attribute when deploying Drainer, configure the hosts attribute as follows:

      ...
      "CN": "TiDB",
      "hosts": [
        "127.0.0.1",
        "::1",
        "*.${cluster_name}-${release_name}-drainer",
        "*.${cluster_name}-${release_name}-drainer.${namespace}",
        "*.${cluster_name}-${release_name}-drainer.${namespace}.svc"
      ],
      ...

      ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. ${release_name} is the release name you set when helm install is executed. ${drainer_name} is drainerName in the values.yaml file. You can also add your customized hosts.

      Finally, generate the Drainer server-side certificate:

      cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal drainer-server.json | cfssljson -bare drainer-server
    • TiCDC

      1. Generate the default ticdc-server.json file:

        cfssl print-defaults csr > ticdc-server.json
      2. Edit this file to change the CN and hosts attributes:

        ...
        "CN": "TiDB",
        "hosts": [
          "127.0.0.1",
          "::1",
          "${cluster_name}-ticdc",
          "${cluster_name}-ticdc.${namespace}",
          "${cluster_name}-ticdc.${namespace}.svc",
          "${cluster_name}-ticdc-peer",
          "${cluster_name}-ticdc-peer.${namespace}",
          "${cluster_name}-ticdc-peer.${namespace}.svc",
          "*.${cluster_name}-ticdc-peer",
          "*.${cluster_name}-ticdc-peer.${namespace}",
          "*.${cluster_name}-ticdc-peer.${namespace}.svc"
        ],
        ...

        ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. You can also add your customized hosts.

      3. Generate the TiCDC server-side certificate:

        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal ticdc-server.json | cfssljson -bare ticdc-server
    • TiFlash

      1. Generate the default tiflash-server.json file:

        cfssl print-defaults csr > tiflash-server.json
      2. Edit this file to change the CN and hosts attributes:

        ...
        "CN": "TiDB",
        "hosts": [
          "127.0.0.1",
          "::1",
          "${cluster_name}-tiflash",
          "${cluster_name}-tiflash.${namespace}",
          "${cluster_name}-tiflash.${namespace}.svc",
          "${cluster_name}-tiflash-peer",
          "${cluster_name}-tiflash-peer.${namespace}",
          "${cluster_name}-tiflash-peer.${namespace}.svc",
          "*.${cluster_name}-tiflash-peer",
          "*.${cluster_name}-tiflash-peer.${namespace}",
          "*.${cluster_name}-tiflash-peer.${namespace}.svc"
        ],
        ...

        ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. You can also add your customized hosts.

      3. Generate the TiFlash server-side certificate:

        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal tiflash-server.json | cfssljson -bare tiflash-server
    • TiKV Importer

      If you need to restore data using TiDB Lightning, you need to generate a server-side certificate for the TiKV Importer component.

      1. Generate the default importer-server.json file:

        cfssl print-defaults csr > importer-server.json
      2. Edit this file to change the CN and hosts attributes:

        ...
        "CN": "TiDB",
        "hosts": [
          "127.0.0.1",
          "::1",
          "${cluster_name}-importer",
          "${cluster_name}-importer.${namespace}",
          "${cluster_name}-importer.${namespace}.svc",
          "*.${cluster_name}-importer",
          "*.${cluster_name}-importer.${namespace}",
          "*.${cluster_name}-importer.${namespace}.svc"
        ],
        ...

        ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. You can also add your customized hosts.

      3. Generate the TiKV Importer server-side certificate:

        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal importer-server.json | cfssljson -bare importer-server
    • TiDB Lightning

      If you need to restore data using TiDB Lightning, you need to generate a server-side certificate for the TiDB Lightning component.

      1. Generate the default lightning-server.json file:

        cfssl print-defaults csr > lightning-server.json
      2. Edit this file to change the CN and hosts attributes:

        ...
        "CN": "TiDB",
        "hosts": [
          "127.0.0.1",
          "::1",
          "${cluster_name}-lightning",
          "${cluster_name}-lightning.${namespace}",
          "${cluster_name}-lightning.${namespace}.svc"
        ],
        ...

        ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. You can also add your customized hosts.

      3. Generate the TiDB Lightning server-side certificate:

        cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=internal lightning-server.json | cfssljson -bare lightning-server
  6. Generate the client-side certificate:

    First, create the default client.json file:

    cfssl print-defaults csr > client.json

    Then, edit this file to change the CN and hosts attributes. You can leave hosts empty:

    ...
    "CN": "TiDB",
    "hosts": [],
    ...

    Finally, generate the client-side certificate:

    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
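
    Optionally, you can verify that the client certificate was issued with the client profile, which grants only the client auth usage:

    openssl x509 -noout -text -in client.pem | grep -A 1 "Extended Key Usage"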
  7. Create the Kubernetes Secret object:

    If you have generated a set of server-side certificates for each component and a shared set of client-side certificates as described in the above steps, create the Secret objects for the TiDB cluster by running the following commands:

    • The PD cluster certificate Secret:

      kubectl create secret generic ${cluster_name}-pd-cluster-secret --namespace=${namespace} --from-file=tls.crt=pd-server.pem --from-file=tls.key=pd-server-key.pem --from-file=ca.crt=ca.pem
    • The TiKV cluster certificate Secret:

      kubectl create secret generic ${cluster_name}-tikv-cluster-secret --namespace=${namespace} --from-file=tls.crt=tikv-server.pem --from-file=tls.key=tikv-server-key.pem --from-file=ca.crt=ca.pem
    • The TiDB cluster certificate Secret:

      kubectl create secret generic ${cluster_name}-tidb-cluster-secret --namespace=${namespace} --from-file=tls.crt=tidb-server.pem --from-file=tls.key=tidb-server-key.pem --from-file=ca.crt=ca.pem
    • The Pump cluster certificate Secret:

      kubectl create secret generic ${cluster_name}-pump-cluster-secret --namespace=${namespace} --from-file=tls.crt=pump-server.pem --from-file=tls.key=pump-server-key.pem --from-file=ca.crt=ca.pem
    • The Drainer cluster certificate Secret:

      kubectl create secret generic ${cluster_name}-drainer-cluster-secret --namespace=${namespace} --from-file=tls.crt=drainer-server.pem --from-file=tls.key=drainer-server-key.pem --from-file=ca.crt=ca.pem
    • The TiCDC cluster certificate Secret:

      kubectl create secret generic ${cluster_name}-ticdc-cluster-secret --namespace=${namespace} --from-file=tls.crt=ticdc-server.pem --from-file=tls.key=ticdc-server-key.pem --from-file=ca.crt=ca.pem
    • The TiFlash cluster certificate Secret:

      kubectl create secret generic ${cluster_name}-tiflash-cluster-secret --namespace=${namespace} --from-file=tls.crt=tiflash-server.pem --from-file=tls.key=tiflash-server-key.pem --from-file=ca.crt=ca.pem
    • The TiKV Importer cluster certificate Secret:

      kubectl create secret generic ${cluster_name}-importer-cluster-secret --namespace=${namespace} --from-file=tls.crt=importer-server.pem --from-file=tls.key=importer-server-key.pem --from-file=ca.crt=ca.pem
    • The TiDB Lightning cluster certificate Secret:

      kubectl create secret generic ${cluster_name}-lightning-cluster-secret --namespace=${namespace} --from-file=tls.crt=lightning-server.pem --from-file=tls.key=lightning-server-key.pem --from-file=ca.crt=ca.pem
    • The client certificate Secret:

      kubectl create secret generic ${cluster_name}-cluster-client-secret --namespace=${namespace} --from-file=tls.crt=client.pem --from-file=tls.key=client-key.pem --from-file=ca.crt=ca.pem

    You have now created two kinds of Secret objects:

    • One Secret object for each component's server-side certificate, which that server loads when it starts;
    • One shared Secret object for the clients of the components to connect with.
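
    After creating the Secret objects, you can optionally list them to confirm that their names follow the required convention:

    kubectl get secret -n ${namespace} | grep ${cluster_name}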

Using cert-manager

  1. Install cert-manager.

    Refer to cert-manager installation in Kubernetes for details.

  2. Create an Issuer to issue certificates to the TiDB cluster.

    To configure cert-manager, create the Issuer resources.

    First, create a directory which saves the files that cert-manager needs to create certificates:

    mkdir -p cert-manager
    cd cert-manager

    Then, create a tidb-cluster-issuer.yaml file with the following content:

    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: ${cluster_name}-selfsigned-ca-issuer
      namespace: ${namespace}
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: ${cluster_name}-ca
      namespace: ${namespace}
    spec:
      secretName: ${cluster_name}-ca-secret
      commonName: "TiDB"
      isCA: true
      duration: 87600h # 10yrs
      renewBefore: 720h # 30d
      issuerRef:
        name: ${cluster_name}-selfsigned-ca-issuer
        kind: Issuer
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      name: ${cluster_name}-tidb-issuer
      namespace: ${namespace}
    spec:
      ca:
        secretName: ${cluster_name}-ca-secret

    ${cluster_name} is the name of the cluster. The above YAML file creates three objects:

    • An Issuer object of the SelfSigned type, used to generate the CA certificate needed by the Issuer of the CA type;
    • A Certificate object, whose isCA is set to true;
    • An Issuer object, used to issue TLS certificates between TiDB components.

    Finally, execute the following command to create an Issuer:

    kubectl apply -f tidb-cluster-issuer.yaml
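
    You can optionally verify that the CA Certificate and both Issuers are ready before generating the component certificates; the READY column of the ${cluster_name}-ca Certificate should show True:

    kubectl get issuer,certificate -n ${namespace}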
  3. Generate the server-side certificate.

    In cert-manager, the Certificate resource represents the certificate interface. This certificate is issued and updated by the Issuer created in Step 2.

    According to Enable TLS Authentication, each component needs a server-side certificate, and all components need a shared client-side certificate for their clients.

    • PD

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-pd-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-pd-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "${cluster_name}-pd"
          - "${cluster_name}-pd.${namespace}"
          - "${cluster_name}-pd.${namespace}.svc"
          - "${cluster_name}-pd-peer"
          - "${cluster_name}-pd-peer.${namespace}"
          - "${cluster_name}-pd-peer.${namespace}.svc"
          - "*.${cluster_name}-pd-peer"
          - "*.${cluster_name}-pd-peer.${namespace}"
          - "*.${cluster_name}-pd-peer.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      ${cluster_name} is the name of the cluster. Configure the items as follows:

      • Set spec.secretName to ${cluster_name}-pd-cluster-secret.

      • Add server auth and client auth in usages.

      • Add the following DNSs in dnsNames. You can also add other DNSs according to your needs:

        • ${cluster_name}-pd
        • ${cluster_name}-pd.${namespace}
        • ${cluster_name}-pd.${namespace}.svc
        • ${cluster_name}-pd-peer
        • ${cluster_name}-pd-peer.${namespace}
        • ${cluster_name}-pd-peer.${namespace}.svc
        • *.${cluster_name}-pd-peer
        • *.${cluster_name}-pd-peer.${namespace}
        • *.${cluster_name}-pd-peer.${namespace}.svc
      • Add the following two IPs in ipAddresses. You can also add other IPs according to your needs:

        • 127.0.0.1
        • ::1
      • Add the Issuer created above in issuerRef.

      • For other attributes, refer to cert-manager API.

        After the object is created, cert-manager generates a ${cluster_name}-pd-cluster-secret Secret object to be used by the PD component of the TiDB server.
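
      For example, assuming you save the above manifest in a file named pd-cert.yaml (the file name here is arbitrary), you can create the Certificate object as follows; the manifests for the other components below can be applied in the same way:

      kubectl apply -f pd-cert.yaml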

    • TiKV

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-tikv-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-tikv-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "${cluster_name}-tikv"
          - "${cluster_name}-tikv.${namespace}"
          - "${cluster_name}-tikv.${namespace}.svc"
          - "${cluster_name}-tikv-peer"
          - "${cluster_name}-tikv-peer.${namespace}"
          - "${cluster_name}-tikv-peer.${namespace}.svc"
          - "*.${cluster_name}-tikv-peer"
          - "*.${cluster_name}-tikv-peer.${namespace}"
          - "*.${cluster_name}-tikv-peer.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      ${cluster_name} is the name of the cluster. Configure the items as follows:

      • Set spec.secretName to ${cluster_name}-tikv-cluster-secret.

      • Add server auth and client auth in usages.

      • Add the following DNSs in dnsNames. You can also add other DNSs according to your needs:

        • ${cluster_name}-tikv
        • ${cluster_name}-tikv.${namespace}
        • ${cluster_name}-tikv.${namespace}.svc
        • ${cluster_name}-tikv-peer
        • ${cluster_name}-tikv-peer.${namespace}
        • ${cluster_name}-tikv-peer.${namespace}.svc
        • *.${cluster_name}-tikv-peer
        • *.${cluster_name}-tikv-peer.${namespace}
        • *.${cluster_name}-tikv-peer.${namespace}.svc
      • Add the following 2 IPs in ipAddresses. You can also add other IPs according to your needs:

        • 127.0.0.1
        • ::1
      • Add the Issuer created above in issuerRef.

      • For other attributes, refer to cert-manager API.

        After the object is created, cert-manager generates a ${cluster_name}-tikv-cluster-secret Secret object to be used by the TiKV component of the TiDB server.

    • TiDB

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-tidb-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-tidb-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "${cluster_name}-tidb"
          - "${cluster_name}-tidb.${namespace}"
          - "${cluster_name}-tidb.${namespace}.svc"
          - "${cluster_name}-tidb-peer"
          - "${cluster_name}-tidb-peer.${namespace}"
          - "${cluster_name}-tidb-peer.${namespace}.svc"
          - "*.${cluster_name}-tidb-peer"
          - "*.${cluster_name}-tidb-peer.${namespace}"
          - "*.${cluster_name}-tidb-peer.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      ${cluster_name} is the name of the cluster. Configure the items as follows:

      • Set spec.secretName to ${cluster_name}-tidb-cluster-secret

      • Add server auth and client auth in usages

      • Add the following DNSs in dnsNames. You can also add other DNSs according to your needs:

        • ${cluster_name}-tidb
        • ${cluster_name}-tidb.${namespace}
        • ${cluster_name}-tidb.${namespace}.svc
        • ${cluster_name}-tidb-peer
        • ${cluster_name}-tidb-peer.${namespace}
        • ${cluster_name}-tidb-peer.${namespace}.svc
        • *.${cluster_name}-tidb-peer
        • *.${cluster_name}-tidb-peer.${namespace}
        • *.${cluster_name}-tidb-peer.${namespace}.svc
      • Add the following 2 IPs in ipAddresses. You can also add other IPs according to your needs:

        • 127.0.0.1
        • ::1
      • Add the Issuer created above in issuerRef.

      • For other attributes, refer to cert-manager API.

        After the object is created, cert-manager generates a ${cluster_name}-tidb-cluster-secret Secret object to be used by the TiDB component of the TiDB server.

    • Pump

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-pump-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-pump-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "*.${cluster_name}-pump"
          - "*.${cluster_name}-pump.${namespace}"
          - "*.${cluster_name}-pump.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      ${cluster_name} is the name of the cluster. Configure the items as follows:

      • Set spec.secretName to ${cluster_name}-pump-cluster-secret

      • Add server auth and client auth in usages

      • Add the following DNSs in dnsNames. You can also add other DNSs according to your needs:

        • *.${cluster_name}-pump
        • *.${cluster_name}-pump.${namespace}
        • *.${cluster_name}-pump.${namespace}.svc
      • Add the following 2 IPs in ipAddresses. You can also add other IPs according to your needs:

        • 127.0.0.1
        • ::1
      • Add the Issuer created above in the issuerRef

      • For other attributes, refer to cert-manager API.

        After the object is created, cert-manager generates a ${cluster_name}-pump-cluster-secret Secret object to be used by the Pump component of the TiDB server.

    • Drainer

      Drainer is deployed using Helm. The dnsNames field varies with the configuration of the values.yaml file.

      If you set the drainerName attribute when deploying Drainer as follows:

      ...
      # Changes the name of the statefulset and Pod.
      # The default value is clusterName-ReleaseName-drainer.
      # Changing the name of an existing, running Drainer is not supported.
      drainerName: my-drainer
      ...

      Then you need to configure the certificate as described below:

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-drainer-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-drainer-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "*.${drainer_name}"
          - "*.${drainer_name}.${namespace}"
          - "*.${drainer_name}.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      If you did not set the drainerName attribute when deploying Drainer, configure the dnsNames attribute as follows:

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-drainer-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-drainer-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "*.${cluster_name}-${release_name}-drainer"
          - "*.${cluster_name}-${release_name}-drainer.${namespace}"
          - "*.${cluster_name}-${release_name}-drainer.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      ${cluster_name} is the name of the cluster. ${namespace} is the namespace in which the TiDB cluster is deployed. ${release_name} is the release name you set when helm install is executed. ${drainer_name} is drainerName in the values.yaml file. You can also add your customized dnsNames.

      • Set spec.secretName to ${cluster_name}-drainer-cluster-secret.

      • Add server auth and client auth in usages.

      • See the above descriptions for dnsNames.

      • Add the following 2 IPs in ipAddresses. You can also add other IPs according to your needs:

        • 127.0.0.1
        • ::1
      • Add the Issuer created above in issuerRef.

      • For other attributes, refer to cert-manager API.

        After the object is created, cert-manager generates a ${cluster_name}-drainer-cluster-secret Secret object to be used by the Drainer component of the TiDB server.

    • TiCDC

      Starting from v4.0.3, TiCDC supports TLS. TiDB Operator supports enabling TLS for TiCDC since v1.1.3.

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-ticdc-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-ticdc-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "${cluster_name}-ticdc"
          - "${cluster_name}-ticdc.${namespace}"
          - "${cluster_name}-ticdc.${namespace}.svc"
          - "${cluster_name}-ticdc-peer"
          - "${cluster_name}-ticdc-peer.${namespace}"
          - "${cluster_name}-ticdc-peer.${namespace}.svc"
          - "*.${cluster_name}-ticdc-peer"
          - "*.${cluster_name}-ticdc-peer.${namespace}"
          - "*.${cluster_name}-ticdc-peer.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      In the file, ${cluster_name} is the name of the cluster:

      • Set spec.secretName to ${cluster_name}-ticdc-cluster-secret.

      • Add server auth and client auth in usages.

      • Add the following DNSs in dnsNames. You can also add other DNSs according to your needs:

        • ${cluster_name}-ticdc
        • ${cluster_name}-ticdc.${namespace}
        • ${cluster_name}-ticdc.${namespace}.svc
        • ${cluster_name}-ticdc-peer
        • ${cluster_name}-ticdc-peer.${namespace}
        • ${cluster_name}-ticdc-peer.${namespace}.svc
        • *.${cluster_name}-ticdc-peer
        • *.${cluster_name}-ticdc-peer.${namespace}
        • *.${cluster_name}-ticdc-peer.${namespace}.svc
      • Add the following 2 IPs in ipAddresses. You can also add other IPs according to your needs:

        • 127.0.0.1
        • ::1
      • Add the Issuer created above in issuerRef.

      • For other attributes, refer to cert-manager API.

        After the object is created, cert-manager generates a ${cluster_name}-ticdc-cluster-secret Secret object to be used by the TiCDC component of the TiDB server.

    • TiFlash

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-tiflash-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-tiflash-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "${cluster_name}-tiflash"
          - "${cluster_name}-tiflash.${namespace}"
          - "${cluster_name}-tiflash.${namespace}.svc"
          - "${cluster_name}-tiflash-peer"
          - "${cluster_name}-tiflash-peer.${namespace}"
          - "${cluster_name}-tiflash-peer.${namespace}.svc"
          - "*.${cluster_name}-tiflash-peer"
          - "*.${cluster_name}-tiflash-peer.${namespace}"
          - "*.${cluster_name}-tiflash-peer.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      In the file, ${cluster_name} is the name of the cluster:

      • Set spec.secretName to ${cluster_name}-tiflash-cluster-secret.

      • Add server auth and client auth in usages.

      • Add the following DNSs in dnsNames. You can also add other DNSs according to your needs:

        • ${cluster_name}-tiflash
        • ${cluster_name}-tiflash.${namespace}
        • ${cluster_name}-tiflash.${namespace}.svc
        • ${cluster_name}-tiflash-peer
        • ${cluster_name}-tiflash-peer.${namespace}
        • ${cluster_name}-tiflash-peer.${namespace}.svc
        • *.${cluster_name}-tiflash-peer
        • *.${cluster_name}-tiflash-peer.${namespace}
        • *.${cluster_name}-tiflash-peer.${namespace}.svc
      • Add the following 2 IP addresses in ipAddresses. You can also add other IP addresses according to your needs:

        • 127.0.0.1
        • ::1
      • Add the Issuer created above in issuerRef.

      • For other attributes, refer to cert-manager API.

        After the object is created, cert-manager generates a ${cluster_name}-tiflash-cluster-secret Secret object to be used by the TiFlash component of the TiDB server.

    • TiKV Importer

      If you need to restore data using TiDB Lightning, you need to generate a server-side certificate for the TiKV Importer component.

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-importer-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-importer-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "${cluster_name}-importer"
          - "${cluster_name}-importer.${namespace}"
          - "${cluster_name}-importer.${namespace}.svc"
          - "*.${cluster_name}-importer"
          - "*.${cluster_name}-importer.${namespace}"
          - "*.${cluster_name}-importer.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      In the file, ${cluster_name} is the name of the cluster:

      • Set spec.secretName to ${cluster_name}-importer-cluster-secret.

      • Add server auth and client auth in usages.

      • Add the following DNSs in dnsNames. You can also add other DNSs according to your needs:

        • ${cluster_name}-importer
        • ${cluster_name}-importer.${namespace}
        • ${cluster_name}-importer.${namespace}.svc
        • *.${cluster_name}-importer
        • *.${cluster_name}-importer.${namespace}
        • *.${cluster_name}-importer.${namespace}.svc
      • Add the following 2 IP addresses in ipAddresses. You can also add other IP addresses according to your needs:

        • 127.0.0.1
        • ::1
      • Add the Issuer created above in issuerRef.

      • For other attributes, refer to cert-manager API.

        After the object is created, cert-manager generates a ${cluster_name}-importer-cluster-secret Secret object to be used by the TiKV Importer component of the TiDB server.

    • TiDB Lightning

      If you need to restore data using TiDB Lightning, you need to generate a server-side certificate for the TiDB Lightning component.

      apiVersion: cert-manager.io/v1
      kind: Certificate
      metadata:
        name: ${cluster_name}-lightning-cluster-secret
        namespace: ${namespace}
      spec:
        secretName: ${cluster_name}-lightning-cluster-secret
        duration: 8760h # 365d
        renewBefore: 360h # 15d
        subject:
          organizations:
            - PingCAP
        commonName: "TiDB"
        usages:
          - server auth
          - client auth
        dnsNames:
          - "${cluster_name}-lightning"
          - "${cluster_name}-lightning.${namespace}"
          - "${cluster_name}-lightning.${namespace}.svc"
        ipAddresses:
          - 127.0.0.1
          - ::1
        issuerRef:
          name: ${cluster_name}-tidb-issuer
          kind: Issuer
          group: cert-manager.io

      In the file, ${cluster_name} is the name of the cluster:

      • Set spec.secretName to ${cluster_name}-lightning-cluster-secret.

      • Add server auth and client auth in usages.

      • Add the following DNSs in dnsNames. You can also add other DNSs according to your needs:

        • ${cluster_name}-lightning
        • ${cluster_name}-lightning.${namespace}
        • ${cluster_name}-lightning.${namespace}.svc
      • Add the following 2 IP addresses in ipAddresses. You can also add other IP addresses according to your needs:

        • 127.0.0.1
        • ::1
      • Add the Issuer created above in issuerRef.

      • For other attributes, refer to cert-manager API.

        After the object is created, cert-manager generates a ${cluster_name}-lightning-cluster-secret Secret object to be used by the TiDB Lightning component of the TiDB server.

  4. Generate the client-side certificate for components of the TiDB cluster.

    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: ${cluster_name}-cluster-client-secret
      namespace: ${namespace}
    spec:
      secretName: ${cluster_name}-cluster-client-secret
      duration: 8760h # 365d
      renewBefore: 360h # 15d
      subject:
        organizations:
          - PingCAP
      commonName: "TiDB"
      usages:
        - client auth
      issuerRef:
        name: ${cluster_name}-tidb-issuer
        kind: Issuer
        group: cert-manager.io

    ${cluster_name} is the name of the cluster. Configure the items as follows:

    • Set spec.secretName to ${cluster_name}-cluster-client-secret.
    • Add client auth in usages.
    • You can leave dnsNames and ipAddresses empty.
    • Add the Issuer created above in issuerRef.
    • For other attributes, refer to cert-manager API.

    After the object is created, cert-manager generates a ${cluster_name}-cluster-client-secret Secret object to be used by the clients of the TiDB components.
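
    You can optionally confirm that all Certificate objects have been issued and that the corresponding Secret objects exist:

    kubectl get certificate -n ${namespace}
    kubectl get secret -n ${namespace} | grep ${cluster_name}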

Deploy the TiDB cluster

When you deploy a TiDB cluster, you can enable TLS between TiDB components, and set the cert-allowed-cn configuration item (for TiDB, the configuration item is cluster-verify-cn) to verify the CN (Common Name) of each component’s certificate.

Note:

Currently, you can set only one value for the cert-allowed-cn configuration item of PD. Therefore, the commonName of all Certificate objects must be the same.

In this step, you need to perform the following operations:

  • Create a TiDB cluster
  • Enable TLS between the TiDB components, and enable CN verification
  • Deploy a monitoring system
  • Deploy the Pump component, and enable CN verification
  1. Create a TiDB cluster:

    Create the tidb-cluster.yaml file:

    apiVersion: pingcap.com/v1alpha1
    kind: TidbCluster
    metadata:
      name: ${cluster_name}
      namespace: ${namespace}
    spec:
      tlsCluster:
        enabled: true
      version: v5.4.0
      timezone: UTC
      pvReclaimPolicy: Retain
      pd:
        baseImage: pingcap/pd
        maxFailoverCount: 0
        replicas: 1
        requests:
          storage: "10Gi"
        config:
          security:
            cert-allowed-cn:
              - TiDB
      tikv:
        baseImage: pingcap/tikv
        maxFailoverCount: 0
        replicas: 1
        requests:
          storage: "100Gi"
        config:
          security:
            cert-allowed-cn:
              - TiDB
      tidb:
        baseImage: pingcap/tidb
        maxFailoverCount: 0
        replicas: 1
        service:
          type: ClusterIP
        config:
          security:
            cluster-verify-cn:
              - TiDB
      pump:
        baseImage: pingcap/tidb-binlog
        replicas: 1
        requests:
          storage: "100Gi"
        config:
          security:
            cert-allowed-cn:
              - TiDB
    ---
    apiVersion: pingcap.com/v1alpha1
    kind: TidbMonitor
    metadata:
      name: ${cluster_name}
      namespace: ${namespace}
    spec:
      clusters:
        - name: ${cluster_name}
      prometheus:
        baseImage: prom/prometheus
        version: v2.27.1
      grafana:
        baseImage: grafana/grafana
        version: 7.5.11
      initializer:
        baseImage: pingcap/tidb-monitor-initializer
        version: v5.4.0
      reloader:
        baseImage: pingcap/tidb-monitor-reloader
        version: v1.0.1
      prometheusReloader:
        baseImage: quay.io/prometheus-operator/prometheus-config-reloader
        version: v0.49.0
      imagePullPolicy: IfNotPresent

    Execute kubectl apply -f tidb-cluster.yaml to create a TiDB cluster.

    This operation also includes deploying a monitoring system and the Pump component.
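
    After the cluster is created, you can optionally check that all Pods are running and that TLS is enabled in the cluster spec:

    kubectl get pods -n ${namespace} -l app.kubernetes.io/instance=${cluster_name}
    kubectl get tc ${cluster_name} -n ${namespace} -o jsonpath='{.spec.tlsCluster.enabled}'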

  2. Create a Drainer component and enable TLS and CN verification:

    • Method 1: Set drainerName when you create Drainer.

      Edit the values.yaml file, set drainerName, and enable the TLS feature:

      ...
      drainerName: ${drainer_name}
      tlsCluster:
        enabled: true
        certAllowedCN:
          - TiDB
      ...

      Deploy the Drainer cluster:

      helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${helm_version} -f values.yaml
    • Method 2: Do not set drainerName when you create Drainer.

      Edit the values.yaml file, and enable the TLS feature:

      ...
      tlsCluster:
        enabled: true
        certAllowedCN:
          - TiDB
      ...

      Deploy the Drainer cluster:

      helm install ${release_name} pingcap/tidb-drainer --namespace=${namespace} --version=${helm_version} -f values.yaml
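
    With either method, you can optionally confirm that the Drainer Pod starts successfully after the chart is installed:

    kubectl get pods -n ${namespace} | grep drainer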
  3. Create the Backup/Restore resource object:

    • Create the backup.yaml file:

      apiVersion: pingcap.com/v1alpha1
      kind: Backup
      metadata:
        name: ${cluster_name}-backup
        namespace: ${namespace}
      spec:
        backupType: full
        br:
          cluster: ${cluster_name}
          clusterNamespace: ${namespace}
          sendCredToTikv: true
        from:
          host: ${host}
          secretName: ${tidb_secret}
          port: 4000
          user: root
        s3:
          provider: aws
          region: ${my_region}
          secretName: ${s3_secret}
          bucket: ${my_bucket}
          prefix: ${my_folder}

      Deploy Backup:

      kubectl apply -f backup.yaml
    • Create the restore.yaml file:

      apiVersion: pingcap.com/v1alpha1
      kind: Restore
      metadata:
        name: ${cluster_name}-restore
        namespace: ${namespace}
      spec:
        backupType: full
        br:
          cluster: ${cluster_name}
          clusterNamespace: ${namespace}
          sendCredToTikv: true
        to:
          host: ${host}
          secretName: ${tidb_secret}
          port: 4000
          user: root
        s3:
          provider: aws
          region: ${my_region}
          secretName: ${s3_secret}
          bucket: ${my_bucket}
          prefix: ${my_folder}

      Deploy Restore:

      kubectl apply -f restore.yaml
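
    You can optionally watch the status of the Backup and Restore objects to confirm that they complete successfully:

    kubectl get backup,restore -n ${namespace}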

Configure pd-ctl and tikv-ctl to connect to the cluster

  1. Mount the certificates.

    Configure spec.pd.mountClusterClientSecret: true and spec.tikv.mountClusterClientSecret: true with the following command:

    kubectl patch tc ${cluster_name} -n ${namespace} --type merge -p '{"spec":{"pd":{"mountClusterClientSecret": true},"tikv":{"mountClusterClientSecret": true}}}'

    Note:

    • The above configuration triggers a rolling update of the PD and TiKV clusters.
    • The above configurations are supported since TiDB Operator v1.1.5.
  2. Use pd-ctl to connect to the PD cluster.

    Get into the PD Pod:

    kubectl exec -it ${cluster_name}-pd-0 -n ${namespace} sh

    Use pd-ctl:

    cd /var/lib/cluster-client-tls
    /pd-ctl --cacert=ca.crt --cert=tls.crt --key=tls.key -u https://127.0.0.1:2379 member
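
    Alternatively, you can query the PD API directly over TLS with the same certificates; the /pd/api/v1/members endpoint used below is the standard PD members API:

    curl --cacert ca.crt --cert tls.crt --key tls.key https://127.0.0.1:2379/pd/api/v1/members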
  3. Use tikv-ctl to connect to the TiKV cluster.

    Get into the TiKV Pod:

    kubectl exec -it ${cluster_name}-tikv-0 -n ${namespace} sh

    Use tikv-ctl:

    cd /var/lib/cluster-client-tls
    /tikv-ctl --ca-path=ca.crt --cert-path=tls.crt --key-path=tls.key --host 127.0.0.1:20160 cluster