Import Data

This document describes how to import data into a TiDB cluster in Kubernetes using TiDB Lightning.

TiDB Lightning contains two components: tidb-lightning and tikv-importer. In Kubernetes, tikv-importer is part of the Helm chart of the TiDB cluster and is deployed as a StatefulSet with replicas=1, while tidb-lightning is in a separate Helm chart and deployed as a Job.

TiDB Lightning supports three backends: Importer-backend, Local-backend, and TiDB-backend. For the differences between these backends and how to choose among them, see TiDB Lightning Backends.

Deploy TiDB Lightning

Step 1. Configure TiDB Lightning

Use the following command to save the default configuration of TiDB Lightning to the tidb-lightning-values.yaml file:

  helm inspect values pingcap/tidb-lightning --version=${chart_version} > tidb-lightning-values.yaml

Configure the backend field in the configuration file depending on your needs. The available values are local and tidb.

  # The delivery backend used to import data (valid options include `local` and `tidb`).
  # If set to `local`, then the following `sortedKV` should be set.
  backend: local

If you use the local backend, you must set sortedKV in values.yaml to create the corresponding PVC. The PVC is used for local KV sorting.

  # For `local` backend, an extra PV is needed for local KV sorting.
  sortedKV:
    storageClassName: local-storage
    storage: 100Gi

Configure checkpoint

Starting from v1.1.10, the tidb-lightning Helm chart saves the TiDB Lightning checkpoint information in the directory of the source data. When a new tidb-lightning Job is running, it can resume the data import according to the checkpoint information.

For versions earlier than v1.1.10, you can modify config in values.yaml to save the checkpoint information in the target TiDB cluster, other MySQL-compatible databases or a shared storage directory. For more information, refer to TiDB Lightning checkpoint.
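For example, a minimal sketch of such a checkpoint configuration in the config field of values.yaml, assuming the checkpoint information is persisted in a MySQL-compatible database (the DSN below is a placeholder; adjust it to your environment):

  config: |
    [checkpoint]
    enable = true
    driver = "mysql"
    # Placeholder DSN pointing at the target TiDB cluster or another MySQL-compatible database
    dsn = "root:${password}@tcp(${tidb_host}:4000)/"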

Configure TLS

If TLS between components has been enabled on the target TiDB cluster (spec.tlsCluster.enabled: true), refer to Generate certificates for components of the TiDB cluster to generate a server-side certificate for TiDB Lightning, and configure tlsCluster.enabled: true in values.yaml to enable TLS between components.

If the target TiDB cluster has enabled TLS for the MySQL client (spec.tidb.tlsClient.enabled: true), and the corresponding client-side certificate is configured (the Kubernetes Secret object is ${cluster_name}-tidb-client-secret), you can configure tlsClient.enabled: true in values.yaml to enable TiDB Lightning to connect to the TiDB server using TLS.

To use different client certificates to connect to the TiDB server, refer to Issue two sets of certificates for the TiDB cluster to generate the client-side certificate for TiDB Lightning, and configure the corresponding Kubernetes secret object in tlsCluster.tlsClientSecretName in values.yaml.
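Putting the above together, a minimal sketch of the TLS-related fields in values.yaml (enable only the parts that match the TLS settings of the target cluster, as described above):

  # TLS between TiDB Lightning and other components of the TiDB cluster
  tlsCluster:
    enabled: true
  # TLS between TiDB Lightning and the TiDB server (MySQL protocol)
  tlsClient:
    enabled: true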

Note

If TLS is enabled between components via tlsCluster.enabled: true but not enabled between TiDB Lightning and the TiDB server via tlsClient.enabled: true, you need to explicitly disable TLS between TiDB Lightning and the TiDB server in config in values.yaml:

  [tidb]
  tls="false"

Step 2. Configure the data source

The tidb-lightning Helm chart supports local and remote data sources, which correspond to three modes: local, remote, and ad hoc. The three modes cannot be used together; you can configure only one mode.

Local

In the local mode, tidb-lightning reads the backup data from a directory on one of the Kubernetes nodes.

  dataSource:
    local:
      nodeName: kind-worker3
      hostPath: /data/export-20190820

The descriptions of the related fields are as follows:

  • dataSource.local.nodeName: the name of the node where the backup data directory is located.
  • dataSource.local.hostPath: the path of the backup data. The path must contain a file named metadata.
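Before deploying, you can verify that the metadata file is present on that node. A quick sketch, assuming you have SSH access to the node (${node_user} is a placeholder):

  ssh ${node_user}@kind-worker3 "ls -l /data/export-20190820/metadata"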

Remote

Unlike the local mode, the remote mode uses rclone to download the backup tarball file or the backup directory from network storage to a PV. Any cloud storage supported by rclone should work, but currently only the following have been tested: Google Cloud Storage (GCS), Amazon S3, and Ceph Object Storage.

To restore backup data from the remote source, take the following steps:

  1. Grant permissions to the remote storage.

    If you use Amazon S3 as the storage, refer to AWS account Permissions. The configuration varies with the permission granting method.

    If you use Ceph as the storage, you can only grant permissions by importing AccessKey and SecretKey. See Grant permissions by AccessKey and SecretKey.

    If you use GCS as the storage, refer to GCS account permissions.

    • Grant permissions by importing AccessKey and SecretKey

      1. Create a Secret configuration file secret.yaml containing the rclone configuration. A sample configuration is listed below. Only one cloud storage configuration is required.

        apiVersion: v1
        kind: Secret
        metadata:
          name: cloud-storage-secret
        type: Opaque
        stringData:
          rclone.conf: |
            [s3]
            type = s3
            provider = AWS
            env_auth = false
            access_key_id = ${access_key}
            secret_access_key = ${secret_key}
            region = us-east-1
            [ceph]
            type = s3
            provider = Ceph
            env_auth = false
            access_key_id = ${access_key}
            secret_access_key = ${secret_key}
            endpoint = ${endpoint}
            region = :default-placement
            [gcs]
            type = google cloud storage
            # The service account must include Storage Object Viewer role
            # The content can be retrieved by `cat ${service-account-file} | jq -c .`
            service_account_credentials = ${service_account_json_file_content}
      2. Execute the following command to create Secret:

        kubectl apply -f secret.yaml -n ${namespace}
    • Grant permissions by associating IAM with Pod or with ServiceAccount

      If you use Amazon S3 as the storage, you can grant permissions by associating IAM with the Pod or with the ServiceAccount. In this case, access_key_id and secret_access_key in the [s3] section can be left empty.

      1. Save the following configurations as secret.yaml.

        apiVersion: v1
        kind: Secret
        metadata:
          name: cloud-storage-secret
        type: Opaque
        stringData:
          rclone.conf: |
            [s3]
            type = s3
            provider = AWS
            env_auth = true
            access_key_id =
            secret_access_key =
            region = us-east-1
      2. Execute the following command to create Secret:

        kubectl apply -f secret.yaml -n ${namespace}
  2. Configure the dataSource field. For example:

    dataSource:
      remote:
        rcloneImage: rclone/rclone:1.55.1
        storageClassName: local-storage
        storage: 100Gi
        secretName: cloud-storage-secret
        path: s3:bench-data-us/sysbench/sbtest_16_1e7.tar.gz
        # directory: s3:bench-data-us

    The descriptions of the related fields are as follows:

    • dataSource.remote.storageClassName: the name of the StorageClass used to create the PV.
    • dataSource.remote.secretName: the name of the Secret created in the previous step.
    • dataSource.remote.path: If the backup data is packaged as a tarball file, use this field to indicate the path to the tarball file.
    • dataSource.remote.directory: If the backup data is in a directory, use this field to specify the path to the directory.
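The remote name before the colon in path or directory must match a section defined in the rclone.conf of the Secret created earlier. For example, a sketch that uses the [gcs] remote from the sample configuration above (the bucket and object names are placeholders):

  dataSource:
    remote:
      storageClassName: local-storage
      storage: 100Gi
      secretName: cloud-storage-secret
      # Tarball stored in a GCS bucket; the `gcs` prefix matches the [gcs] section of rclone.conf
      path: gcs:${bucket}/backup.tar.gz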

Ad hoc

When restoring data from remote storage, sometimes the restore process is interrupted due to an exception. In such cases, if you do not want to repeatedly download the backup data from the network storage, you can use the ad hoc mode to directly restore the data that has already been downloaded and decompressed into the PV in the remote mode.

For example:

  dataSource:
    adhoc:
      pvcName: tidb-cluster-scheduled-backup
      backupName: scheduled-backup-20190822-041004

The descriptions of the related fields are as follows:

  • dataSource.adhoc.pvcName: the name of the PVC used when restoring data from remote storage. The PVC must be in the same namespace as tidb-lightning.
  • dataSource.adhoc.backupName: the name of the original backup data, such as backup-2020-12-17T10:12:51Z (it does not include the .tgz suffix of the compressed file name on the network storage).

Step 3. Deploy TiDB Lightning

The method of deploying TiDB Lightning varies depending on the permission granting method and the storage used.

  • For Local Mode, Ad hoc Mode, and Remote Mode (only when the remote mode meets one of the following three conditions: granting permissions with Amazon S3 AccessKey and SecretKey, using Ceph as the storage backend, or using GCS as the storage backend), run the following command to deploy TiDB Lightning.

    helm install ${release_name} pingcap/tidb-lightning --namespace=${namespace} --set failFast=true -f tidb-lightning-values.yaml --version=${chart_version}
  • For Remote Mode, if you grant permissions by associating Amazon S3 IAM with Pod, take the following steps:

    1. Create the IAM role:

      Create an IAM role for the account, and grant the required permission to the role. The IAM role requires the AmazonS3FullAccess permission because TiDB Lightning needs to access Amazon S3 storage.

    2. Modify tidb-lightning-values.yaml, and add the iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user annotation in the annotations field (a sketch of this fragment is shown after this list).

    3. Deploy TiDB Lightning:

      helm install ${release_name} pingcap/tidb-lightning --namespace=${namespace} --set failFast=true -f tidb-lightning-values.yaml --version=${chart_version}

      Note

      arn:aws:iam::123456789012:role/user is the IAM role created in Step 1.

  • For Remote Mode, if you grant permissions by associating Amazon S3 with ServiceAccount, take the following steps:

    1. Enable the IAM role for the service account on the cluster:

      To enable the IAM role permission on the EKS cluster, see AWS Documentation.

    2. Create the IAM role:

      Create an IAM role. Grant the AmazonS3FullAccess permission to the role, and edit Trust relationships of the role.

    3. Associate IAM with the ServiceAccount resources:

      kubectl annotate sa ${serviceaccount} -n ${namespace} eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/user
    4. Deploy TiDB Lightning:

      helm install ${release_name} pingcap/tidb-lightning --namespace=${namespace} --set-string failFast=true,serviceAccount=${serviceaccount} -f tidb-lightning-values.yaml --version=${chart_version}

      Note

      arn:aws:iam::123456789012:role/user is the IAM role created in Step 2. ${serviceaccount} is the ServiceAccount used by TiDB Lightning. The default value is default.
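For reference, the annotation added in step 2 of the IAM-with-Pod flow above corresponds to a fragment like the following sketch in tidb-lightning-values.yaml (this assumes the chart applies the annotations field to the tidb-lightning Pod):

  annotations:
    iam.amazonaws.com/role: arn:aws:iam::123456789012:role/user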

Destroy TiDB Lightning

Currently, TiDB Lightning only supports restoring data offline. After the restore, if the TiDB cluster needs to provide service for external applications, you can destroy TiDB Lightning to save cost.

To destroy tidb-lightning, execute the following command:

  helm uninstall ${release_name} -n ${namespace}

Troubleshoot TiDB Lightning

When TiDB Lightning fails to restore data, you cannot simply restart it; manual intervention is required. Therefore, the restart policy of the TiDB Lightning Job is set to Never.

Note

If you have not configured TiDB Lightning to persist the checkpoint information in the target TiDB cluster, another MySQL-compatible database, or a shared storage directory, then after a restore failure, you need to first delete the data already restored to the target cluster. After that, deploy tidb-lightning again and retry the data restore.

If TiDB Lightning fails to restore data, and you have configured TiDB Lightning to persist the checkpoint information in the target TiDB cluster, another MySQL-compatible database, or a shared storage directory, follow the steps below for manual intervention:

  1. View the log by executing the following command:

    kubectl logs -n ${namespace} ${pod_name}
    • If you restore data using the remote data source, and the error occurs when TiDB Lightning downloads data from remote storage:

      1. Address the problem according to the log.
      2. Deploy tidb-lightning again and retry the data restore.
    • For other cases, refer to the following steps.
  2. Refer to TiDB Lightning Troubleshooting and learn the solutions to different issues.

  3. Address the issues accordingly:

    • If tidb-lightning-ctl is required:

      1. Configure dataSource in values.yaml. Make sure the new Job uses the data source and checkpoint information of the failed Job:

        • In the local or ad hoc mode, you do not need to modify dataSource.
        • In the remote mode, modify dataSource to the ad hoc mode. dataSource.adhoc.pvcName is the PVC name created by the original Helm chart. dataSource.adhoc.backupName is the backup name of the data to be restored.
      2. Modify failFast in values.yaml to false, and create a Job used for tidb-lightning-ctl (one way to recreate the Job is sketched at the end of this section).

        • Based on the checkpoint information, TiDB Lightning checks whether the last data restore encountered an error. If yes, TiDB Lightning pauses the restore automatically.
        • TiDB Lightning uses the checkpoint information to avoid repeatedly restoring the same data. Therefore, creating the Job does not affect data correctness.
      3. After the Pod corresponding to the new Job is running, view the log by running kubectl logs -n ${namespace} ${pod_name} and confirm that tidb-lightning in the new Job has stopped the data restore. If the log contains either of the following messages, the data restore has stopped:

        • tidb lightning encountered error
        • tidb lightning exit
      4. Enter the container by running kubectl exec -it -n ${namespace} ${pod_name} -- sh.

      5. Obtain the starting script by running cat /proc/1/cmdline.

      6. Get the command-line parameters from the starting script. Refer to TiDB Lightning Troubleshooting and troubleshoot using tidb-lightning-ctl.

      7. After the troubleshooting, modify failFast in values.yaml to true and create a new Job to resume data restore.

    • If tidb-lightning-ctl is not required:

      1. Troubleshoot TiDB Lightning.

      2. Configure dataSource in values.yaml. Make sure the new Job uses the data source and checkpoint information of the failed Job:

        • In the local or ad hoc mode, you do not need to modify dataSource.
        • In the remote mode, modify dataSource to the ad hoc mode. dataSource.adhoc.pvcName is the PVC name created by the original Helm chart. dataSource.adhoc.backupName is the backup name of the data to be restored.
      3. Create a new Job using the modified values.yaml file and resume data restore.
  4. After the troubleshooting and the data restore are completed, delete the Jobs used for the data restore and the troubleshooting.
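As mentioned in the troubleshooting steps above, Kubernetes Jobs cannot be modified in place, so one way to create a new tidb-lightning Job after changing values.yaml is to delete the existing Job and upgrade the Helm release. The following is only a sketch; the Job name ${release_name}-tidb-lightning is an assumption about the chart's naming and might differ in your deployment:

  kubectl delete job -n ${namespace} ${release_name}-tidb-lightning
  helm upgrade ${release_name} pingcap/tidb-lightning --namespace=${namespace} -f tidb-lightning-values.yaml --version=${chart_version}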