Deploy DM in Kubernetes

TiDB Data Migration (DM) is an integrated data migration task management platform that supports full data migration and incremental data replication from MySQL/MariaDB into TiDB. This document describes how to deploy DM in Kubernetes using TiDB Operator and how to migrate MySQL data to a TiDB cluster using DM.

Prerequisites

Note

Make sure that the TiDB Operator version is v1.2.0 or later.
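
If you are not sure which TiDB Operator version is deployed, you can check the image tag of the tidb-controller-manager Deployment. The following sketch assumes TiDB Operator is installed in the tidb-admin namespace; adjust the namespace to match your installation:

    kubectl -n tidb-admin get deployment tidb-controller-manager \
      -o jsonpath='{.spec.template.spec.containers[0].image}'

The returned image tag (for example, pingcap/tidb-operator:v1.2.x) indicates the version.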

Configure DM deployment

To configure the DM deployment, you need to configure the DMCluster Custom Resource (CR). For the complete configurations of the DMCluster CR, refer to the DMCluster example and API documentation. Note that you need to choose the example and API of the current TiDB Operator version.

Cluster name

Configure the cluster name by changing the metadata.name in the DMCluster CR.
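
For example, a minimal sketch of the metadata section for a cluster named basic-dm (basic-dm and the tidb-cluster namespace are placeholder values):

    apiVersion: pingcap.com/v1alpha1
    kind: DMCluster
    metadata:
      name: basic-dm           # cluster name; resources such as the basic-dm-dm-master service are derived from it
      namespace: tidb-cluster  # placeholder namespace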

Version

Usually, components in a cluster use the same version. It is recommended to configure only spec.<master/worker>.baseImage and spec.version. If you need to deploy different versions for different components, configure spec.<master/worker>.version.

The formats of the related parameters are as follows:

  • spec.version: the format is imageTag, such as v5.4.0.
  • spec.<master/worker>.baseImage: the format is imageName, such as pingcap/dm.
  • spec.<master/worker>.version: the format is imageTag, such as v5.4.0.

TiDB Operator only supports deploying DM 2.0 and later versions.
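
Putting the formats above together, a hedged sketch of the version-related fields might look like the following (the values, such as v5.4.0, are illustrative):

    spec:
      version: v5.4.0            # imageTag shared by all components
      master:
        baseImage: pingcap/dm    # imageName
      worker:
        baseImage: pingcap/dm
        # version: v5.4.0        # per-component imageTag; set only if this component needs a different version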

Cluster

Configure DM-master

DM-master is an indispensable component of the DM cluster. You need to deploy at least three DM-master Pods if you want to achieve high availability.

You can configure DM-master parameters by spec.master.config in the DMCluster CR. For complete DM-master configuration parameters, refer to DM-master Configuration File.

    apiVersion: pingcap.com/v1alpha1
    kind: DMCluster
    metadata:
      name: ${dm_cluster_name}
      namespace: ${namespace}
    spec:
      version: v5.4.0
      configUpdateStrategy: RollingUpdate
      pvReclaimPolicy: Retain
      discovery: {}
      master:
        baseImage: pingcap/dm
        maxFailoverCount: 0
        imagePullPolicy: IfNotPresent
        service:
          type: NodePort
          # Configures masterNodePort when you need to expose the DM-master service to a fixed NodePort
          # masterNodePort: 30020
        replicas: 1
        storageSize: "10Gi"
        requests:
          cpu: 1
        config: |
          rpc-timeout = "40s"
Configure DM-worker

You can configure DM-worker parameters by spec.worker.config in the DMCluster CR. For complete DM-worker configuration parameters, refer to DM-worker Configuration File.

    apiVersion: pingcap.com/v1alpha1
    kind: DMCluster
    metadata:
      name: ${dm_cluster_name}
      namespace: ${namespace}
    spec:
      ...
      worker:
        baseImage: pingcap/dm
        maxFailoverCount: 0
        replicas: 1
        storageSize: "100Gi"
        requests:
          cpu: 1
        config: |
          keepalive-ttl = 15

Topology Spread Constraint

By configuring topologySpreadConstraints, you can make pods evenly spread in different topologies. For instructions about configuring topologySpreadConstraints, see Pod Topology Spread Constraints.

To use topologySpreadConstraints, you must meet the following conditions:

  • Your Kubernetes cluster uses default-scheduler instead of tidb-scheduler. For details, refer to tidb-scheduler and default-scheduler.
  • Your Kubernetes cluster enables the EvenPodsSpread feature gate. If the Kubernetes version in use is earlier than v1.16 or if the EvenPodsSpread feature gate is disabled, the configuration of topologySpreadConstraints does not take effect.

You can either configure topologySpreadConstraints at a cluster level (spec.topologySpreadConstraints) for all components or at a component level (such as spec.master.topologySpreadConstraints) for specific components.
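
As a sketch of where these fields live in the DMCluster CR (assuming the component-level field is available for DM-master and DM-worker in your TiDB Operator version):

    spec:
      # cluster level: applies to all components
      topologySpreadConstraints:
      - topologyKey: topology.kubernetes.io/zone
      master:
        # component level: applies to DM-master only
        topologySpreadConstraints:
        - topologyKey: kubernetes.io/hostname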

The following is an example configuration:

    topologySpreadConstraints:
    - topologyKey: kubernetes.io/hostname
    - topologyKey: topology.kubernetes.io/zone

The example configuration can make pods of the same component evenly spread on different zones and nodes.

Currently, topologySpreadConstraints only supports the configuration of the topologyKey field. In the pod spec, the above example configuration will be automatically expanded as follows:

    topologySpreadConstraints:
    - topologyKey: kubernetes.io/hostname
      maxSkew: 1
      whenUnsatisfiable: DoNotSchedule
      labelSelector: <object>
    - topologyKey: topology.kubernetes.io/zone
      maxSkew: 1
      whenUnsatisfiable: DoNotSchedule
      labelSelector: <object>

Deploy the DM cluster

After configuring the YAML file of the DM cluster in the above steps, execute the following command to deploy the DM cluster:

    kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}

If the server does not have access to an external network, you need to download the Docker image used by the DM cluster on a machine with network access, upload the image to the server, and then execute docker load to install the image on the server:

  1. Deploying a DM cluster requires the following Docker image (assuming the version of the DM cluster is v5.4.0):

         pingcap/dm:v5.4.0

  2. To download the image, execute the following commands:

         docker pull pingcap/dm:v5.4.0
         docker save -o dm-v5.4.0.tar pingcap/dm:v5.4.0

  3. Upload the Docker image to the server, and execute docker load to install the image on the server:

         docker load -i dm-v5.4.0.tar

After deploying the DM cluster, execute the following command to view the Pod status:

    kubectl get po -n ${namespace} -l app.kubernetes.io/instance=${dm_cluster_name}

You can use TiDB Operator to deploy and manage multiple DM clusters in a single Kubernetes cluster by repeating the above procedure and replacing ${dm_cluster_name} with a different name.

Different clusters can be deployed in the same namespace or in different namespaces, depending on your needs.
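
For example, a second cluster might differ from the first only in its metadata; dm-cluster-2 and dm-ns-2 below are placeholder names:

    apiVersion: pingcap.com/v1alpha1
    kind: DMCluster
    metadata:
      name: dm-cluster-2    # a different cluster name
      namespace: dm-ns-2    # the same or a different namespace
    spec:
      # ... same fields as in the first cluster ...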

Access the DM cluster in Kubernetes

To access DM-master from a Pod within the Kubernetes cluster, use the DM-master service domain name ${cluster_name}-dm-master.${namespace}.
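
For example, assuming dmctl is available in a Pod inside the cluster and DM-master listens on its default port 8261, you can reach it like this:

    dmctl --master-addr ${cluster_name}-dm-master.${namespace}:8261 list-member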

To access the DM cluster outside a Kubernetes cluster, expose the DM-master port by editing the spec.master.service field configuration in the DMCluster CR.

    spec:
      ...
      master:
        service:
          type: NodePort

You can then access the DM-master service via ${kubernetes_node_ip}:${node_port}.
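
If masterNodePort is not fixed, you can check which NodePort Kubernetes allocated by inspecting the DM-master service (named ${cluster_name}-dm-master, matching the domain name above):

    kubectl get svc ${cluster_name}-dm-master -n ${namespace}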

For more service exposure methods, refer to Access the TiDB Cluster.

What’s next