Advanced migration options

You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance.

MTC terminology

Table 1. MTC terminology

Source cluster

Cluster from which the applications are migrated.

Destination cluster[1]

Cluster to which the applications are migrated.

Replication repository

Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration.

The replication repository must be accessible to all clusters.

Host cluster

Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required.

The host cluster does not require an exposed registry route for direct image migration.

Remote cluster

A remote cluster is usually the source cluster but this is not required.

A remote cluster requires a Secret custom resource that contains the migration-controller service account token.

A remote cluster requires an exposed secure registry route for direct image migration.

Indirect migration

Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster.

Direct volume migration

Persistent volumes are copied directly from the source cluster to the destination cluster.

Direct image migration

Images are copied directly from the source cluster to the destination cluster.

Stage migration

Data is copied to the destination cluster without stopping the application.

Running a stage migration multiple times reduces the duration of the cutover migration.

Cutover migration

The application is stopped on the source cluster and its resources are migrated to the destination cluster.

State migration

Application state is migrated by copying specific persistent volume claims and Kubernetes objects to the destination cluster.

Rollback migration

Rollback migration rolls back a completed migration.

1 Called the target cluster in the MTC web console.

Migrating applications by using the CLI

You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration.

Migration prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

Direct image migration

  • You must ensure that the secure internal registry of the source cluster is exposed.

  • You must create a route to the exposed registry.

Direct volume migration

  • If your clusters use proxies, you must configure a Stunnel TCP proxy.

Internal images

  • If your application uses internal images from the openshift namespace, you must ensure that the required versions of the images are present on the target cluster.

    You can manually update an image stream tag in order to use a deprecated OKD 3 image on an OKD 4.6 cluster.
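
    For example, the following oc tag command is a minimal sketch of such an update; the registry path, image name, and tag are placeholders rather than values taken from this procedure:

    $ oc tag <registry>/<image>:<tag> <image_stream>:<tag> -n openshift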

Clusters

  • The source cluster must be upgraded to the latest MTC z-stream release.

  • The MTC version must be the same on all clusters.

Network

  • The clusters have unrestricted network access to each other and to the replication repository.

  • If you copy the persistent volumes with move, the clusters must have unrestricted network access to the remote volumes.

  • You must enable the following ports on an OKD 3 cluster:

    • 8443 (API server)

    • 443 (routes)

    • 53 (DNS)

  • You must enable the following ports on an OKD 4 cluster:

    • 6443 (API server)

    • 443 (routes)

    • 53 (DNS)

  • You must enable port 443 on the replication repository if you are using TLS.

Persistent volumes (PVs)

  • The PVs must be valid.

  • The PVs must be bound to persistent volume claims.

  • If you use snapshots to copy the PVs, the following additional prerequisites apply:

    • The cloud provider must support snapshots.

    • The PVs must have the same cloud provider.

    • The PVs must be located in the same geographic region.

    • The PVs must have the same storage class.

Creating a registry route for direct image migration

For direct image migration, you must create a route to the exposed internal registry on all remote clusters.

Prerequisites

  • The internal registry must be exposed to external traffic on all remote clusters.

    The OKD 4 registry is exposed by default.

    The OKD 3 registry must be exposed manually.

Procedure

  • To create a route to an OKD 3 registry, run the following command:

    $ oc create route passthrough --service=docker-registry -n default
  • To create a route to an OKD 4 registry, run the following command:

    $ oc create route passthrough --service=image-registry -n openshift-image-registry

Configuring proxies

For OKD 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object.

For OKD 4.2 to 4.6, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.

You must configure the proxies to allow the SPDY protocol and to forward the Upgrade HTTP header to the API server. Otherwise, an Upgrade request required error is displayed. The MigrationController CR uses SPDY to run commands within remote pods. The Upgrade HTTP header is required in order to open a websocket connection with the API server.

Direct volume migration

If you are performing a direct volume migration (DVM) from a source cluster behind a proxy, you must configure a Stunnel proxy. Stunnel creates a transparent tunnel between the source and target clusters for the TCP connection without changing the certificates.

DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

Procedure

  1. Get the MigrationController CR manifest:

    $ oc get migrationcontroller <migration_controller> -n openshift-migration
  2. Update the proxy parameters:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: <migration_controller>
      namespace: openshift-migration
    ...
    spec:
      stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> (1)
      httpProxy: http://<username>:<password>@<ip>:<port> (2)
      httpsProxy: http://<username>:<password>@<ip>:<port> (3)
      noProxy: example.com (4)
    1 Stunnel proxy URL for direct volume migration.
    2 Proxy URL for creating HTTP connections outside the cluster. The URL scheme must be http.
    3 Proxy URL for creating HTTPS connections outside the cluster. If this is not specified, then httpProxy is used for both HTTP and HTTPS connections.
    4 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying.

    Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

    This field is ignored if neither the httpProxy nor the httpsProxy field is set.

  3. Save the manifest as migration-controller.yaml.

  4. Apply the updated manifest:

    $ oc replace -f migration-controller.yaml -n openshift-migration

Migrating an application from the command line

You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API.

Procedure

  1. Create a MigCluster CR manifest for the host cluster:

    $ cat << EOF | oc apply -f -
    apiVersion: migration.openshift.io/v1alpha1
    kind: MigCluster
    metadata:
      name: <host_cluster>
      namespace: openshift-migration
    spec:
      isHostCluster: true
    EOF
  2. Create a Secret CR manifest for each remote cluster:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: <cluster_secret>
      namespace: openshift-config
    type: Opaque
    data:
      saToken: <sa_token> (1)
    EOF
    1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. You can obtain the token by running the following command:

    $ oc sa get-token migration-controller -n openshift-migration | base64 -w 0
  3. Create a MigCluster CR manifest for each remote cluster:

    $ cat << EOF | oc apply -f -
    apiVersion: migration.openshift.io/v1alpha1
    kind: MigCluster
    metadata:
      name: <remote_cluster> (1)
      namespace: openshift-migration
    spec:
      exposedRegistryPath: <exposed_registry_route> (2)
      insecure: false (3)
      isHostCluster: false
      serviceAccountSecretRef:
        name: <remote_cluster_secret> (4)
        namespace: openshift-config
      url: <remote_cluster_url> (5)
    EOF
    1 Specify the Cluster CR of the remote cluster.
    2 Optional: For direct image migration, specify the exposed registry route.
    3 SSL verification is enabled if false. CA certificates are not required or checked if true.
    4 Specify the Secret CR of the remote cluster.
    5 Specify the URL of the remote cluster.
  4. Verify that all clusters are in a Ready state:

    $ oc describe migcluster <cluster> -n openshift-migration
  5. Create a Secret CR manifest for the replication repository:

    $ cat << EOF | oc apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      namespace: openshift-config
      name: <migstorage_creds>
    type: Opaque
    data:
      aws-access-key-id: <key_id_base64> (1)
      aws-secret-access-key: <secret_key_base64> (2)
    EOF
    1 Specify the key ID in base64 format.
    2 Specify the secret key in base64 format.

    AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key:

    $ echo -n "<key>" | base64 -w 0 (1)
    1 Specify the key ID or the secret key. Both keys must be base64-encoded.
  6. Create a MigStorage CR manifest for the replication repository:

    $ cat << EOF | oc apply -f -
    apiVersion: migration.openshift.io/v1alpha1
    kind: MigStorage
    metadata:
      name: <migstorage>
      namespace: openshift-migration
    spec:
      backupStorageConfig:
        awsBucketName: <bucket> (1)
        credsSecretRef:
          name: <storage_secret> (2)
          namespace: openshift-config
      backupStorageProvider: <storage_provider> (3)
      volumeSnapshotConfig:
        credsSecretRef:
          name: <storage_secret> (4)
          namespace: openshift-config
      volumeSnapshotProvider: <storage_provider> (5)
    EOF
    1 Specify the bucket name.
    2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct.
    3 Specify the storage provider.
    4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct.
    5 Optional: If you are copying data by using snapshots, specify the storage provider.
  7. Verify that the MigStorage CR is in a Ready state:

    $ oc describe migstorage <migstorage>
  8. Create a MigPlan CR manifest:

    $ cat << EOF | oc apply -f -
    apiVersion: migration.openshift.io/v1alpha1
    kind: MigPlan
    metadata:
      name: <migplan>
      namespace: openshift-migration
    spec:
      destMigClusterRef:
        name: <host_cluster>
        namespace: openshift-migration
      indirectImageMigration: true (1)
      indirectVolumeMigration: true (2)
      migStorageRef:
        name: <migstorage> (3)
        namespace: openshift-migration
      namespaces:
      - <application_namespace> (4)
      srcMigClusterRef:
        name: <remote_cluster> (5)
        namespace: openshift-migration
    EOF
    1 Direct image migration is enabled if false.
    2 Direct volume migration is enabled if false.
    3 Specify the name of the MigStorage CR instance.
    4 Specify one or more source namespaces. By default, the destination namespace has the same name.
    5 Specify the name of the source cluster MigCluster instance.
  9. Verify that the MigPlan instance is in a Ready state:

    $ oc describe migplan <migplan> -n openshift-migration
  10. Create a MigMigration CR manifest to start the migration defined in the MigPlan instance:

    $ cat << EOF | oc apply -f -
    apiVersion: migration.openshift.io/v1alpha1
    kind: MigMigration
    metadata:
      name: <migmigration>
      namespace: openshift-migration
    spec:
      migPlanRef:
        name: <migplan> (1)
        namespace: openshift-migration
      quiescePods: true (2)
      stage: false (3)
      rollback: false (4)
    EOF
    1 Specify the MigPlan CR name.
    2 The pods on the source cluster are stopped before migration if true.
    3 A stage migration, which copies most of the data without stopping the application, is performed if true.
    4 A completed migration is rolled back if true.
  11. Verify the migration by viewing the MigMigration CR progress:

    $ oc describe migmigration <migmigration> -n openshift-migration

    The output resembles the following:

    Example output

    Name:         c8b034c0-6567-11eb-9a4f-0bc004db0fbc
    Namespace:    openshift-migration
    Labels:       migration.openshift.io/migplan-name=django
    Annotations:  openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c
    API Version:  migration.openshift.io/v1alpha1
    Kind:         MigMigration
    ...
    Spec:
      Mig Plan Ref:
        Name:       migplan
        Namespace:  openshift-migration
      Stage:        false
    Status:
      Conditions:
        Category:              Advisory
        Last Transition Time:  2021-02-02T15:04:09Z
        Message:               Step: 19/47
        Reason:                InitialBackupCreated
        Status:                True
        Type:                  Running
        Category:              Required
        Last Transition Time:  2021-02-02T15:03:19Z
        Message:               The migration is ready.
        Status:                True
        Type:                  Ready
        Category:              Required
        Durable:               true
        Last Transition Time:  2021-02-02T15:04:05Z
        Message:               The migration registries are healthy.
        Status:                True
        Type:                  RegistriesHealthy
      Itinerary:               Final
      Observed Digest:         7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5
      Phase:                   InitialBackupCreated
      Pipeline:
        Completed:  2021-02-02T15:04:07Z
        Message:    Completed
        Name:       Prepare
        Started:    2021-02-02T15:03:18Z
        Message:    Waiting for initial Velero backup to complete.
        Name:       Backup
        Phase:      InitialBackupCreated
        Progress:
          Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s)
        Started:  2021-02-02T15:04:07Z
        Message:  Not started
        Name:     StageBackup
        Message:  Not started
        Name:     StageRestore
        Message:  Not started
        Name:     DirectImage
        Message:  Not started
        Name:     DirectVolume
        Message:  Not started
        Name:     Restore
        Message:  Not started
        Name:     Cleanup
      Start Timestamp:  2021-02-02T15:03:18Z
    Events:
      Type    Reason   Age                 From                     Message
      ----    ------   ----                ----                     -------
      Normal  Running  57s                 migmigration_controller  Step: 2/47
      Normal  Running  57s                 migmigration_controller  Step: 3/47
      Normal  Running  57s (x3 over 57s)   migmigration_controller  Step: 4/47
      Normal  Running  54s                 migmigration_controller  Step: 5/47
      Normal  Running  54s                 migmigration_controller  Step: 6/47
      Normal  Running  52s (x2 over 53s)   migmigration_controller  Step: 7/47
      Normal  Running  51s (x2 over 51s)   migmigration_controller  Step: 8/47
      Normal  Ready    50s (x12 over 57s)  migmigration_controller  The migration is ready.
      Normal  Running  50s                 migmigration_controller  Step: 9/47
      Normal  Running  50s                 migmigration_controller  Step: 10/47

Migrating an application’s state

You can perform repeatable, state-only migrations by selecting specific persistent volume claims (PVCs). During a state migration, Migration Toolkit for Containers (MTC) copies persistent volume (PV) data to the target cluster. PV references are not moved. The application pods continue to run on the source cluster.

If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC.

You can use state migration to migrate namespaces within the same cluster.

Do not use state migration to migrate namespaces between clusters. Use stage or cutover migration instead.

You can migrate PV data from the source cluster to PVCs that are already provisioned in the target cluster by mapping PVCs in the MigPlan CR. This ensures that the target PVCs of migrated applications are synchronized with the source PVCs.

You can perform a one-time migration of Kubernetes objects that store application state.

Excluding persistent volume claims

You can exclude persistent volume claims (PVCs) by adding the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered.

Prerequisites

  • MigPlan CR with discovered PVs.

Procedure

  • Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set its value to skip:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigPlan
    metadata:
      name: <migplan>
      namespace: openshift-migration
    spec:
    ...
      persistentVolumes:
      - capacity: 10Gi
        name: <pv_name>
        pvc:
        ...
        selection:
          action: skip (1)
    1 skip excludes the PVC from the migration plan.

Mapping persistent volume claims

You can map persistent volume claims (PVCs) by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered.

Prerequisites

  • MigPlan CR with discovered PVs.

Procedure

  • Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigPlan
    metadata:
      name: <migplan>
      namespace: openshift-migration
    spec:
    ...
      persistentVolumes:
      - capacity: 10Gi
        name: <pv_name>
        pvc:
          name: <source_pvc>:<destination_pvc> (1)
    1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration.

Migrating Kubernetes objects

You can perform a one-time migration of Kubernetes objects that constitute an application’s state.

After migration, the closed parameter of the MigPlan CR is set to true. You cannot create another MigMigration CR for this MigPlan CR.

You add Kubernetes objects to the MigPlan CR by using the following options:

  • Adding the Kubernetes objects to the includedResources section.

  • Using the labelSelector parameter to reference labeled Kubernetes objects.

If you set both parameters, the label is used to filter the included resources, for example, to migrate Secret and ConfigMap resources with the label app: frontend.

Procedure

  • Update the MigPlan CR:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigPlan
    metadata:
      name: <migplan>
      namespace: openshift-migration
    spec:
      includedResources: (1)
      - kind: <Secret>
        group: ""
      - kind: <ConfigMap>
        group: ""
      ...
      labelSelector:
        matchLabels:
          <app: frontend> (2)
    1 Specify the kind and group of each resource.
    2 Specify the label of the resources to migrate.

Migration hooks

You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration.

A migration hook runs on a source or a target cluster at one of the following migration steps:

  • PreBackup: Before resources are backed up on the source cluster.

  • PostBackup: After resources are backed up on the source cluster.

  • PreRestore: Before resources are restored on the target cluster.

  • PostRestore: After resources are restored on the target cluster.

You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container.

Ansible playbook

The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed.

The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.6. This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary.

Custom hook container

You can use a custom hook container instead of the default Ansible image.
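
The following MigHook custom resource is a minimal sketch of a hook that uses a custom container image instead of the default Ansible image. The field names follow the MigHook API, but the values are placeholders and you should verify the schema against your MTC version:

  apiVersion: migration.openshift.io/v1alpha1
  kind: MigHook
  metadata:
    name: <hook>
    namespace: openshift-migration
  spec:
    activeDeadlineSeconds: 1800   # maximum time, in seconds, that the hook is allowed to run
    custom: true                  # true indicates a custom hook container instead of the default Ansible image
    image: <custom_hook_image>    # custom hook container image to run
    targetCluster: source         # cluster on which the hook runs: source or destination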

Writing an Ansible playbook for a migration hook

You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest.

The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster.
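
The following MigPlan excerpt is a minimal sketch of the spec.hooks parameters that reference such a hook. The field names follow the MigPlan hooks API, but the hook name, namespaces, and service account are placeholders that you replace with your own values:

  apiVersion: migration.openshift.io/v1alpha1
  kind: MigPlan
  metadata:
    name: <migplan>
    namespace: openshift-migration
  spec:
    hooks:
    - executionNamespace: <hook_namespace>   # namespace in which the hook job runs
      phase: PreBackup                       # PreBackup, PostBackup, PreRestore, or PostRestore
      reference:
        name: <mighook>                      # name of the MigHook CR that contains the playbook
        namespace: openshift-migration
      serviceAccount: <service_account>      # service account used by the hook job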

Ansible modules

You can use the Ansible shell module to run oc commands.

Example shell module

  - hosts: localhost
    gather_facts: false
    tasks:
    - name: get pod name
      shell: oc get po --all-namespaces

You can use kubernetes.core modules, such as k8s_info, to interact with Kubernetes resources.

Example k8s_info module

  - hosts: localhost
    gather_facts: false
    tasks:
    - name: Get pod
      k8s_info:
        kind: pods
        api: v1
        namespace: openshift-migration
        name: "{{ lookup( 'env', 'HOSTNAME') }}"
      register: pods
    - name: Print pod name
      debug:
        msg: "{{ pods.resources[0].metadata.name }}"

You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container.

Example fail module

  - hosts: localhost
    gather_facts: false
    tasks:
    - name: Set a boolean
      set_fact:
        do_fail: true
    - name: "fail"
      fail:
        msg: "Cause a failure"
      when: do_fail

Environment variables

The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plug-in.

Example environment variables

  - hosts: localhost
    gather_facts: false
    tasks:
    - set_fact:
        namespaces: "{{ (lookup( 'env', 'migration_namespaces')).split(',') }}"
    - debug:
        msg: "{{ item }}"
      with_items: "{{ namespaces }}"
    - debug:
        msg: "{{ lookup( 'env', 'migplan_name') }}"

Configuration options

You can configure the following options for the MigPlan and MigrationController custom resources (CRs) to perform large-scale migrations and to improve performance.

Increasing limits for large migrations

You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC).

You must test these changes before you perform a migration in a production environment.

Procedure

  1. Edit the MigrationController custom resource (CR) manifest:

    $ oc edit migrationcontroller -n openshift-migration
  2. Update the following parameters:

    ...
    mig_controller_limits_cpu: "1" (1)
    mig_controller_limits_memory: "10Gi" (2)
    ...
    mig_controller_requests_cpu: "100m" (3)
    mig_controller_requests_memory: "350Mi" (4)
    ...
    mig_pv_limit: 100 (5)
    mig_pod_limit: 100 (6)
    mig_namespace_limit: 10 (7)
    ...
    1 Specifies the number of CPUs available to the MigrationController CR.
    2 Specifies the amount of memory available to the MigrationController CR.
    3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3).
    4 Specifies the amount of memory available for MigrationController CR requests.
    5 Specifies the number of persistent volumes that can be migrated.
    6 Specifies the number of pods that can be migrated.
    7 Specifies the number of namespaces that can be migrated.
  3. Create a migration plan that uses the updated parameters to verify the changes.

    If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan.

Excluding resources from a migration plan

You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan in order to reduce the resource load for migration or to migrate images or PVs with a different tool.

By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. These resources are part of the service catalog API group and the OLM API group, neither of which is supported for migration at this time.

Procedure

  1. Edit the MigrationController custom resource manifest:

    $ oc edit migrationcontroller <migration_controller> -n openshift-migration
  2. Update the spec section by adding a parameter to exclude specific resources or by adding a resource to the excluded_resources parameter if it does not have its own exclusion parameter:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: migration-controller
      namespace: openshift-migration
    spec:
      disable_image_migration: true (1)
      disable_pv_migration: true (2)
      ...
      excluded_resources: (3)
      - imagetags
      - templateinstances
      - clusterserviceversions
      - packagemanifests
      - subscriptions
      - servicebrokers
      - servicebindings
      - serviceclasses
      - serviceinstances
      - serviceplans
      - operatorgroups
      - events
      - events.events.k8s.io
    1 Add disable_image_migration: true to exclude image streams from the migration. Do not edit the excluded_resources parameter. imagestreams is added to excluded_resources when the MigrationController pod restarts.
    2 Add disable_pv_migration: true to exclude PVs from the migration plan. Do not edit the excluded_resources parameter. persistentvolumes and persistentvolumeclaims are added to excluded_resources when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan.
    3 You can add OKD resources to the excluded_resources list. Do not delete the default excluded resources. These resources are problematic to migrate and must be excluded.
  3. Wait two minutes for the MigrationController pod to restart so that the changes are applied.

  4. Verify that the resource is excluded:

    $ oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1

    The output contains the excluded resources:

    Example output

    - name: EXCLUDED_RESOURCES
      value:
        imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims

Enabling persistent volume resizing for direct volume migration

You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster.

When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster.

The pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3%. This means that PV resizing occurs when the disk usage of a PV is more than 97%. You can increase this threshold so that PV resizing occurs at a lower disk usage level.
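
For example, with the default threshold of 3%, a 10 Gi volume that reports 9.8 Gi in use (98% usage, less than 3% free) triggers resizing, whereas a volume that reports 9.5 Gi in use (95% usage) does not.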

PVC capacity is calculated according to the following criteria:

  • If the requested storage capacity (spec.resources.requests.storage) of the PVC is not equal to its actual provisioned capacity (status.capacity.storage), the greater value is used.

  • If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used.

Prerequisites

  • The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands.

Procedure

  1. Log in to the host cluster.

  2. Enable PV resizing by patching the MigrationController CR:

    $ oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ (1)
      --type='merge' -n openshift-migration
    1 Set the value to false to disable PV resizing.
  3. Optional: Update the pv_resizing_threshold parameter to increase the threshold:

    $ oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ (1)
      --type='merge' -n openshift-migration
    1 The default value is 3.

    When the threshold is exceeded, the following status message is displayed in the MigPlan CR status:

    status:
      conditions:
      ...
      - category: Warn
        durable: true
        lastTransitionTime: "2021-06-17T08:57:01Z"
        message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]'
        reason: Done
        status: "False"
        type: PvCapacityAdjustmentRequired

    For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. (BZ#1973148)

Enabling cached Kubernetes clients

You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is observed when migrating between clusters in different regions or with significant network latency.

However, delegated tasks, such as Rsync backup for direct volume migration or Velero backup and restore, do not show improved performance with cached clients.

Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates.

You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients.

Procedure

  1. Enable cached clients by running the following command:

    $ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
      '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]'
  2. Optional: Increase the MigrationController CR memory limits by running the following command:

    $ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
      '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]'
  3. Optional: Increase the MigrationController CR memory requests by running the following command:

    $ oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \
      '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]'