Installing the Migration Toolkit for Containers

You can install the Migration Toolkit for Containers (MTC) on OKD 4.

By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the MigrationController custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster.

After you have installed MTC, you must configure object storage to use as a replication repository.

To uninstall MTC, see Uninstalling MTC and deleting resources.

Compatibility guidelines

You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OKD version.

Definitions

legacy platform

OKD 4.5 and earlier.

modern platform

OKD 4.6 and later.

legacy operator

The MTC Operator designed for legacy platforms.

modern operator

The MTC Operator designed for modern platforms.

control cluster

The cluster that runs the MTC controller and GUI.

remote cluster

A source or destination cluster for a migration that runs Velero. The control cluster communicates with remote clusters through the Velero API to drive migrations.

Table 1. MTC compatibility: Migrating from a legacy platform

Latest MTC version

  • OKD 4.5 or earlier: MTC 1.7.z. Legacy 1.7 operator: install manually with the operator.yml file. This cluster cannot be the control cluster.

  • OKD 4.6 or later: MTC 1.7.z. Install with OLM, release channel release-v1.7.

Stable MTC version

  • OKD 4.5 or earlier: MTC 1.5. Legacy 1.5 operator: install manually with the operator.yml file.

  • OKD 4.6 or later: MTC 1.6.z. Install with OLM, release channel release-v1.6.

Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when you migrate from an OKD 3.11 cluster on premises to a modern OKD cluster in the cloud, the modern cluster might be unable to connect to the OKD 3.11 cluster.

With MTC 1.7, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command.
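A hedged sketch of the command, run from a host that has kubeconfig contexts for both clusters. The context names and namespace are placeholders:

```
$ crane tunnel-api --namespace <namespace> \
      --destination-context <destination-cluster-context> \
      --source-context <source-cluster-context>
```

The command establishes a tunnel so that the restricted cluster can still be driven through the control cluster.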

With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster.

Installing the legacy Migration Toolkit for Containers Operator on OKD 4.2 to 4.5

You can install the legacy Migration Toolkit for Containers Operator manually on OKD versions 4.2 to 4.5.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

  • You must have access to registry.redhat.io.

  • You must have podman installed.

Procedure

  1. Log in to registry.redhat.io with your Red Hat Customer Portal credentials:

    $ sudo podman login registry.redhat.io
  2. Download the operator.yml file:

    $ sudo podman cp $(sudo podman create \
      registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.5.3):/operator.yml ./
  3. Download the controller.yml file:

    $ sudo podman cp $(sudo podman create \
      registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.5.3):/controller.yml ./
  4. Log in to your OKD 3 cluster.

  5. Verify that the cluster can authenticate with registry.redhat.io:

    $ oc run test --image registry.redhat.io/ubi8 --command -- sleep infinity
  6. Create the Migration Toolkit for Containers Operator object:

    $ oc create -f operator.yml

    Example output

    namespace/openshift-migration created
    rolebinding.rbac.authorization.k8s.io/system:deployers created
    serviceaccount/migration-operator created
    customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
    role.rbac.authorization.k8s.io/migration-operator created
    rolebinding.rbac.authorization.k8s.io/migration-operator created
    clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
    deployment.apps/migration-operator created
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists (1)
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists
    (1) You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OKD 3 that are provided in later releases.
  7. Create the MigrationController object:

    $ oc create -f controller.yml
  8. Verify that the MTC pods are running:

    $ oc get pods -n openshift-migration

Installing the Migration Toolkit for Containers Operator on OKD 4.10

You install the Migration Toolkit for Containers Operator on OKD 4.10 by using the Operator Lifecycle Manager.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

Procedure

  1. In the OKD web console, click Operators → OperatorHub.

  2. Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.

  3. Select the Migration Toolkit for Containers Operator and click Install.

  4. Click Install.

    On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.

  5. Click Migration Toolkit for Containers Operator.

  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.

  7. Click Create.

  8. Click Workloads → Pods to verify that the MTC pods are running.

Configuring proxies

For OKD 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object.

For OKD 4.2 to 4.10, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.

You must configure the proxies to allow the SPDY protocol and to forward the Upgrade HTTP header to the API server. Otherwise, an Upgrade request required error is displayed. The MigrationController CR uses SPDY to run commands within remote pods. The Upgrade HTTP header is required to open a WebSocket connection with the API server.
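One way to check that a proxy forwards the Upgrade header is to run a command that requires SPDY, such as oc exec, through the proxy. If the proxy strips the header, the command fails with the Upgrade request required error. A sketch, in which the pod name is a placeholder:

```
$ oc exec -n openshift-migration <migration_controller_pod> -- echo test
```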

Direct volume migration

If you are performing a direct volume migration (DVM) from a source cluster behind a proxy, you must configure a Stunnel proxy. Stunnel creates a transparent tunnel between the source and target clusters for the TCP connection without changing the certificates.

DVM supports only one proxy. If the target cluster is also behind a proxy, the source cluster cannot access the route of the target cluster.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

Procedure

  1. Get the MigrationController CR manifest:

    $ oc get migrationcontroller <migration_controller> -n openshift-migration
  2. Update the proxy parameters:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: <migration_controller>
      namespace: openshift-migration
    ...
    spec:
      stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> (1)
      httpProxy: http://<username>:<password>@<ip>:<port> (2)
      httpsProxy: http://<username>:<password>@<ip>:<port> (3)
      noProxy: example.com (4)
    (1) Stunnel proxy URL for direct volume migration.
    (2) Proxy URL for creating HTTP connections outside the cluster. The URL scheme must be http.
    (3) Proxy URL for creating HTTPS connections outside the cluster. If this is not specified, then httpProxy is used for both HTTP and HTTPS connections.
    (4) Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying.

    Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

    This field is ignored if neither the httpProxy nor the httpsProxy field is set.

  3. Save the manifest as migration-controller.yaml.

  4. Apply the updated manifest:

    $ oc replace -f migration-controller.yaml -n openshift-migration

For more information, see Configuring the cluster-wide proxy.
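The noProxy matching rules described above can be sketched as a small shell function. This is only an illustration of the documented semantics, not MTC or OKD code, and the function name is hypothetical:

```shell
# matches_no_proxy HOST ENTRY
# Returns success if the noProxy entry ENTRY would exclude HOST from proxying.
matches_no_proxy() {
    host=$1
    entry=$2
    case $entry in
        '*')                               # * bypasses the proxy for all destinations
            return 0 ;;
        .*)                                # leading dot: subdomains only
            case $host in
                *"$entry") return 0 ;;     # .y.com matches x.y.com ...
            esac
            return 1 ;;                    # ... but not y.com itself
        *)                                 # bare entry: the domain itself
            [ "$host" = "$entry" ] ;;
    esac
}
```

For example, matches_no_proxy x.y.com .y.com succeeds, while matches_no_proxy y.com .y.com fails.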

Configuring a replication repository

You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. Select a method that is suited for your environment and is supported by your storage provider.

MTC supports the storage providers described in the following sections: Multicloud Object Gateway, Amazon Web Services S3, Google Cloud Platform, and Microsoft Azure Blob storage.

Prerequisites

  • All clusters must have uninterrupted network access to the replication repository.

  • If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.

Retrieving Multicloud Object Gateway credentials

You must retrieve the Multicloud Object Gateway (MCG) credentials and S3 endpoint in order to configure MCG as a replication repository for the Migration Toolkit for Containers (MTC).

MCG is a component of OpenShift Data Foundation.

Procedure

  1. Obtain the S3 endpoint, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource.

    You use these credentials to add MCG as a replication repository.
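A hedged sketch of the command, assuming the default OpenShift Data Foundation namespace and NooBaa resource name:

```
$ oc describe noobaa -n openshift-storage
```

The S3 endpoint and the secret that contains the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY values appear in, or are referenced by, the command output.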

Configuring Amazon Web Services

You configure Amazon Web Services (AWS) S3 object storage as a replication repository for the Migration Toolkit for Containers (MTC).

Prerequisites

  • You must have the AWS CLI installed.

  • The AWS S3 storage bucket must be accessible to the source and target clusters.

  • If you are using the snapshot copy method:

    • You must have access to EC2 Elastic Block Storage (EBS).

    • The source and target clusters must be in the same region.

    • The source and target clusters must have the same storage class.

    • The storage class must be compatible with snapshots.

Procedure

  1. Set the BUCKET variable:

    $ BUCKET=<your_bucket>
  2. Set the REGION variable:

    $ REGION=<your_region>
  3. Create an AWS S3 bucket:

    $ aws s3api create-bucket \
      --bucket $BUCKET \
      --region $REGION \
      --create-bucket-configuration LocationConstraint=$REGION (1)
    (1) us-east-1 does not support a LocationConstraint. If your region is us-east-1, omit --create-bucket-configuration LocationConstraint=$REGION.
  4. Create an IAM user:

    $ aws iam create-user --user-name velero (1)
    (1) If you want to use Velero to back up multiple clusters with multiple S3 buckets, create a unique user name for each cluster.
  5. Create a velero-policy.json file:

    $ cat > velero-policy.json <<EOF
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "ec2:DescribeVolumes",
                    "ec2:DescribeSnapshots",
                    "ec2:CreateTags",
                    "ec2:CreateVolume",
                    "ec2:CreateSnapshot",
                    "ec2:DeleteSnapshot"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:PutObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}/*"
                ]
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::${BUCKET}"
                ]
            }
        ]
    }
    EOF
  6. Attach the policies to give the velero user the necessary permissions:

    $ aws iam put-user-policy \
      --user-name velero \
      --policy-name velero \
      --policy-document file://velero-policy.json
  7. Create an access key for the velero user:

    $ aws iam create-access-key --user-name velero

    Example output

    {
        "AccessKey": {
            "UserName": "velero",
            "Status": "Active",
            "CreateDate": "2017-07-31T22:24:41.576Z",
            "SecretAccessKey": <AWS_SECRET_ACCESS_KEY>,
            "AccessKeyId": <AWS_ACCESS_KEY_ID>
        }
    }

    Record the AWS_SECRET_ACCESS_KEY and the AWS_ACCESS_KEY_ID. You use the credentials to add AWS as a replication repository.
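Velero reads AWS credentials from an AWS-style credentials file. A minimal sketch, assuming the default profile, where the placeholders are the values recorded above:

```
[default]
aws_access_key_id=<AWS_ACCESS_KEY_ID>
aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
```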

Configuring Google Cloud Platform

You configure a Google Cloud Platform (GCP) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).

Prerequisites

  • You must have the gcloud and gsutil CLI tools installed. See the Google Cloud documentation for details.

  • The GCP storage bucket must be accessible to the source and target clusters.

  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.

    • The source and target clusters must have the same storage class.

    • The storage class must be compatible with snapshots.

Procedure

  1. Log in to GCP:

    $ gcloud auth login
  2. Set the BUCKET variable:

    $ BUCKET=<bucket> (1)
    (1) Specify your bucket name.
  3. Create the storage bucket:

    $ gsutil mb gs://$BUCKET/
  4. Set the PROJECT_ID variable to your active project:

    $ PROJECT_ID=$(gcloud config get-value project)
  5. Create a service account:

    $ gcloud iam service-accounts create velero \
      --display-name "Velero service account"
  6. List your service accounts:

    $ gcloud iam service-accounts list
  7. Set the SERVICE_ACCOUNT_EMAIL variable to the email address of the Velero service account:

    $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts list \
      --filter="displayName:Velero service account" \
      --format 'value(email)')
  8. Set the ROLE_PERMISSIONS variable to the permissions that the custom role requires:

    $ ROLE_PERMISSIONS=(
        compute.disks.get
        compute.disks.create
        compute.disks.createSnapshot
        compute.snapshots.get
        compute.snapshots.create
        compute.snapshots.useReadOnly
        compute.snapshots.delete
        compute.zones.get
    )
  9. Create the velero.server custom role:

    $ gcloud iam roles create velero.server \
      --project $PROJECT_ID \
      --title "Velero Server" \
      --permissions "$(IFS=","; echo "${ROLE_PERMISSIONS[*]}")"
  10. Add IAM policy binding to the project:

    $ gcloud projects add-iam-policy-binding $PROJECT_ID \
      --member serviceAccount:$SERVICE_ACCOUNT_EMAIL \
      --role projects/$PROJECT_ID/roles/velero.server
  11. Grant the service account objectAdmin access to the storage bucket:

    $ gsutil iam ch serviceAccount:$SERVICE_ACCOUNT_EMAIL:objectAdmin gs://${BUCKET}
  12. Save the IAM service account keys to the credentials-velero file in the current directory:

    $ gcloud iam service-accounts keys create credentials-velero \
      --iam-account $SERVICE_ACCOUNT_EMAIL

    You use the credentials-velero file to add GCP as a replication repository.

Configuring Microsoft Azure

You configure a Microsoft Azure Blob storage container as a replication repository for the Migration Toolkit for Containers (MTC).

Prerequisites

  • You must have the Azure CLI installed.

  • The Azure Blob storage container must be accessible to the source and target clusters.

  • If you are using the snapshot copy method:

    • The source and target clusters must be in the same region.

    • The source and target clusters must have the same storage class.

    • The storage class must be compatible with snapshots.

Procedure

  1. Log in to Azure:

    $ az login
  2. Set the AZURE_RESOURCE_GROUP variable:

    $ AZURE_RESOURCE_GROUP=Velero_Backups
  3. Create an Azure resource group:

    $ az group create -n $AZURE_RESOURCE_GROUP --location CentralUS (1)
    (1) Specify your location.
  4. Set the AZURE_STORAGE_ACCOUNT_ID variable:

    $ AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"
  5. Create an Azure storage account:

    $ az storage account create \
      --name $AZURE_STORAGE_ACCOUNT_ID \
      --resource-group $AZURE_RESOURCE_GROUP \
      --sku Standard_GRS \
      --encryption-services blob \
      --https-only true \
      --kind BlobStorage \
      --access-tier Hot
  6. Set the BLOB_CONTAINER variable:

    $ BLOB_CONTAINER=velero
  7. Create an Azure Blob storage container:

    $ az storage container create \
      -n $BLOB_CONTAINER \
      --public-access off \
      --account-name $AZURE_STORAGE_ACCOUNT_ID
  8. Create a service principal and credentials for velero:

    $ AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` \
      AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` \
      AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name "velero" \
      --role "Contributor" --query 'password' -o tsv` \
      AZURE_CLIENT_ID=`az ad sp list --display-name "velero" \
      --query '[0].appId' -o tsv`
  9. Save the service principal credentials in the credentials-velero file:

    $ cat << EOF > ./credentials-velero
    AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
    AZURE_TENANT_ID=${AZURE_TENANT_ID}
    AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
    AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
    AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
    AZURE_CLOUD_NAME=AzurePublicCloud
    EOF

    You use the credentials-velero file to add Azure as a replication repository.

Additional resources

Uninstalling MTC and deleting resources

You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster.

Deleting the velero CRDs removes Velero from the cluster.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges.

Procedure

  1. Delete the MigrationController custom resource (CR) on all clusters:

    $ oc delete migrationcontroller <migration_controller>
  2. Uninstall the Migration Toolkit for Containers Operator on OKD 4 by using the Operator Lifecycle Manager.

  3. Delete cluster-scoped resources on all clusters by running the following commands:

    • migration custom resource definitions (CRDs):

      $ oc delete $(oc get crds -o name | grep 'migration.openshift.io')
    • velero CRDs:

      $ oc delete $(oc get crds -o name | grep 'velero')
    • migration cluster roles:

      $ oc delete $(oc get clusterroles -o name | grep 'migration.openshift.io')
    • migration-operator cluster role:

      $ oc delete clusterrole migration-operator
    • velero cluster roles:

      $ oc delete $(oc get clusterroles -o name | grep 'velero')
    • migration cluster role bindings:

      $ oc delete $(oc get clusterrolebindings -o name | grep 'migration.openshift.io')
    • migration-operator cluster role bindings:

      $ oc delete clusterrolebindings migration-operator
    • velero cluster role bindings:

      $ oc delete $(oc get clusterrolebindings -o name | grep 'velero')
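The cluster-scoped deletions above can be condensed into a short loop. A sketch, assuming the same grep patterns as the individual commands:

```
for kind in crds clusterroles clusterrolebindings; do
  oc delete $(oc get $kind -o name | grep -E 'migration.openshift.io|velero')
done
oc delete clusterrole migration-operator
oc delete clusterrolebinding migration-operator
```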