Installing the Migration Toolkit for Containers in a restricted network environment

You can install the Migration Toolkit for Containers (MTC) on OKD 4 in a restricted network environment by performing the following procedures:

  1. Create a mirrored Operator catalog.

    This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the legacy Migration Toolkit for Containers Operator on an OKD 4.2 to 4.5 source cluster. An example mirroring command is shown after this list.

  2. Install the Migration Toolkit for Containers Operator on the OKD 4.8 target cluster by using Operator Lifecycle Manager.

    By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a remote cluster.

  3. Install the Migration Toolkit for Containers Operator on the source cluster:

    • OKD 4.6 or later: Install the Migration Toolkit for Containers Operator by using Operator Lifecycle Manager.

    • OKD 4.2 to 4.5: Install the legacy Migration Toolkit for Containers Operator from the command line interface.

  4. Configure object storage to use as a replication repository.
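
The mapping.txt file mentioned in step 1 is generated when you mirror the Operator catalog with the oc adm catalog mirror command. The following is only a minimal sketch of that command; the index image, mirror registry host name, port, and namespace are placeholders that depend on your environment, and the full mirroring procedure is documented separately:

    $ oc adm catalog mirror \
        <index_image> \
        <mirror_registry>:<port>/<namespace>

The command typically writes mapping.txt, along with the generated ImageContentSourcePolicy manifest, to a manifests-* directory in the current working directory.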

Compatibility guidelines

You must install the Migration Toolkit for Containers (MTC) version that is compatible with your OKD version.

You cannot install MTC 1.6.x on OKD versions 3.7 to 4.5 because the custom resource definition API versions are incompatible.

You can migrate workloads from a source cluster with MTC 1.5.1 to a target cluster with MTC 1.6.x as long as the MigrationController custom resource and the MTC web console are running on the target cluster.

Table 1. OKD and MTC compatibility

  OKD version      MTC version      Migration Toolkit for Containers Operator

  3.7              1.5.1            Legacy Migration Toolkit for Containers Operator. Installed manually with the operator-3.7.yml file.

  3.9 to 4.5       1.5.1            Legacy Migration Toolkit for Containers Operator. Installed manually with the operator.yml file.

  4.6 and later    1.6.x [1]        Migration Toolkit for Containers Operator. Installed with the Operator Lifecycle Manager.

  [1] Latest z-stream release.

Installing the Migration Toolkit for Containers Operator on OKD 4.8

You install the Migration Toolkit for Containers Operator on OKD 4.8 by using the Operator Lifecycle Manager.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

  • You must create an Operator catalog from a mirror image in a local registry.
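
    For reference, a mirrored catalog is normally exposed to Operator Lifecycle Manager through a CatalogSource object that points at the mirrored index image. The following is a minimal sketch only; the catalog name and image path are placeholders, and the authoritative steps are in the mirroring documentation:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: <mirrored_catalog>
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      image: <mirror_registry>:<port>/<namespace>/<index_image>
      displayName: Mirrored Operator Catalog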

Procedure

  1. In the OKD web console, click Operators → OperatorHub.

  2. Use the Filter by keyword field to find the Migration Toolkit for Containers Operator.

  3. Select the Migration Toolkit for Containers Operator and click Install.

  4. Click Install.

    On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded.

  5. Click Migration Toolkit for Containers Operator.

  6. Under Provided APIs, locate the Migration Controller tile, and click Create Instance.

  7. Click Create.

  8. Click Workloads → Pods to verify that the MTC pods are running.
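
    If you prefer the command line, you can run the same check that is used later in this document for the legacy installation, assuming the default openshift-migration namespace:

    $ oc get pods -n openshift-migration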

Installing the legacy Migration Toolkit for Containers Operator on OKD 4.2 to 4.5

You can install the legacy Migration Toolkit for Containers Operator manually on OKD versions 4.2 to 4.5.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

  • You must have access to registry.redhat.io.

  • You must have podman installed.

  • You must have a Linux workstation with network access in order to download files from registry.redhat.io.

  • You must create a mirror image of the Operator catalog.

  • You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OKD 4.8.

Procedure

  1. Log in to registry.redhat.io with your Red Hat Customer Portal credentials:

    $ sudo podman login registry.redhat.io
  2. Download the operator.yml file:

    $ sudo podman cp $(sudo podman create \
      registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.5.1):/operator.yml ./
  3. Download the controller.yml file:

    $ sudo podman cp $(sudo podman create \
      registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.5.1):/controller.yml ./
  4. Obtain the Operator image mapping by running the following command:

    $ grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc

    The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image.

    Example output

    registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator
  5. Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file:

    containers:
    - name: ansible
      image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> (1)
    ...
    - name: operator
      image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> (1)
    ...
      env:
      - name: REGISTRY
        value: <registry.apps.example.com> (2)

    (1) Specify your mirror registry and the sha256 value of the Operator image.
    (2) Specify your mirror registry.
  6. Log in to your OKD 3 cluster.

  7. Create the Migration Toolkit for Containers Operator object:

    $ oc create -f operator.yml

    Example output

    namespace/openshift-migration created
    rolebinding.rbac.authorization.k8s.io/system:deployers created
    serviceaccount/migration-operator created
    customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created
    role.rbac.authorization.k8s.io/migration-operator created
    rolebinding.rbac.authorization.k8s.io/migration-operator created
    clusterrolebinding.rbac.authorization.k8s.io/migration-operator created
    deployment.apps/migration-operator created
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists (1)
    Error from server (AlreadyExists): error when creating "./operator.yml":
    rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists

    (1) You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OKD 3 that are provided in later releases.
  8. Create the MigrationController object:

    $ oc create -f controller.yml
  9. Verify that the MTC pods are running:

    $ oc get pods -n openshift-migration

Configuring proxies

For OKD 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object.

For OKD 4.2 to 4.8, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings.

You must configure the proxies to allow the SPDY protocol and to forward the Upgrade HTTP header to the API server. Otherwise, an Upgrade request required error is displayed. The MigrationController CR uses SPDY to run commands within remote pods. The Upgrade HTTP header is required in order to open a websocket connection with the API server.

Direct volume migration

If you are performing a direct volume migration (DVM) from a source cluster behind a proxy, you must configure a Stunnel proxy. Stunnel creates a transparent tunnel between the source and target clusters for the TCP connection without changing the certificates.

DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy.

Prerequisites

  • You must be logged in as a user with cluster-admin privileges on all clusters.

Procedure

  1. Get the MigrationController CR manifest:

    $ oc get migrationcontroller <migration_controller> -n openshift-migration
  2. Update the proxy parameters:

    apiVersion: migration.openshift.io/v1alpha1
    kind: MigrationController
    metadata:
      name: <migration_controller>
      namespace: openshift-migration
    ...
    spec:
      stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> (1)
      httpProxy: http://<username>:<password>@<ip>:<port> (2)
      httpsProxy: http://<username>:<password>@<ip>:<port> (3)
      noProxy: example.com (4)

    (1) Stunnel proxy URL for direct volume migration.
    (2) Proxy URL for creating HTTP connections outside the cluster. The URL scheme must be http.
    (3) Proxy URL for creating HTTPS connections outside the cluster. If this is not specified, then httpProxy is used for both HTTP and HTTPS connections.
    (4) Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude from proxying.

    Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues.

    This field is ignored if neither the httpProxy nor the httpsProxy field is set.

  3. Save the manifest as migration-controller.yaml.

  4. Apply the updated manifest:

    $ oc replace -f migration-controller.yaml -n openshift-migration
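
    To confirm that the proxy values were applied, you can read the CR back, for example:

    $ oc get migrationcontroller <migration_controller> -n openshift-migration -o yaml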

For more information, see Configuring the cluster-wide proxy.

Configuring a replication repository

You must configure object storage to use as a replication repository. The Migration Toolkit for Containers (MTC) copies data from the source cluster to the replication repository, and then from the replication repository to the target cluster. Multi-Cloud Object Gateway (MCG) is the only supported option for a restricted network environment.

MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider.

Prerequisites

  • All clusters must have uninterrupted network access to the replication repository.

  • If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository.

Configuring Multi-Cloud Object Gateway

You can install the OpenShift Container Storage Operator and configure a Multi-Cloud Object Gateway (MCG) storage bucket as a replication repository for the Migration Toolkit for Containers (MTC).

Installing the OpenShift Container Storage Operator

You can install the OpenShift Container Storage Operator from OperatorHub.

Prerequisites

  • Ensure that you have downloaded the pull secret from the Red Hat OpenShift Cluster Manager site as shown in Obtaining the installation program in the installation documentation for your platform.

    If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in Configuring OKD to use Red Hat Operators.
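
    The OperatorHub CR is cluster-scoped and is named cluster. As a minimal sketch, enabling the redhat-operators catalog source might look like the following; verify the exact steps in Configuring OKD to use Red Hat Operators:

    apiVersion: config.openshift.io/v1
    kind: OperatorHub
    metadata:
      name: cluster
    spec:
      sources:
      - name: redhat-operators
        disabled: false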

Procedure

  1. In the OKD web console, click Operators → OperatorHub.

  2. Use Filter by keyword (in this case, OCS) to find the OpenShift Container Storage Operator.

  3. Select the OpenShift Container Storage Operator and click Install.

  4. Select an Update Channel, Installation Mode, and Approval Strategy.

  5. Click Install.

    On the Installed Operators page, the OpenShift Container Storage Operator appears in the openshift-storage project with the status Succeeded.

Creating the Multi-Cloud Object Gateway storage bucket

You can create the Multi-Cloud Object Gateway (MCG) storage bucket’s custom resources (CRs).

Procedure

  1. Log in to the OKD cluster:

    $ oc login -u <username>
  2. Create the NooBaa CR configuration file, noobaa.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: NooBaa
    metadata:
      name: <noobaa>
      namespace: openshift-storage
    spec:
      dbResources:
        requests:
          cpu: 0.5 (1)
          memory: 1Gi
      coreResources:
        requests:
          cpu: 0.5 (1)
          memory: 1Gi

    (1) For a very small cluster, you can change the value to 0.1.
  3. Create the NooBaa object:

    $ oc create -f noobaa.yml
  4. Create the BackingStore CR configuration file, bs.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BackingStore
    metadata:
      finalizers:
      - noobaa.io/finalizer
      labels:
        app: noobaa
      name: <mcg_backing_store>
      namespace: openshift-storage
    spec:
      pvPool:
        numVolumes: 3 (1)
        resources:
          requests:
            storage: <volume_size> (2)
        storageClass: <storage_class> (3)
      type: pv-pool

    (1) Specify the number of volumes in the persistent volume pool.
    (2) Specify the size of the volumes, for example, 50Gi.
    (3) Specify the storage class, for example, gp2.
  5. Create the BackingStore object:

    $ oc create -f bs.yml
  6. Create the BucketClass CR configuration file, bc.yml, with the following content:

    apiVersion: noobaa.io/v1alpha1
    kind: BucketClass
    metadata:
      labels:
        app: noobaa
      name: <mcg_bucket_class>
      namespace: openshift-storage
    spec:
      placementPolicy:
        tiers:
        - backingStores:
          - <mcg_backing_store>
          placement: Spread
  7. Create the BucketClass object:

    $ oc create -f bc.yml
  8. Create the ObjectBucketClaim CR configuration file, obc.yml, with the following content:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <bucket>
      namespace: openshift-storage
    spec:
      bucketName: <bucket> (1)
      storageClassName: <storage_class>
      additionalConfig:
        bucketclass: <mcg_bucket_class>

    (1) Record the bucket name for adding the replication repository to the MTC web console.
  9. Create the ObjectBucketClaim object:

    $ oc create -f obc.yml
  10. Watch the resource creation process to verify that the ObjectBucketClaim status is Bound:

    $ watch -n 30 'oc get -n openshift-storage objectbucketclaim migstorage -o yaml'

    This process can take five to ten minutes.

  11. Obtain and record the following values, which are required when you add the replication repository to the MTC web console:

    • S3 endpoint:

      $ oc get route -n openshift-storage s3
    • S3 provider access key:

      $ oc get secret -n openshift-storage migstorage \
        -o go-template='{{ .data.AWS_ACCESS_KEY_ID }}' | base64 --decode
    • S3 provider secret access key:

      $ oc get secret -n openshift-storage migstorage \
        -o go-template='{{ .data.AWS_SECRET_ACCESS_KEY }}' | base64 --decode
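
You can optionally sanity-check the recorded values with any S3 client before adding the replication repository. The following sketch assumes that the aws CLI is installed on your workstation and uses the endpoint, bucket name, and keys obtained above:

    $ export AWS_ACCESS_KEY_ID=<access_key>
    $ export AWS_SECRET_ACCESS_KEY=<secret_access_key>
    $ aws s3 ls s3://<bucket> --endpoint-url https://<s3_endpoint>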

Additional resources for configuring a replication repository