Backup, restore, and disaster recovery for hosted control planes

If you need to back up and restore etcd on a hosted cluster or provide disaster recovery for a hosted cluster, see the following procedures.

Recovering etcd pods for hosted clusters

In hosted clusters, etcd pods run as part of a stateful set. The stateful set relies on persistent storage to store etcd data for each member. In a highly available control plane, the size of the stateful set is three pods, and each member has its own persistent volume claim.
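In a highly available control plane, you can see this layout by listing the etcd stateful set and its persistent volume claims. This is an optional, read-only check that uses the same <control_plane_namespace> placeholder as the procedures that follow:

  $ oc get statefulset/etcd -n <control_plane_namespace>
  $ oc get pvc -n <control_plane_namespace> | grep data-etcd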

Checking the status of a hosted cluster

To check the status of your hosted cluster, complete the following steps.

Procedure

  1. Open a remote shell session to the running etcd pod that you want to check by entering the following command:

    1. $ oc rsh -n <control_plane_namespace> -c etcd etcd-0
  2. Set up the etcdctl environment by entering the following commands:

    1. sh-4.4$ export ETCDCTL_API=3
    1. sh-4.4$ export ETCDCTL_CACERT=/etc/etcd/tls/etcd-ca/ca.crt
    1. sh-4.4$ export ETCDCTL_CERT=/etc/etcd/tls/client/etcd-client.crt
    1. sh-4.4$ export ETCDCTL_KEY=/etc/etcd/tls/client/etcd-client.key
    1. sh-4.4$ export ETCDCTL_ENDPOINTS=https://etcd-client:2379
  3. Check the endpoint health for each cluster member by entering the following command:

    1. sh-4.4$ etcdctl endpoint health --cluster -w table
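    The health check reports each member endpoint in a table. To also see details such as the database size and which member is the leader, you can optionally run the related status command in the same etcdctl environment:

    sh-4.4$ etcdctl endpoint status --cluster -w table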

Recovering an etcd member for a hosted cluster

An etcd member of a 3-node cluster might fail because of corrupted or missing data. To recover the etcd member, complete the following steps.

Procedure

  1. If you need to confirm that the etcd member is failing, enter the following command:

    1. $ oc get pods -l app=etcd -n <control_plane_namespace>

    The output resembles this example if the etcd member is failing:

    Example output

    1. NAME READY STATUS RESTARTS AGE
    2. etcd-0 2/2 Running 0 64m
    3. etcd-1 2/2 Running 0 45m
    4. etcd-2 1/2 CrashLoopBackOff 1 (5s ago) 64m
  2. Delete the persistent volume claim and the pod of the failing etcd member by entering the following command:

    1. $ oc delete -n <control_plane_namespace> pvc/data-etcd-2 pod/etcd-2 --wait=false
  3. When the pod restarts, verify that the etcd member is added back to the etcd cluster and functions correctly by entering the following command:

    1. $ oc get pods -l app=etcd -n <control_plane_namespace>

    Example output

    1. NAME READY STATUS RESTARTS AGE
    2. etcd-0 2/2 Running 0 67m
    3. etcd-1 2/2 Running 0 48m
    4. etcd-2 2/2 Running 0 2m2s
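    Optionally, confirm membership from inside etcd by opening a shell in a healthy pod and listing the members. This check reuses the etcdctl environment variables that are described in "Checking the status of a hosted cluster"; all three members are expected to report a started status:

    $ oc rsh -n <control_plane_namespace> -c etcd etcd-0
    sh-4.4$ etcdctl member list -w table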

Recovering an etcd cluster from a quorum loss

If multiple members of the etcd cluster lose data or return a CrashLoopBackOff status, the etcd cluster can lose quorum. In that case, you must restore the etcd cluster from a snapshot.

This procedure requires API downtime.

Prerequisites

  • The oc and jq binaries have been installed.

Procedure

  1. First, set up your environment variables and scale down the API servers:

    1. Set up environment variables for your hosted cluster by entering the following commands, replacing values as necessary:

      1. $ CLUSTER_NAME=my-cluster
      1. $ HOSTED_CLUSTER_NAMESPACE=clusters
      1. $ CONTROL_PLANE_NAMESPACE="${HOSTED_CLUSTER_NAMESPACE}-${CLUSTER_NAME}"
    2. Pause reconciliation of the hosted cluster by entering the following command, replacing values as necessary:

      1. $ oc patch -n ${HOSTED_CLUSTER_NAMESPACE} hostedclusters/${CLUSTER_NAME} -p '{"spec":{"pausedUntil":"true"}}' --type=merge
    3. Scale down the API servers by entering the following commands:

      1. Scale down the kube-apiserver:

        1. $ oc scale -n ${CONTROL_PLANE_NAMESPACE} deployment/kube-apiserver --replicas=0
      2. Scale down the openshift-apiserver:

        1. $ oc scale -n ${CONTROL_PLANE_NAMESPACE} deployment/openshift-apiserver --replicas=0
      3. Scale down the openshift-oauth-apiserver:

        1. $ oc scale -n ${CONTROL_PLANE_NAMESPACE} deployment/openshift-oauth-apiserver --replicas=0
  2. Next, take a snapshot of etcd by using one of the following methods:

    1. Use a previously backed-up snapshot of etcd.

    2. If you have an available etcd pod, take a snapshot from the active etcd pod by completing the following steps:

      1. List etcd pods by entering the following command:

        1. $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd
      2. Take a snapshot of the pod database and save it locally to your machine by entering the following commands:

        1. $ ETCD_POD=etcd-0
        1. $ oc exec -n ${CONTROL_PLANE_NAMESPACE} -c etcd -t ${ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl \
        2. --cacert /etc/etcd/tls/etcd-ca/ca.crt \
        3. --cert /etc/etcd/tls/client/etcd-client.crt \
        4. --key /etc/etcd/tls/client/etcd-client.key \
        5. --endpoints=https://localhost:2379 \
        6. snapshot save /var/lib/snapshot.db
      3. Verify that the snapshot is successful by entering the following command:

        1. $ oc exec -n ${CONTROL_PLANE_NAMESPACE} -c etcd -t ${ETCD_POD} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/snapshot.db
      4. Make a local copy of the snapshot by entering the following command:

        1. $ oc cp -c etcd ${CONTROL_PLANE_NAMESPACE}/${ETCD_POD}:/var/lib/snapshot.db /tmp/etcd.snapshot.db
    3. Alternatively, make a copy of the snapshot database directly from etcd persistent storage by completing the following steps:

      1. List etcd pods by entering the following command:

        1. $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd
      2. Find a pod that is running, set its name as the value of ETCD_POD, for example ETCD_POD=etcd-0, and then copy its snapshot database by entering the following command:

        1. $ oc cp -c etcd ${CONTROL_PLANE_NAMESPACE}/${ETCD_POD}:/var/lib/data/member/snap/db /tmp/etcd.snapshot.db
  3. Next, scale down the etcd statefulset by entering the following command, and then complete the steps that follow to restore the snapshot:

    1. $ oc scale -n ${CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=0
    1. Delete the volumes for the second and third members by entering the following command:

      1. $ oc delete -n ${CONTROL_PLANE_NAMESPACE} pvc/data-etcd-1 pvc/data-etcd-2
    2. Create a pod to access the first etcd member’s data:

      1. Get the etcd image by entering the following command:

        1. $ ETCD_IMAGE=$(oc get -n ${CONTROL_PLANE_NAMESPACE} statefulset/etcd -o jsonpath='{ .spec.template.spec.containers[0].image }')
      2. Create a pod that allows access to etcd data:

        $ cat << EOF | oc apply -n ${CONTROL_PLANE_NAMESPACE} -f -
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: etcd-data
        spec:
          replicas: 1
          selector:
            matchLabels:
              app: etcd-data
          template:
            metadata:
              labels:
                app: etcd-data
            spec:
              containers:
              - name: access
                image: $ETCD_IMAGE
                volumeMounts:
                - name: data
                  mountPath: /var/lib
                command:
                - /usr/bin/bash
                args:
                - -c
                - |-
                  while true; do
                    sleep 1000
                  done
              volumes:
              - name: data
                persistentVolumeClaim:
                  claimName: data-etcd-0
        EOF
      3. Check the status of the etcd-data pod and wait for it to be running by entering the following command:

        1. $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd-data
      4. Get the name of the etcd-data pod by entering the following command:

        1. $ DATA_POD=$(oc get -n ${CONTROL_PLANE_NAMESPACE} pods --no-headers -l app=etcd-data -o name | cut -d/ -f2)
    3. Copy an etcd snapshot into the pod by entering the following command:

      1. $ oc cp /tmp/etcd.snapshot.db ${CONTROL_PLANE_NAMESPACE}/${DATA_POD}:/var/lib/restored.snap.db
    4. Remove old data from the etcd-data pod by entering the following commands:

      1. $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- rm -rf /var/lib/data
      1. $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- mkdir -p /var/lib/data
    5. Restore the etcd snapshot by entering the following command:

      1. $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- etcdutl snapshot restore /var/lib/restored.snap.db \
      2. --data-dir=/var/lib/data --skip-hash-check \
      3. --name etcd-0 \
      4. --initial-cluster-token=etcd-cluster \
      5. --initial-cluster etcd-0=https://etcd-0.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-1=https://etcd-1.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380,etcd-2=https://etcd-2.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380 \
      6. --initial-advertise-peer-urls https://etcd-0.etcd-discovery.${CONTROL_PLANE_NAMESPACE}.svc:2380
    6. Remove the temporary etcd snapshot from the pod by entering the following command:

      1. $ oc exec -n ${CONTROL_PLANE_NAMESPACE} ${DATA_POD} -- rm /var/lib/restored.snap.db
    7. Delete the data access deployment by entering the following command:

      1. $ oc delete -n ${CONTROL_PLANE_NAMESPACE} deployment/etcd-data
    8. Scale up the etcd cluster by entering the following command:

      1. $ oc scale -n ${CONTROL_PLANE_NAMESPACE} statefulset/etcd --replicas=3
    9. Wait for the etcd member pods to return and report as available by entering the following command:

      1. $ oc get -n ${CONTROL_PLANE_NAMESPACE} pods -l app=etcd -w
    10. Scale up all etcd-writer deployments by entering the following command:

      1. $ oc scale deployment -n ${CONTROL_PLANE_NAMESPACE} --replicas=3 kube-apiserver openshift-apiserver openshift-oauth-apiserver
  4. Resume reconciliation of the hosted cluster by entering the following command:

    1. $ oc patch -n ${HOSTED_CLUSTER_NAMESPACE} hostedclusters/${CLUSTER_NAME} -p '{"spec":{"pausedUntil":""}}' --type=merge
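    After reconciliation resumes, you can optionally confirm that the API server deployments are scaled back up and that the hosted cluster is no longer paused. This sketch assumes the environment variables that you set at the start of this procedure; the second command prints nothing when the pause is cleared:

    $ oc get deployment -n ${CONTROL_PLANE_NAMESPACE} kube-apiserver openshift-apiserver openshift-oauth-apiserver
    $ oc get hostedcluster ${CLUSTER_NAME} -n ${HOSTED_CLUSTER_NAMESPACE} -o jsonpath='{.spec.pausedUntil}'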

Backing up and restoring etcd on a hosted cluster on AWS

If you use hosted control planes for OKD, the process to back up and restore etcd is different from the usual etcd backup process.

The following procedures are specific to hosted control planes on AWS.

Hosted control planes on the AWS platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Taking a snapshot of etcd on a hosted cluster

As part of the process to back up etcd for a hosted cluster, you take a snapshot of etcd. After you take the snapshot, you can restore it, for example, as part of a disaster recovery operation.

This procedure requires API downtime.

Procedure

  1. Pause reconciliation of the hosted cluster by entering this command, where the PAUSED_UNTIL variable is set to "true":

    1. $ oc patch -n clusters hostedclusters/${CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
  2. Stop all etcd-writer deployments by entering this command:

    1. $ oc scale deployment -n ${HOSTED_CLUSTER_NAMESPACE} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver
  3. Take an etcd snapshot by using the exec command in each etcd container:

    1. $ oc exec -it etcd-0 -n ${HOSTED_CLUSTER_NAMESPACE} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db
    2. $ oc exec -it etcd-0 -n ${HOSTED_CLUSTER_NAMESPACE} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db
  4. Copy the snapshot data to a location where you can retrieve it later, such as an S3 bucket, as shown in the following example.

    The following example uses signature version 2. If you are in a region that supports signature version 4, such as the us-east-2 region, use signature version 4. If you use signature version 2 in a region that requires signature version 4, the upload to the S3 bucket fails. Signature version 2 is also deprecated. An alternative upload method that uses the AWS CLI is sketched after this procedure.

    Example

    1. BUCKET_NAME=somebucket
    2. FILEPATH="/${BUCKET_NAME}/${CLUSTER_NAME}-snapshot.db"
    3. CONTENT_TYPE="application/x-compressed-tar"
    4. DATE_VALUE=`date -R`
    5. SIGNATURE_STRING="PUT\n\n${CONTENT_TYPE}\n${DATE_VALUE}\n${FILEPATH}"
    6. ACCESS_KEY=accesskey
    7. SECRET_KEY=secret
    8. SIGNATURE_HASH=`echo -en ${SIGNATURE_STRING} | openssl sha1 -hmac ${SECRET_KEY} -binary | base64`
    9. oc exec -it etcd-0 -n ${HOSTED_CLUSTER_NAMESPACE} -- curl -X PUT -T "/var/lib/data/snapshot.db" \
    10. -H "Host: ${BUCKET_NAME}.s3.amazonaws.com" \
    11. -H "Date: ${DATE_VALUE}" \
    12. -H "Content-Type: ${CONTENT_TYPE}" \
    13. -H "Authorization: AWS ${ACCESS_KEY}:${SIGNATURE_HASH}" \
    14. https://${BUCKET_NAME}.s3.amazonaws.com/${CLUSTER_NAME}-snapshot.db
  5. If you want to be able to restore the snapshot on a new cluster later, save the encryption secret that the hosted cluster references, as shown in this example:

    Example

    1. oc get hostedcluster $CLUSTER_NAME -o=jsonpath='{.spec.secretEncryption.aescbc}'
    2. {"activeKey":{"name":"CLUSTER_NAME-etcd-encryption-key"}}
    3. # Save this secret, or the key it contains so the etcd data can later be decrypted
    4. oc get secret ${CLUSTER_NAME}-etcd-encryption-key -o=jsonpath='{.data.key}'
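    As an alternative to the signed curl upload in step 4, if the AWS CLI is installed and configured where you run these commands, you can copy the snapshot out of the pod and upload it with aws s3 cp, which uses signature version 4 in current AWS CLI versions. This is a sketch, not part of the procedure, and it assumes the same variables:

    $ oc cp -c etcd ${HOSTED_CLUSTER_NAMESPACE}/etcd-0:/var/lib/data/snapshot.db /tmp/${CLUSTER_NAME}-snapshot.db
    $ aws s3 cp /tmp/${CLUSTER_NAME}-snapshot.db s3://${BUCKET_NAME}/${CLUSTER_NAME}-snapshot.db
    $ aws s3 ls s3://${BUCKET_NAME}/ | grep ${CLUSTER_NAME}-snapshot.db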

Next steps

Restore the etcd snapshot.

Restoring an etcd snapshot on a hosted cluster

If you have a snapshot of etcd from your hosted cluster, you can restore it. Currently, you can restore an etcd snapshot only during cluster creation.

To restore an etcd snapshot, you modify the output from the create cluster --render command and define a restoreSnapshotURL value in the etcd section of the HostedCluster specification.

Prerequisites

You took an etcd snapshot on a hosted cluster.

Procedure

  1. By using the AWS command-line interface (CLI), create a pre-signed URL so that you can download your etcd snapshot from S3 without passing credentials to the etcd deployment:

    1. ETCD_SNAPSHOT=${ETCD_SNAPSHOT:-"s3://${BUCKET_NAME}/${CLUSTER_NAME}-snapshot.db"}
    2. ETCD_SNAPSHOT_URL=$(aws s3 presign ${ETCD_SNAPSHOT})
  2. Modify the HostedCluster specification to refer to the URL:

    spec:
      etcd:
        managed:
          storage:
            persistentVolume:
              size: 4Gi
            type: PersistentVolume
            restoreSnapshotURL:
            - "${ETCD_SNAPSHOT_URL}"
        managementType: Managed
  3. Ensure that the secret that you referenced from the spec.secretEncryption.aescbc value contains the same AES key that you saved in the previous steps.
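    To confirm the match, you can compare the active key name that the hosted cluster references with the secret that you saved. This sketch assumes the key name format that is shown in the backup step:

    $ oc get hostedcluster ${CLUSTER_NAME} -o jsonpath='{.spec.secretEncryption.aescbc.activeKey.name}'
    $ oc get secret ${CLUSTER_NAME}-etcd-encryption-key -o jsonpath='{.data.key}'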

Disaster recovery for a hosted cluster within an AWS region

In a situation where you need disaster recovery (DR) for a hosted cluster, you can recover a hosted cluster to the same region within AWS. For example, you need DR when the upgrade of a management cluster fails and the hosted cluster is in a read-only state.

Hosted control planes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

The DR process involves three main steps:

  1. Backing up the hosted cluster on the source management cluster

  2. Restoring the hosted cluster on a destination management cluster

  3. Deleting the hosted cluster from the source management cluster

Your workloads remain running during the process. The Cluster API might be unavailable for a period, but that will not affect the services that are running on the worker nodes.

Both the source management cluster and the destination management cluster must have the --external-dns flags to maintain the API server URL, as shown in this example:

Example: External DNS flags
  1. --external-dns-provider=aws \
  2. --external-dns-credentials=<AWS Credentials location> \
  3. --external-dns-domain-filter=<DNS Base Domain>

If you do not include the --external-dns flags to maintain the API server URL, the hosted cluster cannot be migrated.
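One way to confirm that external DNS is configured on a management cluster is to check for a running external-dns deployment. The deployment name and namespace depend on how the HyperShift Operator was installed, so treat this as a hypothetical check rather than a guaranteed interface:

  $ oc get deployment --all-namespaces | grep external-dns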

Example environment and context

Consider a scenario where you have three clusters. Two are management clusters, and one is a hosted cluster, which is the cluster that you restore. You can restore either the control plane only or the control plane and the nodes. Before you begin, you need the following information:

  • Source MGMT Namespace: The source management namespace

  • Source MGMT ClusterName: The source management cluster name

  • Source MGMT Kubeconfig: The source management kubeconfig file

  • Destination MGMT Kubeconfig: The destination management kubeconfig file

  • HC Kubeconfig: The hosted cluster kubeconfig file

  • SSH key file: The SSH public key

  • Pull secret: The pull secret file to access the release images

  • AWS credentials

  • AWS region

  • Base domain: The DNS base domain to use as an external DNS

  • S3 bucket name: The bucket in the AWS region where you plan to upload the etcd backup

This information is shown in the following example environment variables.

Example environment variables

  1. SSH_KEY_FILE=${HOME}/.ssh/id_rsa.pub
  2. BASE_PATH=${HOME}/hypershift
  3. BASE_DOMAIN="aws.sample.com"
  4. PULL_SECRET_FILE="${HOME}/pull_secret.json"
  5. AWS_CREDS="${HOME}/.aws/credentials"
  6. AWS_ZONE_ID="Z02718293M33QHDEQBROL"
  7. CONTROL_PLANE_AVAILABILITY_POLICY=SingleReplica
  8. HYPERSHIFT_PATH=${BASE_PATH}/src/hypershift
  9. HYPERSHIFT_CLI=${HYPERSHIFT_PATH}/bin/hypershift
  10. HYPERSHIFT_IMAGE=${HYPERSHIFT_IMAGE:-"quay.io/${USER}/hypershift:latest"}
  11. NODE_POOL_REPLICAS=${NODE_POOL_REPLICAS:-2}
  12. # MGMT Context
  13. MGMT_REGION=us-west-1
  14. MGMT_CLUSTER_NAME="${USER}-dev"
  15. MGMT_CLUSTER_NS=${USER}
  16. MGMT_CLUSTER_DIR="${BASE_PATH}/hosted_clusters/${MGMT_CLUSTER_NS}-${MGMT_CLUSTER_NAME}"
  17. MGMT_KUBECONFIG="${MGMT_CLUSTER_DIR}/kubeconfig"
  18. # MGMT2 Context
  19. MGMT2_CLUSTER_NAME="${USER}-dest"
  20. MGMT2_CLUSTER_NS=${USER}
  21. MGMT2_CLUSTER_DIR="${BASE_PATH}/hosted_clusters/${MGMT2_CLUSTER_NS}-${MGMT2_CLUSTER_NAME}"
  22. MGMT2_KUBECONFIG="${MGMT2_CLUSTER_DIR}/kubeconfig"
  23. # Hosted Cluster Context
  24. HC_CLUSTER_NS=clusters
  25. HC_REGION=us-west-1
  26. HC_CLUSTER_NAME="${USER}-hosted"
  27. HC_CLUSTER_DIR="${BASE_PATH}/hosted_clusters/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}"
  28. HC_KUBECONFIG="${HC_CLUSTER_DIR}/kubeconfig"
  29. BACKUP_DIR=${HC_CLUSTER_DIR}/backup
  30. BUCKET_NAME="${USER}-hosted-${MGMT_REGION}"
  31. # DNS
  32. AWS_ZONE_ID="Z07342811SH9AA102K1AC"
  33. EXTERNAL_DNS_DOMAIN="hc.jpdv.aws.kerbeross.com"
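If you save these variables to a file, you can load them into any shell session before you run the backup, restore, and deletion steps, in the same way that the script described in "Running a script to back up and restore a hosted cluster" consumes them. For example, assuming that you saved them as env_file:

  $ source env_file
  $ echo "Migrating ${HC_CLUSTER_NAME} from ${MGMT_CLUSTER_NAME} to ${MGMT2_CLUSTER_NAME}"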

Overview of the backup and restore process

The backup and restore process works as follows:

  1. On management cluster 1, which you can think of as the source management cluster, the control plane and workers interact by using the external DNS API. The external DNS API is accessible, and a load balancer sits between the management clusters.

    Diagram that shows the workers accessing the external DNS API and the external DNS API pointing to the control plane through a load balancer

  2. You take a snapshot of the hosted cluster, which includes etcd, the control plane, and the worker nodes. During this process, the worker nodes continue to try to access the external DNS API even if it is not accessible. The workloads keep running, the control plane is saved in a local manifest file, and etcd is backed up to an S3 bucket. The data plane is active and the control plane is paused.


  3. On management cluster 2, which you can think of as the destination management cluster, you restore etcd from the S3 bucket and restore the control plane from the local manifest file. During this process, the external DNS API is stopped, the hosted cluster API becomes inaccessible, and any workers that use the API are unable to update their manifest files, but the workloads are still running.


  4. The external DNS API is accessible again, and the worker nodes use it to move to management cluster 2. The external DNS API can access the load balancer that points to the control plane.


  5. On management cluster 2, the control plane and worker nodes interact by using the external DNS API. The resources are deleted from management cluster 1, except for the S3 backup of etcd. If you try to set up the hosted cluster again on management cluster 1, it will not work.


You can manually back up and restore your hosted cluster, or you can run a script to complete the process. For more information about the script, see “Running a script to back up and restore a hosted cluster”.

Backing up a hosted cluster

To recover your hosted cluster in your target management cluster, you first need to back up all of the relevant data.

Procedure

  1. Create a config map that declares the source management cluster by entering this command:

    1. $ oc create configmap mgmt-parent-cluster -n default --from-literal=from=${MGMT_CLUSTER_NAME}
  2. Shut down the reconciliation in the hosted cluster and in the node pools by entering these commands:

    1. PAUSED_UNTIL="true"
    2. oc patch -n ${HC_CLUSTER_NS} hostedclusters/${HC_CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
    3. oc scale deployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator
    1. PAUSED_UNTIL="true"
    2. oc patch -n ${HC_CLUSTER_NS} hostedclusters/${HC_CLUSTER_NAME} -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
    3. oc patch -n ${HC_CLUSTER_NS} nodepools/${NODEPOOLS} -p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
    4. oc scale deployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --replicas=0 kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator
  3. Back up etcd and upload the data to an S3 bucket by running this bash script:

    Wrap this script in a function and call it from the main function.

    1. # ETCD Backup
    2. ETCD_PODS="etcd-0"
    3. if [ "${CONTROL_PLANE_AVAILABILITY_POLICY}" = "HighlyAvailable" ]; then
    4. ETCD_PODS="etcd-0 etcd-1 etcd-2"
    5. fi
    6. for POD in ${ETCD_PODS}; do
    7. # Create an etcd snapshot
    8. oc exec -it ${POD} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl --cacert /etc/etcd/tls/client/etcd-client-ca.crt --cert /etc/etcd/tls/client/etcd-client.crt --key /etc/etcd/tls/client/etcd-client.key --endpoints=localhost:2379 snapshot save /var/lib/data/snapshot.db
    9. oc exec -it ${POD} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -- env ETCDCTL_API=3 /usr/bin/etcdctl -w table snapshot status /var/lib/data/snapshot.db
    10. FILEPATH="/${BUCKET_NAME}/${HC_CLUSTER_NAME}-${POD}-snapshot.db"
    11. CONTENT_TYPE="application/x-compressed-tar"
    12. DATE_VALUE=`date -R`
    13. SIGNATURE_STRING="PUT\n\n${CONTENT_TYPE}\n${DATE_VALUE}\n${FILEPATH}"
    14. set +x
    15. ACCESS_KEY=$(grep aws_access_key_id ${AWS_CREDS} | head -n1 | cut -d= -f2 | sed "s/ //g")
    16. SECRET_KEY=$(grep aws_secret_access_key ${AWS_CREDS} | head -n1 | cut -d= -f2 | sed "s/ //g")
    17. SIGNATURE_HASH=$(echo -en ${SIGNATURE_STRING} | openssl sha1 -hmac "${SECRET_KEY}" -binary | base64)
    18. set -x
    19. # FIXME: this is pushing to the OIDC bucket
    20. oc exec -it etcd-0 -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -- curl -X PUT -T "/var/lib/data/snapshot.db" \
    21. -H "Host: ${BUCKET_NAME}.s3.amazonaws.com" \
    22. -H "Date: ${DATE_VALUE}" \
    23. -H "Content-Type: ${CONTENT_TYPE}" \
    24. -H "Authorization: AWS ${ACCESS_KEY}:${SIGNATURE_HASH}" \
    25. https://${BUCKET_NAME}.s3.amazonaws.com/${HC_CLUSTER_NAME}-${POD}-snapshot.db
    26. done

    For more information about backing up etcd, see “Backing up and restoring etcd on a hosted cluster”.

  4. Back up Kubernetes and OKD objects by entering the following commands. You need to back up the following objects:

    • HostedCluster and NodePool objects from the HostedCluster namespace

    • HostedCluster secrets from the HostedCluster namespace

    • HostedControlPlane from the Hosted Control Plane namespace

    • Cluster from the Hosted Control Plane namespace

    • AWSCluster, AWSMachineTemplate, and AWSMachine from the Hosted Control Plane namespace

    • MachineDeployments, MachineSets, and Machines from the Hosted Control Plane namespace

    • ControlPlane secrets from the Hosted Control Plane namespace

      1. mkdir -p ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS} ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}
      2. chmod 700 ${BACKUP_DIR}/namespaces/
      3. # HostedCluster
      4. echo "Backing Up HostedCluster Objects:"
      5. oc get hc ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS} -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/hc-${HC_CLUSTER_NAME}.yaml
      6. echo "--> HostedCluster"
      7. sed -i '' -e '/^status:$/,$d' ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/hc-${HC_CLUSTER_NAME}.yaml
      8. # NodePool
      9. oc get np ${NODEPOOLS} -n ${HC_CLUSTER_NS} -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/np-${NODEPOOLS}.yaml
      10. echo "--> NodePool"
      11. sed -i '' -e '/^status:$/,$ d' ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/np-${NODEPOOLS}.yaml
      12. # Secrets in the HC Namespace
      13. echo "--> HostedCluster Secrets:"
      14. for s in $(oc get secret -n ${HC_CLUSTER_NS} | grep "^${HC_CLUSTER_NAME}" | awk '{print $1}'); do
      15. oc get secret -n ${HC_CLUSTER_NS} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/secret-${s}.yaml
      16. done
      17. # Secrets in the HC Control Plane Namespace
      18. echo "--> HostedCluster ControlPlane Secrets:"
      19. for s in $(oc get secret -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} | egrep -v "docker|service-account-token|oauth-openshift|NAME|token-${HC_CLUSTER_NAME}" | awk '{print $1}'); do
      20. oc get secret -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/secret-${s}.yaml
      21. done
      22. # Hosted Control Plane
      23. echo "--> HostedControlPlane:"
      24. oc get hcp ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/hcp-${HC_CLUSTER_NAME}.yaml
      25. # Cluster
      26. echo "--> Cluster:"
      27. CL_NAME=$(oc get hcp ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\*} | grep ${HC_CLUSTER_NAME})
      28. oc get cluster ${CL_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/cl-${HC_CLUSTER_NAME}.yaml
      29. # AWS Cluster
      30. echo "--> AWS Cluster:"
      31. oc get awscluster ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/awscl-${HC_CLUSTER_NAME}.yaml
      32. # AWS MachineTemplate
      33. echo "--> AWS Machine Template:"
      34. oc get awsmachinetemplate ${NODEPOOLS} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/awsmt-${HC_CLUSTER_NAME}.yaml
      35. # AWS Machines
      36. echo "--> AWS Machine:"
      37. CL_NAME=$(oc get hcp ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o jsonpath={.metadata.labels.\*} | grep ${HC_CLUSTER_NAME})
      38. for s in $(oc get awsmachines -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --no-headers | grep ${CL_NAME} | cut -f1 -d\ ); do
      39. oc get -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} awsmachines $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/awsm-${s}.yaml
      40. done
      41. # MachineDeployments
      42. echo "--> HostedCluster MachineDeployments:"
      43. for s in $(oc get machinedeployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o name); do
      44. mdp_name=$(echo ${s} | cut -f 2 -d /)
      45. oc get -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/machinedeployment-${mdp_name}.yaml
      46. done
      47. # MachineSets
      48. echo "--> HostedCluster MachineSets:"
      49. for s in $(oc get machineset -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o name); do
      50. ms_name=$(echo ${s} | cut -f 2 -d /)
      51. oc get -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/machineset-${ms_name}.yaml
      52. done
      53. # Machines
      54. echo "--> HostedCluster Machine:"
      55. for s in $(oc get machine -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o name); do
      56. m_name=$(echo ${s} | cut -f 2 -d /)
      57. oc get -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/machine-${m_name}.yaml
      58. done
  5. Clean up the ControlPlane routes by entering this command:

    1. $ oc delete routes -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --all

    By entering that command, you enable the ExternalDNS Operator to delete the Route53 entries.

  6. Verify that the Route53 entries are clean by running this script:

    1. function clean_routes() {
    2. if [[ -z "${1}" ]];then
    3. echo "Give me the NS where to clean the routes"
    4. exit 1
    5. fi
    6. # Constants
    7. if [[ -z "${2}" ]];then
    8. echo "Give me the Route53 zone ID"
    9. exit 1
    10. fi
    11. ZONE_ID=${2}
    12. ROUTES=10
    13. timeout=40
    14. count=0
    15. # This allows us to remove the ownership in the AWS for the API route
    16. oc delete route -n ${1} --all
    17. while [ ${ROUTES} -gt 2 ]
    18. do
    19. echo "Waiting for ExternalDNS Operator to clean the DNS Records in AWS Route53 where the zone id is: ${ZONE_ID}..."
    20. echo "Try: (${count}/${timeout})"
    21. sleep 10
    22. if [[ $count -eq timeout ]];then
    23. echo "Timeout waiting for cleaning the Route53 DNS records"
    24. exit 1
    25. fi
    26. count=$((count+1))
    27. ROUTES=$(aws route53 list-resource-record-sets --hosted-zone-id ${ZONE_ID} --max-items 10000 --output json | grep -c ${EXTERNAL_DNS_DOMAIN})
    28. done
    29. }
    30. # SAMPLE: clean_routes "<HC ControlPlane Namespace>" "<AWS_ZONE_ID>"
    31. clean_routes "${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}" "${AWS_ZONE_ID}"

Verification

Check all of the OKD objects and the S3 bucket to verify that everything looks as expected.
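For example, you can list the backed-up manifests and confirm that the etcd snapshot objects reached the S3 bucket. This sketch assumes the example environment variables and an AWS CLI identity that can read the bucket:

  ls -R ${BACKUP_DIR}/namespaces/
  aws s3 ls s3://${BUCKET_NAME}/ | grep ${HC_CLUSTER_NAME}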

Next steps

Restore your hosted cluster.

Restoring a hosted cluster

Gather all of the objects that you backed up and restore them in your destination management cluster.

Prerequisites

You backed up the data from your source management cluster.

Ensure that the KUBECONFIG environment variable points to the kubeconfig file of the destination management cluster or, if you use the script, that the MGMT2_KUBECONFIG variable does. Use export KUBECONFIG=<kubeconfig_file_path> or, if you use the script, use export KUBECONFIG=${MGMT2_KUBECONFIG}.

Procedure

  1. Verify that the new management cluster does not contain any namespaces from the cluster that you are restoring by entering these commands:

    1. # Just in case
    2. export KUBECONFIG=${MGMT2_KUBECONFIG}
    3. BACKUP_DIR=${HC_CLUSTER_DIR}/backup
    4. # Namespace deletion in the destination Management cluster
    5. $ oc delete ns ${HC_CLUSTER_NS} || true
    6. $ oc delete ns ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} || true
  2. Re-create the deleted namespaces by entering these commands:

    1. # Namespace creation
    2. $ oc new-project ${HC_CLUSTER_NS}
    3. $ oc new-project ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}
  3. Restore the secrets in the HC namespace by entering this command:

    1. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/secret-*
  4. Restore the objects in the HostedCluster control plane namespace by entering these commands:

    1. # Secrets
    2. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/secret-*
    3. # Cluster
    4. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/hcp-*
    5. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/cl-*
  5. If you are recovering the nodes and the node pool to reuse AWS instances, restore the objects in the HC control plane namespace by entering these commands:

    1. # AWS
    2. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/awscl-*
    3. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/awsmt-*
    4. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/awsm-*
    5. # Machines
    6. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/machinedeployment-*
    7. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/machineset-*
    8. $ oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/machine-*
  6. Restore the etcd data and the hosted cluster by running this bash script:

    1. ETCD_PODS="etcd-0"
    2. if [ "${CONTROL_PLANE_AVAILABILITY_POLICY}" = "HighlyAvailable" ]; then
    3. ETCD_PODS="etcd-0 etcd-1 etcd-2"
    4. fi
    5. HC_RESTORE_FILE=${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/hc-${HC_CLUSTER_NAME}-restore.yaml
    6. HC_BACKUP_FILE=${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/hc-${HC_CLUSTER_NAME}.yaml
    7. HC_NEW_FILE=${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/hc-${HC_CLUSTER_NAME}-new.yaml
    8. cat ${HC_BACKUP_FILE} > ${HC_NEW_FILE}
    9. cat > ${HC_RESTORE_FILE} <<EOF
    10. restoreSnapshotURL:
    11. EOF
    12. for POD in ${ETCD_PODS}; do
    13. # Create a pre-signed URL for the etcd snapshot
    14. ETCD_SNAPSHOT="s3://${BUCKET_NAME}/${HC_CLUSTER_NAME}-${POD}-snapshot.db"
    15. ETCD_SNAPSHOT_URL=$(AWS_DEFAULT_REGION=${MGMT2_REGION} aws s3 presign ${ETCD_SNAPSHOT})
    16. # FIXME no CLI support for restoreSnapshotURL yet
    17. cat >> ${HC_RESTORE_FILE} <<EOF
    18. - "${ETCD_SNAPSHOT_URL}"
    19. EOF
    20. done
    21. cat ${HC_RESTORE_FILE}
    22. if ! grep ${HC_CLUSTER_NAME}-snapshot.db ${HC_NEW_FILE}; then
    23. sed -i '' -e "/type: PersistentVolume/r ${HC_RESTORE_FILE}" ${HC_NEW_FILE}
    24. sed -i '' -e '/pausedUntil:/d' ${HC_NEW_FILE}
    25. fi
    26. HC=$(oc get hc -n ${HC_CLUSTER_NS} ${HC_CLUSTER_NAME} -o name || true)
    27. if [[ ${HC} == "" ]];then
    28. echo "Deploying HC Cluster: ${HC_CLUSTER_NAME} in ${HC_CLUSTER_NS} namespace"
    29. oc apply -f ${HC_NEW_FILE}
    30. else
    31. echo "HC Cluster ${HC_CLUSTER_NAME} already exists, avoiding step"
    32. fi
  7. If you are recovering the nodes and the node pool to reuse AWS instances, restore the node pool by entering this command:

    1. oc apply -f ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/np-*

Verification

  • To verify that the nodes are fully restored, use this function:

    1. timeout=40
    2. count=0
    3. NODE_STATUS=$(oc get nodes --kubeconfig=${HC_KUBECONFIG} | grep -v NotReady | grep -c "worker") || NODE_STATUS=0
    4. while [ ${NODE_POOL_REPLICAS} != ${NODE_STATUS} ]
    5. do
    6. echo "Waiting for Nodes to be Ready in the destination MGMT Cluster: ${MGMT2_CLUSTER_NAME}"
    7. echo "Try: (${count}/${timeout})"
    8. sleep 30
    9. if [[ $count -eq timeout ]];then
    10. echo "Timeout waiting for Nodes in the destination MGMT Cluster"
    11. exit 1
    12. fi
    13. count=$((count+1))
    14. NODE_STATUS=$(oc get nodes --kubeconfig=${HC_KUBECONFIG} | grep -v NotReady | grep -c "worker") || NODE_STATUS=0
    15. done

Next steps

Shut down and delete your cluster.

Deleting a hosted cluster from your source management cluster

After you back up your hosted cluster and restore it to your destination management cluster, you shut down and delete the hosted cluster on your source management cluster.

Prerequisites

You backed up your data and restored it to your destination management cluster.

Ensure that the KUBECONFIG environment variable points to the kubeconfig file of the source management cluster or, if you use the script, that the MGMT_KUBECONFIG variable does. Use export KUBECONFIG=<kubeconfig_file_path> or, if you use the script, use export KUBECONFIG=${MGMT_KUBECONFIG}.

Procedure

  1. Scale down the deployment and statefulset objects by entering these commands:

    Do not scale the stateful set if the value of its spec.persistentVolumeClaimRetentionPolicy.whenScaled field is set to Delete, because this could lead to a loss of data.

    As a workaround, update the value of the spec.persistentVolumeClaimRetentionPolicy.whenScaled field to Retain. Ensure that no controllers exist that reconcile the stateful set and would return the value back to Delete, which could lead to a loss of data.

    1. # Just in case
    2. export KUBECONFIG=${MGMT_KUBECONFIG}
    3. # Scale down deployments
    4. oc scale deployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --replicas=0 --all
    5. oc scale statefulset.apps -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --replicas=0 --all
    6. sleep 15
  2. Delete the NodePool objects by entering these commands:

    1. NODEPOOLS=$(oc get nodepools -n ${HC_CLUSTER_NS} -o=jsonpath='{.items[?(@.spec.clusterName=="'${HC_CLUSTER_NAME}'")].metadata.name}')
    2. if [[ ! -z "${NODEPOOLS}" ]];then
    3. oc patch -n "${HC_CLUSTER_NS}" nodepool ${NODEPOOLS} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]'
    4. oc delete np -n ${HC_CLUSTER_NS} ${NODEPOOLS}
    5. fi
  3. Delete the machine and machineset objects by entering these commands:

    1. # Machines
    2. for m in $(oc get machines -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o name); do
    3. oc patch -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} ${m} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' || true
    4. oc delete -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} ${m} || true
    5. done
    6. oc delete machineset -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --all || true
  4. Delete the cluster object by entering these commands:

    1. # Cluster
    2. C_NAME=$(oc get cluster -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o name)
    3. oc patch -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} ${C_NAME} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]'
    4. oc delete cluster.cluster.x-k8s.io -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --all
  5. Delete the AWS machines (Kubernetes objects) by entering these commands. Deleting these objects does not delete the real AWS machines; the cloud instances are not affected.

    1. # AWS Machines
    2. for m in $(oc get awsmachine.infrastructure.cluster.x-k8s.io -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o name)
    3. do
    4. oc patch -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} ${m} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]' || true
    5. oc delete -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} ${m} || true
    6. done
  6. Delete the HostedControlPlane and ControlPlane HC namespace objects by entering these commands:

    1. # Delete HCP and ControlPlane HC NS
    2. oc patch -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} hostedcontrolplane.hypershift.openshift.io ${HC_CLUSTER_NAME} --type=json --patch='[ { "op":"remove", "path": "/metadata/finalizers" }]'
    3. oc delete hostedcontrolplane.hypershift.openshift.io -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --all
    4. oc delete ns ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} || true
  7. Delete the HostedCluster and HC namespace objects by entering these commands:

    1. # Delete HC and HC Namespace
    2. oc -n ${HC_CLUSTER_NS} patch hostedclusters ${HC_CLUSTER_NAME} -p '{"metadata":{"finalizers":null}}' --type merge || true
    3. oc delete hc -n ${HC_CLUSTER_NS} ${HC_CLUSTER_NAME} || true
    4. oc delete ns ${HC_CLUSTER_NS} || true

Verification

  • To verify that everything works, enter these commands:

    1. # Validations
    2. export KUBECONFIG=${MGMT2_KUBECONFIG}
    3. oc get hc -n ${HC_CLUSTER_NS}
    4. oc get np -n ${HC_CLUSTER_NS}
    5. oc get pod -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}
    6. oc get machines -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}
    7. # Inside the HostedCluster
    8. export KUBECONFIG=${HC_KUBECONFIG}
    9. oc get clusterversion
    10. oc get nodes

Next steps

Delete the OVN pods in the hosted cluster so that you can connect to the new OVN control plane that runs in the new management cluster:

  1. Load the KUBECONFIG environment variable with the hosted cluster’s kubeconfig path.

  2. Enter this command:

    1. $ oc delete pod -n openshift-ovn-kubernetes --all
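    After the pods are deleted, you can watch them get re-created against the OVN control plane that runs in the new management cluster, for example:

    $ oc get pods -n openshift-ovn-kubernetes -w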

Running a script to back up and restore a hosted cluster

To expedite the process to back up a hosted cluster and restore it within the same region on AWS, you can modify and run a script.

Procedure

  1. Replace the variables in the following script with your information:

    1. # Fill the Common variables to fit your environment, this is just a sample
    2. SSH_KEY_FILE=${HOME}/.ssh/id_rsa.pub
    3. BASE_PATH=${HOME}/hypershift
    4. BASE_DOMAIN="aws.sample.com"
    5. PULL_SECRET_FILE="${HOME}/pull_secret.json"
    6. AWS_CREDS="${HOME}/.aws/credentials"
    7. CONTROL_PLANE_AVAILABILITY_POLICY=SingleReplica
    8. HYPERSHIFT_PATH=${BASE_PATH}/src/hypershift
    9. HYPERSHIFT_CLI=${HYPERSHIFT_PATH}/bin/hypershift
    10. HYPERSHIFT_IMAGE=${HYPERSHIFT_IMAGE:-"quay.io/${USER}/hypershift:latest"}
    11. NODE_POOL_REPLICAS=${NODE_POOL_REPLICAS:-2}
    12. # MGMT Context
    13. MGMT_REGION=us-west-1
    14. MGMT_CLUSTER_NAME="${USER}-dev"
    15. MGMT_CLUSTER_NS=${USER}
    16. MGMT_CLUSTER_DIR="${BASE_PATH}/hosted_clusters/${MGMT_CLUSTER_NS}-${MGMT_CLUSTER_NAME}"
    17. MGMT_KUBECONFIG="${MGMT_CLUSTER_DIR}/kubeconfig"
    18. # MGMT2 Context
    19. MGMT2_CLUSTER_NAME="${USER}-dest"
    20. MGMT2_CLUSTER_NS=${USER}
    21. MGMT2_CLUSTER_DIR="${BASE_PATH}/hosted_clusters/${MGMT2_CLUSTER_NS}-${MGMT2_CLUSTER_NAME}"
    22. MGMT2_KUBECONFIG="${MGMT2_CLUSTER_DIR}/kubeconfig"
    23. # Hosted Cluster Context
    24. HC_CLUSTER_NS=clusters
    25. HC_REGION=us-west-1
    26. HC_CLUSTER_NAME="${USER}-hosted"
    27. HC_CLUSTER_DIR="${BASE_PATH}/hosted_clusters/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}"
    28. HC_KUBECONFIG="${HC_CLUSTER_DIR}/kubeconfig"
    29. BACKUP_DIR=${HC_CLUSTER_DIR}/backup
    30. BUCKET_NAME="${USER}-hosted-${MGMT_REGION}"
    31. # DNS
    32. AWS_ZONE_ID="Z026552815SS3YPH9H6MG"
    33. EXTERNAL_DNS_DOMAIN="guest.jpdv.aws.kerbeross.com"
  2. Save the script to your local file system.

  3. Run the script by entering the following command:

    1. source <env_file>

    where <env_file> is the name of the file where you saved the script.

    The migration script is maintained at the following repository: https://github.com/openshift/hypershift/blob/main/contrib/migration/migrate-hcp.sh.
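    For example, after you source your environment file, you might run the migration script from a local clone of that repository. The exact invocation can change, so check the script header before you run it; this is only a sketch:

    $ git clone https://github.com/openshift/hypershift.git
    $ source <env_file>
    $ bash hypershift/contrib/migration/migrate-hcp.sh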