Restoring to a previous cluster state

To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state.

About restoring cluster state

You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:

  • The cluster has lost the majority of control plane hosts (quorum loss).

  • An administrator has deleted something critical and must restore to recover the cluster.

Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. This should only be used as a last resort.

If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup.
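For example, a quick way to check API availability is to query the API server readiness endpoint. If the command returns ok, the API server is responding and you should not restore from a backup:

  1. $ oc get --raw='/readyz'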

Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, SDN controllers, and persistent volume controllers.

It can also cause Operator churn. When the content in etcd does not match the actual content on disk, the Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd can get stuck, and manual intervention might be required to resolve the resulting issues.

In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.

Restoring to a previous cluster state

You can use a saved etcd backup to restore a previous cluster state or restore a cluster that has lost the majority of control plane hosts.

If your cluster uses a control plane machine set, see “Troubleshooting the control plane machine set” for a simpler etcd recovery procedure.

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
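For example, you can confirm the current cluster version before selecting a backup by querying the ClusterVersion object. This is a minimal sketch; it assumes the default ClusterVersion resource name of version:

  1. $ oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'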

Prerequisites

  • Access to the cluster as a user with the cluster-admin role through a certificate-based kubeconfig file, like the one that was used during installation.

  • A healthy control plane host to use as the recovery host.

  • SSH access to control plane hosts.

  • A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.

For non-recovery control plane nodes, it is not required to establish SSH connectivity or to stop the static pods. You can delete and re-create the other non-recovery control plane machines one by one.

Procedure

  1. Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.

  2. Establish SSH connectivity to each of the control plane nodes, including the recovery host.

    The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.

    If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
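    For example, from a workstation that holds the SSH key for the cluster, you might open one terminal per host. The key path and hostname below are placeholders for your environment:

    1. $ ssh -i <ssh-key-path> core@<control_plane_hostname>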

  3. Copy the etcd backup directory to the recovery control plane host.

    This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
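    As a sketch, you can copy the directory with scp from the machine that holds the backup. The key path, backup path, and hostname are placeholders for your environment:

    1. $ scp -i <ssh-key-path> -r <backup_directory> core@<recovery_host_address>:/home/core/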

  4. Stop the static pods on any other control plane nodes.

    You do not need to stop the static pods on the recovery host.

    1. Access a control plane host that is not the recovery host.

    2. Move the existing etcd pod file out of the kubelet manifest directory:

      1. $ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
    3. Verify that the etcd pods are stopped.

      1. $ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"

      The output of this command should be empty. If it is not empty, wait a few minutes and check again.

    4. Move the existing Kubernetes API server pod file out of the kubelet manifest directory:

      1. $ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
    5. Verify that the Kubernetes API server pods are stopped.

      1. $ sudo crictl ps | grep kube-apiserver | egrep -v "operator|guard"

      The output of this command should be empty. If it is not empty, wait a few minutes and check again.

    6. Move the etcd data directory to a different location:

      1. $ sudo mv /var/lib/etcd/ /tmp
    7. Repeat this step on each of the other control plane hosts that are not the recovery host.

  5. Access the recovery control plane host.

  6. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.

    You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
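    As a sketch, you can read the values from the proxy status and export them in the current shell. Replace the placeholder values with the output of the first command:

    1. $ oc get proxy cluster -o jsonpath='{.status.httpProxy}{"\n"}{.status.httpsProxy}{"\n"}{.status.noProxy}{"\n"}'
    2. $ export HTTP_PROXY=<http_proxy_value>
    3. $ export HTTPS_PROXY=<https_proxy_value>
    4. $ export NO_PROXY=<no_proxy_value>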

  7. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:

    1. $ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup

    Example script output

    1. ...stopping kube-scheduler-pod.yaml
    2. ...stopping kube-controller-manager-pod.yaml
    3. ...stopping etcd-pod.yaml
    4. ...stopping kube-apiserver-pod.yaml
    5. Waiting for container etcd to stop
    6. .complete
    7. Waiting for container etcdctl to stop
    8. .............................complete
    9. Waiting for container etcd-metrics to stop
    10. complete
    11. Waiting for container kube-controller-manager to stop
    12. complete
    13. Waiting for container kube-apiserver to stop
    14. ..........................................................................................complete
    15. Waiting for container kube-scheduler to stop
    16. complete
    17. Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup
    18. starting restore-etcd static pod
    19. starting kube-apiserver-pod.yaml
    20. static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml
    21. starting kube-controller-manager-pod.yaml
    22. static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml
    23. starting kube-scheduler-pod.yaml
    24. static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml

    The restore process can cause nodes to enter the NotReady state if the node certificates were updated after the last etcd backup.

  8. Check the nodes to ensure they are in the Ready state.

    1. Run the following command:

      1. $ oc get nodes -w

      Sample output

      1. NAME STATUS ROLES AGE VERSION
      2. host-172-25-75-28 Ready master 3d20h v1.27.3
      3. host-172-25-75-38 Ready infra,worker 3d20h v1.27.3
      4. host-172-25-75-40 Ready master 3d20h v1.27.3
      5. host-172-25-75-65 Ready master 3d20h v1.27.3
      6. host-172-25-75-74 Ready infra,worker 3d20h v1.27.3
      7. host-172-25-75-79 Ready worker 3d20h v1.27.3
      8. host-172-25-75-86 Ready worker 3d20h v1.27.3
      9. host-172-25-75-98 Ready infra,worker 3d20h v1.27.3

      It can take several minutes for all nodes to report their state.

    2. If any nodes are in the NotReady state, log in to the nodes and remove all of the PEM files from the /var/lib/kubelet/pki directory on each node. You can SSH into the nodes or use the terminal window in the web console.

      1. $ ssh -i <ssh-key-path> core@<master-hostname>

      Sample pki directory

      1. sh-4.4# pwd
      2. /var/lib/kubelet/pki
      3. sh-4.4# ls
      4. kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem
      5. kubelet-client-current.pem kubelet-server-current.pem
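      For example, the following sketch removes every PEM file in the directory so that the kubelet requests new certificates after it restarts:

      1. sh-4.4# rm -f /var/lib/kubelet/pki/*.pem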
  9. Restart the kubelet service on all control plane hosts.

    1. From the recovery host, run the following command:

      1. $ sudo systemctl restart kubelet.service
    2. Repeat this step on all other control plane hosts.

  10. Approve the pending CSRs:

    Clusters with no worker nodes, such as single-node clusters or clusters consisting of three schedulable control plane nodes, will not have any pending CSRs to approve. You can skip all the commands listed in this step.

    1. Get the list of current CSRs:

      1. $ oc get csr

      Example output

      1. NAME AGE SIGNERNAME REQUESTOR CONDITION
      2. csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending (1)
      3. csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending (1)
      4. csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending (2)
      5. csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending (2)
      6. ...
      1A pending kubelet service CSR (for user-provisioned installations).
      2A pending node-bootstrapper CSR.
    2. Review the details of a CSR to verify that it is valid:

      1. $ oc describe csr <csr_name> (1)
      1<csr_name> is the name of a CSR from the list of current CSRs.
    3. Approve each valid node-bootstrapper CSR:

      1. $ oc adm certificate approve <csr_name>
    4. For user-provisioned installations, approve each valid kubelet service CSR:

      1. $ oc adm certificate approve <csr_name>
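    If there are many pending CSRs, the following sketch approves every CSR that does not yet have a status. Verify that all pending requests are expected before running it:

    1. $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve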
  11. Verify that the single member control plane has started successfully.

    1. From the recovery host, verify that the etcd container is running.

      1. $ sudo crictl ps | grep etcd | egrep -v "operator|etcd-guard"

      Example output

      1. 3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0
    2. From the recovery host, verify that the etcd pod is running.

      1. $ oc -n openshift-etcd get pods -l k8s-app=etcd

      Example output

      1. NAME READY STATUS RESTARTS AGE
      2. etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s

      If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.

  12. If you are using the OVN-Kubernetes network plugin, you must restart the ovnkube-control-plane pods.

    1. Delete all of the ovnkube-control-plane pods by running the following command:

      1. $ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane
    2. Verify that all of the ovnkube-control-plane pods were redeployed by running the following command:

      1. $ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane
  13. If you are using the OVN-Kubernetes network plugin, restart the Open Virtual Network (OVN) Kubernetes pods on all the nodes one by one. Use the following steps to restart OVN-Kubernetes pods on each node:

    Restart OVN-Kubernetes pods in the following order:
    1. The recovery control plane host

    2. The other control plane hosts (if available)

    3. The other nodes

    Validating and mutating admission webhooks can reject pods. If you add any additional webhooks with the failurePolicy set to Fail, then they can reject pods and the restoration process can fail. You can avoid this by saving and deleting webhooks while restoring the cluster state. After the cluster state is restored successfully, you can enable the webhooks again.

    Alternatively, you can temporarily set the failurePolicy to Ignore while restoring the cluster state. After the cluster state is restored successfully, you can set the failurePolicy to Fail.
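    As a sketch, you can save a webhook configuration to a file, delete it for the duration of the restore, and re-create it afterward. <webhook_name> is a placeholder for a webhook in your cluster:

    1. $ oc get validatingwebhookconfiguration <webhook_name> -o yaml > webhook-backup.yaml
    2. $ oc delete validatingwebhookconfiguration <webhook_name>
    3. $ oc apply -f webhook-backup.yaml  # run after the cluster state is restored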

    1. Remove the northbound database (nbdb) and southbound database (sbdb). Access the recovery host and the remaining control plane nodes by using Secure Shell (SSH) and run the following command:

      1. $ sudo rm -f /var/lib/ovn-ic/etc/*.db
    2. Restart the Open vSwitch services. Access the node by using Secure Shell (SSH) and run the following command:

      1. $ sudo systemctl restart ovs-vswitchd ovsdb-server
    3. Delete the ovnkube-node pod on the node by running the following command, replacing <node> with the name of the node that you are restarting:

      1. $ oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>
    4. Verify that the ovnkube-node pod is running again with the following command:

      1. $ oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>

      It might take several minutes for the pods to restart.

  14. Delete and re-create the other non-recovery control plane machines, one by one. After the machines are re-created, a new revision is forced and etcd automatically scales up.

    • If you use a user-provisioned bare metal installation, you can re-create a control plane machine by using the same method that you used to originally create it. For more information, see “Installing a user-provisioned cluster on bare metal”.

      Do not delete and re-create the machine for the recovery host.

    • If you are running installer-provisioned infrastructure, or you used the Machine API to create your machines, follow these steps:

      Do not delete and re-create the machine for the recovery host.

      For bare metal installations on installer-provisioned infrastructure, control plane machines are not re-created. For more information, see “Replacing a bare-metal control plane node”.

      1. Obtain the machine for one of the lost control plane hosts.

        In a terminal that has access to the cluster as a cluster-admin user, run the following command:

        1. $ oc get machines -n openshift-machine-api -o wide

        Example output:

        1. NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
        2. clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped (1)
        3. clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
        4. clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
        5. clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
        6. clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
        7. clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
        1This is the control plane machine for the lost control plane host, ip-10-0-131-183.ec2.internal.
      2. Save the machine configuration to a file on your file system:

        1. $ oc get machine clustername-8qw5l-master-0 \ (1)
        2. -n openshift-machine-api \
        3. -o yaml \
        4. > new-master-machine.yaml
        1Specify the name of the control plane machine for the lost control plane host.
      3. Edit the new-master-machine.yaml file that was created in the previous step to assign a new name and remove unnecessary fields.

        1. Remove the entire status section:

          1. status:
          2.   addresses:
          3.   - address: 10.0.131.183
          4.     type: InternalIP
          5.   - address: ip-10-0-131-183.ec2.internal
          6.     type: InternalDNS
          7.   - address: ip-10-0-131-183.ec2.internal
          8.     type: Hostname
          9.   lastUpdated: "2020-04-20T17:44:29Z"
          10.   nodeRef:
          11.     kind: Node
          12.     name: ip-10-0-131-183.ec2.internal
          13.     uid: acca4411-af0d-4387-b73e-52b2484295ad
          14.   phase: Running
          15.   providerStatus:
          16.     apiVersion: awsproviderconfig.openshift.io/v1beta1
          17.     conditions:
          18.     - lastProbeTime: "2020-04-20T16:53:50Z"
          19.       lastTransitionTime: "2020-04-20T16:53:50Z"
          20.       message: machine successfully created
          21.       reason: MachineCreationSucceeded
          22.       status: "True"
          23.       type: MachineCreation
          24.     instanceId: i-0fdb85790d76d0c3f
          25.     instanceState: stopped
          26.     kind: AWSMachineProviderStatus
        2. Change the metadata.name field to a new name.

          It is recommended to keep the same base name as the old machine and change the ending number to the next available number. In this example, clustername-8qw5l-master-0 is changed to clustername-8qw5l-master-3:

          1. apiVersion: machine.openshift.io/v1beta1
          2. kind: Machine
          3. metadata:
          4.   ...
          5.   name: clustername-8qw5l-master-3
          6.   ...
        3. Remove the spec.providerID field:

          1. providerID: aws:///us-east-1a/i-0fdb85790d76d0c3f
        4. Remove the metadata.annotations and metadata.generation fields:

          1. annotations:
          2.   machine.openshift.io/instance-state: running
          3. ...
          4. generation: 2
        5. Remove the metadata.resourceVersion and metadata.uid fields:

          1. resourceVersion: "13291"
          2. uid: a282eb70-40a2-4e89-8009-d05dd420d31a
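        As an alternative to editing the file by hand, the following sketch performs the same removals and renames the machine by using the yq tool. It assumes yq v4 is installed on your workstation; adjust the new machine name for your cluster:

          1. $ yq -i 'del(.status) | del(.spec.providerID) | del(.metadata.annotations) | del(.metadata.generation) | del(.metadata.resourceVersion) | del(.metadata.uid) | .metadata.name = "clustername-8qw5l-master-3"' new-master-machine.yaml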
      4. Delete the machine of the lost control plane host:

        1. $ oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 (1)
        1Specify the name of the control plane machine for the lost control plane host.
      5. Verify that the machine was deleted:

        1. $ oc get machines -n openshift-machine-api -o wide

        Example output:

        1. NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
        2. clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
        3. clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
        4. clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
        5. clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
        6. clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
      6. Create a machine by using the new-master-machine.yaml file:

        1. $ oc apply -f new-master-machine.yaml
      7. Verify that the new machine has been created:

        1. $ oc get machines -n openshift-machine-api -o wide

        Example output:

        1. NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE
        2. clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running
        3. clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running
        4. clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running (1)
        5. clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running
        6. clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running
        7. clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running
        1The new machine, clustername-8qw5l-master-3, is being created and is ready after the phase changes from Provisioning to Running.

        It might take a few minutes for the new machine to be created. The etcd cluster Operator will automatically sync when the machine or node returns to a healthy state.

      8. Repeat these steps for each lost control plane host that is not the recovery host.

  15. Turn off the quorum guard by entering the following command:

    1. $ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": {"useUnsupportedUnsafeNonHANonProductionUnstableEtcd": true}}}'

    This command ensures that you can successfully re-create secrets and roll out the static pods.

  16. In a separate terminal window within the recovery host, export the recovery kubeconfig file by running the following command:

    1. $ export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig
  17. Force etcd redeployment.

    In the same terminal window where you exported the recovery kubeconfig file, run the following command:

    1. $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
    1The forceRedeploymentReason value must be unique, which is why a timestamp is appended.

    When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.

  18. Turn the quorum guard back on by entering the following command:

    1. $ oc patch etcd/cluster --type=merge -p '{"spec": {"unsupportedConfigOverrides": null}}'
  19. Verify that the unsupportedConfigOverrides section is removed from the object by entering this command:

    1. $ oc get etcd/cluster -oyaml
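    Alternatively, the following command prints only that field. Empty output indicates that the override has been removed:

    1. $ oc get etcd/cluster -o jsonpath='{.spec.unsupportedConfigOverrides}{"\n"}'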
  20. Verify that all nodes are updated to the latest revision.

    In a terminal that has access to the cluster as a cluster-admin user, run the following command:

    1. $ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

    Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

    1. AllNodesAtLatestRevision
    2. 3 nodes are at revision 7 (1)
    1In this example, the latest revision number is 7.

    If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

  21. After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.

    In a terminal that has access to the cluster as a cluster-admin user, run the following commands.

    1. Force a new rollout for the Kubernetes API server:

      1. $ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

      Verify all nodes are updated to the latest revision.

      1. $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      1. AllNodesAtLatestRevision
      2. 3 nodes are at revision 7 (1)
      1In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

    2. Force a new rollout for the Kubernetes controller manager:

      1. $ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

      Verify all nodes are updated to the latest revision.

      1. $ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      1. AllNodesAtLatestRevision
      2. 3 nodes are at revision 7 (1)
      1In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

    3. Force a new rollout for the Kubernetes scheduler:

      1. $ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

      Verify all nodes are updated to the latest revision.

      1. $ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      1. AllNodesAtLatestRevision
      2. 3 nodes are at revision 7 (1)
      1In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
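    The three rollouts can also be forced in one pass with a small loop, shown here as a sketch. You must still verify the NodeInstallerProgressing condition for each operator as described above:

    1. $ for resource in kubeapiserver kubecontrollermanager kubescheduler; do
    2.     oc patch "${resource}" cluster --type=merge \
    3.       -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}'
    4.   done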

  22. Verify that all control plane hosts have started and joined the cluster.

    In a terminal that has access to the cluster as a cluster-admin user, run the following command:

    1. $ oc -n openshift-etcd get pods -l k8s-app=etcd

    Example output

    1. etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h
    2. etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h
    3. etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h

To ensure that all workloads return to normal operation following a recovery procedure, restart each pod that stores Kubernetes API information. This includes OKD components such as routers, Operators, and third-party components.
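For example, the following sketch restarts the default router and OAuth server pods. It assumes the default deployment names router-default and oauth-openshift, which might differ in your cluster:

  1. $ oc -n openshift-ingress rollout restart deployment/router-default
  2. $ oc -n openshift-authentication rollout restart deployment/oauth-openshift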

On completion of the previous procedural steps, you might need to wait a few minutes for all services to return to their restored state. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted.

Consider using the system:admin kubeconfig file for immediate authentication. This method bases its authentication on SSL/TLS client certificates rather than OAuth tokens. You can authenticate with this file by issuing the following command:

  1. $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

Issue the following command to display your authenticated user name:

  1. $ oc whoami

Issues and workarounds for restoring a persistent storage state

If your OKD cluster uses persistent storage of any form, some of the cluster state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OKD is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.

The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OKD cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice-versa.

The following are some example scenarios that produce an out-of-date status:

  • A MySQL database is running in a pod backed by a PV object. Restoring OKD from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.

  • Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OKD is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.

  • Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to not work. You might have to manually update the credentials required by those drivers or Operators.

  • A device is removed or renamed from OKD nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.

    To fix this problem, an administrator must:

    1. Manually remove the PVs with invalid devices.

    2. Remove symlinks from respective nodes.

    3. Delete the LocalVolume or LocalVolumeSet objects (see Storage → Configuring persistent storage → Persistent storage using local volumes → Deleting the Local Storage Operator Resources).