Restoring to a previous cluster state

To restore the cluster to a previous state, you must have previously backed up etcd data by creating a snapshot. You will use this snapshot to restore the cluster state.

About restoring cluster state

You can use an etcd backup to restore your cluster to a previous state. This can be used to recover from the following situations:

  • The cluster has lost the majority of control plane hosts (quorum loss).

  • An administrator has deleted something critical and must restore to recover the cluster.

Restoring to a previous cluster state is a destructive and destabilizing action to take on a running cluster. Use it only as a last resort.

If you are able to retrieve data using the Kubernetes API server, then etcd is available and you should not restore using an etcd backup.

Restoring etcd effectively takes a cluster back in time and all clients will experience a conflicting, parallel history. This can impact the behavior of watching components like kubelets, Kubernetes controller managers, SDN controllers, and persistent volume controllers.

It can also cause Operator churn when the content in etcd does not match the actual content on disk. Operators for the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, and etcd can get stuck when files on disk conflict with the content in etcd, and resolving these conflicts can require manual intervention.

In extreme cases, the cluster can lose track of persistent volumes, delete critical workloads that no longer exist, reimage machines, and rewrite CA bundles with expired certificates.

Restoring to a previous cluster state

You can use a saved etcd backup to restore back to a previous cluster state. You use the etcd backup to restore a single control plane host. Then the etcd cluster Operator handles scaling to the remaining control plane hosts (also known as the master hosts).

When you restore your cluster, you must use an etcd backup that was taken from the same z-stream release. For example, an OKD 4.7.2 cluster must use an etcd backup that was taken from 4.7.2.
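
If the cluster is still serving API requests, you can confirm its current z-stream version before selecting a backup. This is a minimal check and assumes that oc access to the cluster still works:

  $ oc get clusterversion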

Prerequisites

  • Access to the cluster as a user with the cluster-admin role.

  • A healthy control plane host to use as the recovery host.

  • SSH access to control plane hosts.

  • A backup directory containing both the etcd snapshot and the resources for the static pods, which were from the same backup. The file names in the directory must be in the following formats: snapshot_<datetimestamp>.db and static_kuberesources_<datetimestamp>.tar.gz.
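
    For example, a backup directory might contain files similar to the following listing. The timestamp shown here is purely illustrative:

    $ ls ./backup
    snapshot_2021-06-25_190035.db
    static_kuberesources_2021-06-25_190035.tar.gz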

Procedure

  1. Select a control plane host to use as the recovery host. This is the host that you will run the restore operation on.

  2. Establish SSH connectivity to each of the control plane nodes, including the recovery host.

    The Kubernetes API server becomes inaccessible after the restore process starts, so you cannot access the control plane nodes. For this reason, it is recommended to establish SSH connectivity to each control plane host in a separate terminal.

    If you do not complete this step, you will not be able to access the control plane hosts to complete the restore procedure, and you will be unable to recover your cluster from this state.
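
    For example, from a workstation or bastion host that can reach the control plane nodes, you might open one terminal per node. The key path and host address below are placeholders, not values defined by this procedure:

    $ ssh -i <ssh_key_path> core@<control_plane_node_address>

    Repeat the command in a separate terminal for each control plane node, including the recovery host.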

  3. Copy the etcd backup directory to the recovery control plane host.

    This procedure assumes that you copied the backup directory containing the etcd snapshot and the resources for the static pods to the /home/core/ directory of your recovery control plane host.
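
    For example, from a host that has both the backup directory and SSH access to the recovery node, the copy might look like the following. The recovery host address is a placeholder:

    $ scp -r ./backup core@<recovery_host_address>:/home/core/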

  4. Stop the static pods on all other control plane nodes.

    You do not need to manually stop the pods on the recovery host; the restore script stops them.

    1. Access a control plane host that is not the recovery host.

    2. Move the existing etcd pod file out of the kubelet manifest directory:

      [core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
    3. Verify that the etcd pods are stopped.

      [core@ip-10-0-154-194 ~]$ sudo crictl ps | grep etcd | grep -v operator

      The output of this command should be empty. If it is not empty, wait a few minutes and check again.

    4. Move the existing Kubernetes API server pod file out of the kubelet manifest directory:

      [core@ip-10-0-154-194 ~]$ sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp
    5. Verify that the Kubernetes API server pods are stopped.

      [core@ip-10-0-154-194 ~]$ sudo crictl ps | grep kube-apiserver | grep -v operator

      The output of this command should be empty. If it is not empty, wait a few minutes and check again.

    6. Move the etcd data directory to a different location:

      [core@ip-10-0-154-194 ~]$ sudo mv /var/lib/etcd/ /tmp
    7. Repeat this step on each control plane host that is not the recovery host.
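
    As a convenience, the commands in this step can also be combined into a short script and run on each control plane host that is not the recovery host. The following sketch repeats the same commands shown above; it is not a supported utility:

    # Run on each control plane host that is NOT the recovery host.
    sudo mv /etc/kubernetes/manifests/etcd-pod.yaml /tmp
    sudo mv /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp

    # Wait until crictl no longer lists etcd or kube-apiserver containers.
    while sudo crictl ps | grep -E 'etcd|kube-apiserver' | grep -v operator; do
        sleep 10
    done

    # Move the etcd data directory out of the way.
    sudo mv /var/lib/etcd/ /tmp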

  5. Access the recovery control plane host.

  6. If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.

    You can check whether the proxy is enabled by reviewing the output of oc get proxy cluster -o yaml. The proxy is enabled if the httpProxy, httpsProxy, and noProxy fields have values set.
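
    For example, you might export values that match your cluster proxy configuration before running the restore script. The values shown are placeholders:

    $ export HTTP_PROXY=http://<proxy_host>:<port>
    $ export HTTPS_PROXY=http://<proxy_host>:<port>
    $ export NO_PROXY=<comma_separated_no_proxy_list>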

  7. Run the restore script on the recovery control plane host and pass in the path to the etcd backup directory:

    [core@ip-10-0-143-125 ~]$ sudo -E /usr/local/bin/cluster-restore.sh /home/core/backup

    Example script output

    ...stopping kube-scheduler-pod.yaml
    ...stopping kube-controller-manager-pod.yaml
    ...stopping etcd-pod.yaml
    ...stopping kube-apiserver-pod.yaml
    Waiting for container etcd to stop
    .complete
    Waiting for container etcdctl to stop
    .............................complete
    Waiting for container etcd-metrics to stop
    complete
    Waiting for container kube-controller-manager to stop
    complete
    Waiting for container kube-apiserver to stop
    ..........................................................................................complete
    Waiting for container kube-scheduler to stop
    complete
    Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup
    starting restore-etcd static pod
    starting kube-apiserver-pod.yaml
    static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml
    starting kube-controller-manager-pod.yaml
    static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml
    starting kube-scheduler-pod.yaml
    static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml
  8. Restart the kubelet service on all control plane hosts.

    1. From the recovery host, run the following command:

      [core@ip-10-0-143-125 ~]$ sudo systemctl restart kubelet.service
    2. Repeat this step on all other control plane hosts.
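
    If SSH access as the core user to the other control plane nodes is available from your terminal, the restarts can also be scripted. The host addresses in this sketch are placeholders:

    for host in <control_plane_node_1> <control_plane_node_2> <control_plane_node_3>; do
        ssh core@"${host}" sudo systemctl restart kubelet.service
    done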

  9. Approve the pending CSRs:

    1. Get the list of current CSRs:

      $ oc get csr

      Example output

      NAME        AGE    SIGNERNAME                                    REQUESTOR                                                                    CONDITION
      csr-2s94x   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                      Pending (1)
      csr-4bd6t   8m3s   kubernetes.io/kubelet-serving                 system:node:<node_name>                                                      Pending (1)
      csr-4hl85   13m    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (2)
      csr-zhhhp   3m8s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending (2)
      ...
      (1) A pending kubelet service CSR (for user-provisioned installations).
      (2) A pending node-bootstrapper CSR.
    2. Review the details of a CSR to verify that it is valid:

      $ oc describe csr <csr_name> (1)
      (1) <csr_name> is the name of a CSR from the list of current CSRs.
    3. Approve each valid node-bootstrapper CSR:

      $ oc adm certificate approve <csr_name>
    4. For user-provisioned installations, approve each valid kubelet service CSR:

      $ oc adm certificate approve <csr_name>
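
    If many CSRs are pending, you can approve the remaining unapproved requests in bulk after reviewing them. The following one-liner is a convenience sketch rather than a required part of the procedure; run it only after confirming that the pending requests are expected:

    $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
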
  10. Verify that the single member control plane has started successfully.

    1. From the recovery host, verify that the etcd container is running.

      [core@ip-10-0-143-125 ~]$ sudo crictl ps | grep etcd | grep -v operator

      Example output

      3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0
    2. From the recovery host, verify that the etcd pod is running.

      [core@ip-10-0-143-125 ~]$ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd

      If you attempt to run oc login prior to running this command and receive the following error, wait a few moments for the authentication controllers to start and try again.

      Unable to connect to the server: EOF

      Example output

      NAME                                READY   STATUS    RESTARTS   AGE
      etcd-ip-10-0-143-125.ec2.internal   1/1     Running   1          2m47s

      If the status is Pending, or the output lists more than one running etcd pod, wait a few minutes and check again.

  11. In a separate terminal window, log in to the cluster as a user with the cluster-admin role by using the following command:

    $ oc login -u <cluster_admin> (1)
    (1) For <cluster_admin>, specify a user name with the cluster-admin role.
  12. Force etcd redeployment.

    In a terminal that has access to the cluster as a cluster-admin user, run the following command:

    $ oc patch etcd cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge (1)
    (1) The forceRedeploymentReason value must be unique, which is why a timestamp is appended.

    When the etcd cluster Operator performs a redeployment, the existing nodes are started with new pods similar to the initial bootstrap scale up.

  13. Verify all nodes are updated to the latest revision.

    In a terminal that has access to the cluster as a cluster-admin user, run the following command:

    $ oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

    Review the NodeInstallerProgressing status condition for etcd to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

    AllNodesAtLatestRevision
    3 nodes are at revision 7 (1)
    (1) In this example, the latest revision number is 7.

    If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
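
    If you prefer not to rerun the query by hand, you can poll for the condition with a simple loop such as the following sketch:

    $ until oc get etcd -o jsonpath='{.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")].reason}' | grep -q AllNodesAtLatestRevision; do sleep 30; done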

  14. After etcd is redeployed, force new rollouts for the control plane. The Kubernetes API server will reinstall itself on the other nodes because the kubelet is connected to API servers using an internal load balancer.

    In a terminal that has access to the cluster as a cluster-admin user, run the following commands.

    1. Update the kubeapiserver:

      $ oc patch kubeapiserver cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

      Verify all nodes are updated to the latest revision.

      $ oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      AllNodesAtLatestRevision
      3 nodes are at revision 7 (1)
      (1) In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

    2. Update the kubecontrollermanager:

      $ oc patch kubecontrollermanager cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

      Verify all nodes are updated to the latest revision.

      $ oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      AllNodesAtLatestRevision
      3 nodes are at revision 7 (1)
      (1) In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.

    3. Update the kubescheduler:

      $ oc patch kubescheduler cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge

      Verify all nodes are updated to the latest revision.

      $ oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type=="NodeInstallerProgressing")]}{.reason}{"\n"}{.message}{"\n"}'

      Review the NodeInstallerProgressing status condition to verify that all nodes are at the latest revision. The output shows AllNodesAtLatestRevision upon successful update:

      AllNodesAtLatestRevision
      3 nodes are at revision 7 (1)
      (1) In this example, the latest revision number is 7.

      If the output includes multiple revision numbers, such as 2 nodes are at revision 6; 1 nodes are at revision 7, this means that the update is still in progress. Wait a few minutes and try again.
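
    Because the three update cycles above follow the same pattern, they can also be scripted. The following loop is a sketch of the same patch and verification commands and assumes a shell with access to the cluster as a cluster-admin user:

    for kind in kubeapiserver kubecontrollermanager kubescheduler; do
        oc patch "${kind}" cluster -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge
        # Give the Operator time to create the new revision before polling.
        sleep 60
        until oc get "${kind}" -o jsonpath='{.items[0].status.conditions[?(@.type=="NodeInstallerProgressing")].reason}' | grep -q AllNodesAtLatestRevision; do
            sleep 30
        done
    done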

  15. Verify that all control plane hosts have started and joined the cluster.

    In a terminal that has access to the cluster as a cluster-admin user, run the following command:

    $ oc get pods -n openshift-etcd | grep -v etcd-quorum-guard | grep etcd

    Example output

    etcd-ip-10-0-143-125.ec2.internal   2/2   Running   0   9h
    etcd-ip-10-0-154-194.ec2.internal   2/2   Running   0   9h
    etcd-ip-10-0-173-171.ec2.internal   2/2   Running   0   9h

Note that it might take several minutes after completing this procedure for all services to be restored. For example, authentication by using oc login might not immediately work until the OAuth server pods are restarted.
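
One way to monitor overall recovery is to check the cluster Operators periodically until they all report Available without Progressing or Degraded conditions:

  $ oc get clusteroperators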

Issues and workarounds for restoring a persistent storage state

If your OKD cluster uses persistent storage of any form, part of the cluster's state is typically stored outside etcd. It might be an Elasticsearch cluster running in a pod or a database running in a StatefulSet object. When you restore from an etcd backup, the status of the workloads in OKD is also restored. However, if the etcd snapshot is old, the status might be invalid or outdated.

The contents of persistent volumes (PVs) are never part of the etcd snapshot. When you restore an OKD cluster from an etcd snapshot, non-critical workloads might gain access to critical data, or vice versa.

The following are some example scenarios that produce an out-of-date status:

  • A MySQL database is running in a pod backed by a PV object. Restoring OKD from an etcd snapshot does not bring back the volume on the storage provider, and does not produce a running MySQL pod, despite the pod repeatedly attempting to start. You must manually restore this pod by restoring the volume on the storage provider, and then editing the PV to point to the new volume.

  • Pod P1 is using volume A, which is attached to node X. If the etcd snapshot is taken while another pod uses the same volume on node Y, then when the etcd restore is performed, pod P1 might not be able to start correctly due to the volume still being attached to node Y. OKD is not aware of the attachment, and does not automatically detach it. When this occurs, the volume must be manually detached from node Y so that the volume can attach on node X, and then pod P1 can start.

  • Cloud provider or storage provider credentials were updated after the etcd snapshot was taken. This causes any CSI drivers or Operators that depend on those credentials to stop working. You might have to manually update the credentials required by those drivers or Operators.

  • A device is removed or renamed from OKD nodes after the etcd snapshot is taken. The Local Storage Operator creates symlinks for each PV that it manages from /dev/disk/by-id or /dev directories. This situation might cause the local PVs to refer to devices that no longer exist.

    To fix this problem, an administrator must:

    1. Manually remove the PVs with invalid devices.

    2. Remove symlinks from respective nodes.

    3. Delete LocalVolume or LocalVolumeSet objects (see Storage > Configuring persistent storage > Persistent storage using local volumes > Deleting the Local Storage Operator Resources).
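
    The commands involved in this cleanup might look similar to the following. The PV name, node name, storage class, symlink name, and LocalVolume name are placeholders for illustration only:

      $ oc delete pv <pv_name>
      $ oc debug node/<node_name> -- chroot /host rm -f /mnt/local-storage/<storage_class_name>/<symlink_name>
      $ oc delete localvolume -n openshift-local-storage <local_volume_name>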