Disaster Recovery

Restoring Mon Quorum

Under extenuating circumstances, the mons may lose quorum. If the mons cannot form quorum again on their own, there is a manual procedure to restore it. The only requirement is that at least one mon is still healthy. The following steps remove the unhealthy mons from quorum, re-form quorum with the single healthy mon, and then grow the quorum back to its original size.

For example, if you have three mons and lose quorum, you will need to remove the two bad mons from quorum, notify the good mon that it is the only mon in quorum, and then restart the good mon.

Stop the operator

First, stop the operator so it will not try to fail over the mons while we are modifying the monmap.

    kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0

Inject a new monmap

WARNING: Injecting a monmap must be done very carefully. If run incorrectly, your cluster could be permanently destroyed.

The Ceph monmap keeps track of the mon quorum. We will update the monmap to only contain the healthy mon. In this example, the healthy mon is rook-ceph-mon-b, while the unhealthy mons are rook-ceph-mon-a and rook-ceph-mon-c.

Take a backup of the current rook-ceph-mon-b Deployment:

    kubectl -n rook-ceph get deployment rook-ceph-mon-b -o yaml > rook-ceph-mon-b-deployment.yaml

Open the file and copy the command and args from the mon container (see the containers list). This is needed for the monmap changes. Clean up the copied command and args fields to form a pastable command. Example:

The following parts of the mon container:

    [...]
    containers:
    - args:
      - --fsid=41a537f2-f282-428e-989f-a9e07be32e47
      - --keyring=/etc/ceph/keyring-store/keyring
      - --log-to-stderr=true
      - --err-to-stderr=true
      - --mon-cluster-log-to-stderr=true
      - '--log-stderr-prefix=debug '
      - --default-log-to-file=false
      - --default-mon-cluster-log-to-file=false
      - --mon-host=$(ROOK_CEPH_MON_HOST)
      - --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS)
      - --id=b
      - --setuser=ceph
      - --setgroup=ceph
      - --foreground
      - --public-addr=10.100.13.242
      - --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db
      - --public-bind-addr=$(ROOK_POD_IP)
      command:
      - ceph-mon
    [...]

These should be assembled into a command like the following (do not copy the example command verbatim; use your own values):

    ceph-mon \
      --fsid=41a537f2-f282-428e-989f-a9e07be32e47 \
      --keyring=/etc/ceph/keyring-store/keyring \
      --log-to-stderr=true \
      --err-to-stderr=true \
      --mon-cluster-log-to-stderr=true \
      --log-stderr-prefix=debug \
      --default-log-to-file=false \
      --default-mon-cluster-log-to-file=false \
      --mon-host=$(ROOK_CEPH_MON_HOST) \
      --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS) \
      --id=b \
      --setuser=ceph \
      --setgroup=ceph \
      --foreground \
      --public-addr=10.100.13.242 \
      --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db \
      --public-bind-addr=$(ROOK_POD_IP)

(be sure to remove the single quotes around the --log-stderr-prefix flag)
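The cleanup can also be done mechanically. The sketch below uses a hypothetical helper, `to_flags`, to strip the leading `- ` markers and the single quotes from copied `args:` lines and join them into one line; the sample input is abbreviated to three flags, so run your real args through it and prepend `ceph-mon`:

```shell
# Hypothetical helper: turn copied YAML `args:` list items into one flag string.
# Strips the leading "- ", removes surrounding single quotes, joins with spaces.
to_flags() {
  sed -e "s/^[[:space:]]*- //" -e "s/^'//" -e "s/ *'$//" | paste -sd' ' -
}

printf '%s\n' \
  "- --fsid=41a537f2-f282-428e-989f-a9e07be32e47" \
  "- '--log-stderr-prefix=debug '" \
  "- --id=b" | to_flags
# prints: --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --log-stderr-prefix=debug --id=b
```

This is only a convenience; copying and editing the flags by hand as shown above works just as well.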

Patch the rook-ceph-mon-b Deployment to run a sleep instead of the ceph mon command:

    kubectl -n rook-ceph patch deployment rook-ceph-mon-b -p '{"spec": {"template": {"spec": {"containers": [{"name": "mon", "command": ["sleep", "infinity"], "args": []}]}}}}'

Connect to the pod of a healthy mon and run the following commands.

    kubectl -n rook-ceph exec -it <mon-pod> -- bash

    # set a few simple variables
    cluster_namespace=rook-ceph
    good_mon_id=b
    monmap_path=/tmp/monmap

    # extract the monmap to a file, by pasting the ceph-mon command
    # from the good mon deployment and adding the
    # `--extract-monmap=${monmap_path}` flag
    ceph-mon \
      --fsid=41a537f2-f282-428e-989f-a9e07be32e47 \
      --keyring=/etc/ceph/keyring-store/keyring \
      --log-to-stderr=true \
      --err-to-stderr=true \
      --mon-cluster-log-to-stderr=true \
      --log-stderr-prefix=debug \
      --default-log-to-file=false \
      --default-mon-cluster-log-to-file=false \
      --mon-host=$(ROOK_CEPH_MON_HOST) \
      --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS) \
      --id=b \
      --setuser=ceph \
      --setgroup=ceph \
      --foreground \
      --public-addr=10.100.13.242 \
      --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db \
      --public-bind-addr=$(ROOK_POD_IP) \
      --extract-monmap=${monmap_path}

    # review the contents of the monmap
    monmaptool --print ${monmap_path}

    # remove the bad mon(s) from the monmap
    monmaptool ${monmap_path} --rm <bad_mon>

    # in this example we remove the bad mons `a` and `c`:
    monmaptool ${monmap_path} --rm a
    monmaptool ${monmap_path} --rm c

    # inject the modified monmap into the good mon, by pasting
    # the ceph-mon command and adding the
    # `--inject-monmap=${monmap_path}` flag, like this:
    ceph-mon \
      --fsid=41a537f2-f282-428e-989f-a9e07be32e47 \
      --keyring=/etc/ceph/keyring-store/keyring \
      --log-to-stderr=true \
      --err-to-stderr=true \
      --mon-cluster-log-to-stderr=true \
      --log-stderr-prefix=debug \
      --default-log-to-file=false \
      --default-mon-cluster-log-to-file=false \
      --mon-host=$(ROOK_CEPH_MON_HOST) \
      --mon-initial-members=$(ROOK_CEPH_MON_INITIAL_MEMBERS) \
      --id=b \
      --setuser=ceph \
      --setgroup=ceph \
      --foreground \
      --public-addr=10.100.13.242 \
      --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db \
      --public-bind-addr=$(ROOK_POD_IP) \
      --inject-monmap=${monmap_path}

Exit the shell to continue.

Edit the Rook configmaps

Edit the configmap that the operator uses to track the mons.

    kubectl -n rook-ceph edit configmap rook-ceph-mon-endpoints

In the data element you will see three mons such as the following (or more, depending on your mon count):

    data: a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789

Delete the bad mons from the list, for example to end up with a single good mon:

    data: b=10.100.13.242:6789

Save the file and exit.
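If editing the string by hand feels error-prone, the new data value can also be derived from the old one; a minimal sketch, using the example endpoints above and the same good_mon_id variable set earlier in the mon pod:

```shell
# Keep only the healthy mon's entry from the semicolon-separated endpoints string.
old_data="a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789"
good_mon_id=b
new_data=$(echo "$old_data" | tr ';' '\n' | grep "^${good_mon_id}=")
echo "$new_data"   # b=10.100.13.242:6789
```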

Now we need to update a Secret that is used by the mons and other components. The following kubectl patch command is an easy way to do that: it patches the rook-ceph-config secret, updating the two key/value pairs mon_host and mon_initial_members.

    good_mon_id=b
    mon_host=$(kubectl -n rook-ceph get svc rook-ceph-mon-b -o jsonpath='{.spec.clusterIP}')
    kubectl -n rook-ceph patch secret rook-ceph-config -p '{"stringData": {"mon_host": "[v2:'"${mon_host}"':3300,v1:'"${mon_host}"':6789]", "mon_initial_members": "'"${good_mon_id}"'"}}'

NOTE: If you are using hostNetwork: true, you need to replace the mon_host var with the node IP the mon is pinned to (nodeSelector). This is because no rook-ceph-mon-* service is created in that mode.
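The mon_host value pairs a msgr2 address (v2, port 3300) with a legacy address (v1, port 6789) for the mon, in Ceph's bracketed address-vector format. A small sketch of how the string is built, using the example IP from above:

```shell
# Build the mon_host value: v2 (msgr2, port 3300) and v1 (legacy, port 6789)
# addresses for a single mon. With hostNetwork: true, substitute the node IP.
mon_host=10.100.13.242   # example: the rook-ceph-mon-b service cluster IP
echo "[v2:${mon_host}:3300,v1:${mon_host}:6789]"
# [v2:10.100.13.242:3300,v1:10.100.13.242:6789]
```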

Restart the mon

You will need to restart the good mon pod with the original ceph-mon command to pick up the changes. To do this, run kubectl replace on the backup of the mon deployment YAML:

    kubectl replace --force -f rook-ceph-mon-b-deployment.yaml

NOTE: The --force option deletes the deployment and creates a new one.

Start the Rook toolbox and verify the status of the cluster.

    ceph -s

The status should show one mon in quorum. If the status looks good, your cluster should be healthy again.

Restart the operator

Start the Rook operator again to resume monitoring the health of the cluster.

    kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1

The operator will automatically add more mons to increase the quorum size again, up to the configured mon count.

Adopt an existing Rook Ceph cluster into a new Kubernetes cluster

Situations this section can help resolve

  1. The Kubernetes environment underlying a running Rook Ceph cluster failed catastrophically, requiring a new Kubernetes environment in which the user wishes to recover the previous Rook Ceph cluster.
  2. The user wishes to migrate their existing Rook Ceph cluster to a new Kubernetes environment, and downtime can be tolerated.

Prerequisites

  1. A working Kubernetes cluster to which we will migrate the previous Rook Ceph cluster.
  2. At least one Ceph mon db from before the disaster is intact, and a sufficient number of Ceph OSDs that were up and in before the disaster are available.
  3. The previous Rook Ceph cluster is not running.

Overview of the steps below

  1. Start a new and clean Rook Ceph cluster, with the old CephCluster, CephBlockPool, CephFilesystem, CephNFS, and CephObjectStore descriptors.
  2. Shut the new cluster down when it has been created successfully.
  3. Replace ceph-mon data with that of the old cluster.
  4. Replace fsid in secrets/rook-ceph-mon with that of the old one.
  5. Fix monmap in ceph-mon db.
  6. Fix ceph mon auth key.
  7. Disable auth.
  8. Start the new cluster, watch it resurrect.
  9. Fix admin auth key, and enable auth.
  10. Restart cluster for the final time.
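Step 3 of the procedure below requires the old cluster's fsid. A hedged sketch of pulling it out of a backed-up rook-ceph.config; the heredoc and its values stand in for the real file:

```shell
# Extract the fsid from an ini-style rook-ceph.config. The file contents
# here are illustrative placeholders; point sed at your real backed-up config.
cat > /tmp/rook-ceph.config <<'EOF'
[global]
fsid                = 41a537f2-f282-428e-989f-a9e07be32e47
mon initial members = a b c
EOF
old_fsid=$(sed -n 's/^fsid[[:space:]]*=[[:space:]]*//p' /tmp/rook-ceph.config)
echo "$old_fsid"   # 41a537f2-f282-428e-989f-a9e07be32e47
```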

Steps

Assuming dataDirHostPath is /var/lib/rook, and the CephCluster being adopted is named rook-ceph.

  1. Make sure the old Kubernetes cluster is completely torn down and the new Kubernetes cluster is up and running without Rook Ceph.
  2. Back up /var/lib/rook on all the Rook Ceph nodes to a different directory. The backups will be used later.
  3. Pick a /var/lib/rook/rook-ceph/rook-ceph.config from any previous Rook Ceph node and save the old cluster's fsid from its contents.
  4. Remove /var/lib/rook from all the Rook Ceph nodes.
  5. Add an identical CephCluster descriptor to the new Kubernetes cluster, in particular with identical spec.storage.config and spec.storage.nodes, except mon.count, which should be set to 1.
  6. Add identical CephFilesystem, CephBlockPool, CephNFS, and CephObjectStore descriptors (if any) to the new Kubernetes cluster.
  7. Install Rook Ceph in the new Kubernetes cluster.
  8. Watch the operator logs with kubectl -n rook-ceph logs -f rook-ceph-operator-xxxxxxx, and wait until the orchestration has settled.
  9. STATE: Now the cluster will have rook-ceph-mon-a, rook-ceph-mgr-a, and all the auxiliary pods up and running, and zero (hopefully) rook-ceph-osd-ID-xxxxxx running. ceph -s output should report 1 mon and 1 mgr running, all of the OSDs down, and all PGs in unknown state. Rook should not start any OSD daemon since all devices belong to the old cluster (which has a different fsid).
  10. Run kubectl -n rook-ceph exec -it rook-ceph-mon-a-xxxxxxxx -- bash to enter the rook-ceph-mon-a pod:

      mon-a# cat /etc/ceph/keyring-store/keyring  # save this keyring content for later use
      mon-a# exit
  11. Stop the Rook operator by running kubectl -n rook-ceph edit deploy/rook-ceph-operator and setting replicas to 0.

  12. Stop cluster daemons by running kubectl -n rook-ceph delete deploy/X where X is every deployment in namespace rook-ceph, except rook-ceph-operator and rook-ceph-tools.
  13. Save the rook-ceph-mon-a address with kubectl -n rook-ceph get cm/rook-ceph-mon-endpoints -o yaml in the new Kubernetes cluster for later use.

  14. SSH to the host where rook-ceph-mon-a in the new Kubernetes cluster resides.

    1. Remove /var/lib/rook/mon-a
    2. Pick a healthy rook-ceph-mon-ID directory (/var/lib/rook/mon-ID) in the previous backup, copy to /var/lib/rook/mon-a. ID is any healthy mon node ID of the old cluster.
    3. Replace /var/lib/rook/mon-a/keyring with the saved keyring, preserving only the [mon.] section, and remove the [client.admin] section.
    4. Run docker run -it --rm -v /var/lib/rook:/var/lib/rook ceph/ceph:v14.2.1-20190430 bash. The Docker image tag should match the Ceph version used in the Rook cluster.

      container# cd /var/lib/rook
      container# ceph-mon --extract-monmap monmap --mon-data ./mon-a/data  # Extract the monmap from the old ceph-mon db and save it as `monmap`.
      container# monmaptool --print monmap  # Print the monmap content, which reflects the old cluster's ceph-mon configuration.
      container# monmaptool --rm a monmap  # Delete `a` from the monmap.
      container# monmaptool --rm b monmap  # Repeat, and delete `b` from the monmap.
      container# monmaptool --rm c monmap  # Repeat this pattern until all the old ceph-mons are removed.
      container# monmaptool --rm d monmap
      container# monmaptool --rm e monmap
      container# monmaptool --add a 10.77.2.216:6789 monmap  # Add mon `a` back, using the rook-ceph-mon-a address saved in step 13.
      container# ceph-mon --inject-monmap monmap --mon-data ./mon-a/data  # Replace the monmap in the ceph-mon db with our modified version.
      container# rm monmap
      container# exit
  15. Tell Rook to run as the old cluster by running kubectl -n rook-ceph edit secret/rook-ceph-mon and changing fsid to the original fsid.

  16. Disable authentication by running kubectl -n rook-ceph edit cm/rook-config-override and adding the content below:

      data:
        config: |
          [global]
          auth cluster required = none
          auth service required = none
          auth client required = none
          auth supported = none
  17. Bring the Rook Ceph operator back online by running kubectl -n rook-ceph edit deploy/rook-ceph-operator and setting replicas to 1.

  18. Watch the operator logs with kubectl -n rook-ceph logs -f rook-ceph-operator-xxxxxxx, and wait until the orchestration has settled.
  19. STATE: Now the new cluster should be up and running with authentication disabled. ceph -s should report 1 mon, 1 mgr, and all of the OSDs up and running, and all PGs in either active or degraded state.
  20. Run kubectl -n rook-ceph exec -it rook-ceph-tools-XXXXXXX -- bash to enter the tools pod:

      tools# vi key
      [paste the keyring content saved before, preserving only the `[client.admin]` section]
      tools# ceph auth import -i key
      tools# rm key
  21. Re-enable authentication by running kubectl -n rook-ceph edit cm/rook-config-override and removing the auth configuration added in step 16.

  22. Stop the Rook operator by running kubectl -n rook-ceph edit deploy/rook-ceph-operator and setting replicas to 0.
  23. Shut down the entire new cluster by running kubectl -n rook-ceph delete deploy/X where X is every deployment in namespace rook-ceph, except rook-ceph-operator and rook-ceph-tools, again. This time OSD daemons are present and should be removed too.
  24. Bring the Rook Ceph operator back online by running kubectl -n rook-ceph edit deploy/rook-ceph-operator and setting replicas to 1.
  25. Watch the operator logs with kubectl -n rook-ceph logs -f rook-ceph-operator-xxxxxxx, and wait until the orchestration has settled.
  26. STATE: Now the new cluster should be up and running with authentication enabled. ceph -s output should not change much compared to the previous steps.
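The keyring trimming in steps 14 and 20 above (keep only the [mon.] section in one case, only the [client.admin] section in the other) can be sketched with a small awk filter. The helper name and the heredoc keys below are placeholders, not real credentials:

```shell
# Hypothetical helper: print only one section of an ini-style keyring file.
keep_section() {  # usage: keep_section '[mon.]' < keyring
  awk -v s="$1" '$0 == s {p=1; print; next} /^\[/ {p=0} p'
}

cat > /tmp/keyring <<'EOF'
[mon.]
	key = AQMonPlaceholderKey==
[client.admin]
	key = AQAdminPlaceholderKey==
EOF
keep_section '[mon.]' < /tmp/keyring
# prints the [mon.] header and its key line only
```

Editing the keyring by hand in vi, as the steps describe, is equally valid; this is just a way to avoid copy/paste mistakes with long base64 keys.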