Orphaned Data Cleanup

Longhorn supports orphaned data cleanup. Currently, Longhorn can identify and clean up orphaned replica directories on disks.

Orphaned Replica Directories

When a user introduces a disk into a Longhorn node, it may contain replica directories that are not tracked by the Longhorn system. These untracked directories may belong to other Longhorn clusters, or their associated replica CRs may have been removed while the node or the disk was down. When the node or the disk comes back, the corresponding replica data directories are no longer tracked by the Longhorn system. Such replica data directories are called orphaned.

Longhorn supports the detection and cleanup of orphaned replica directories. It identifies these directories and creates orphan resources that describe them. By default, Longhorn does not automatically delete orphan resources and their directories; users can trigger the deletion of orphaned replica directories manually or have it done automatically.

Example

This example explains how to manage orphaned replica directories identified by Longhorn via kubectl and the Longhorn UI.

Manage Orphaned Replica Directories via kubectl

  1. Introduce disks containing orphaned replica directories.

     • Orphaned replica directories on Node worker1 disks

       # ls /mnt/disk/replicas/
       pvc-19c45b11-28ee-4802-bea4-c0cabfb3b94c-15a210ed

     • Orphaned replica directories on Node worker2 disks

       # ls /var/lib/longhorn/replicas/
       pvc-28255b31-161f-5621-eea3-a1cbafb4a12a-866aa0a5

       # ls /mnt/disk/replicas/
       pvc-19c45b11-28ee-4802-bea4-c0cabfb3b94c-a86771c0
  2. Longhorn detects the orphaned replica directories and creates orphan resources describing the directories.

       # kubectl -n longhorn-system get orphans
       NAME                                                                      TYPE      NODE
       orphan-fed8c6c20965c7bdc3e3bbea5813fac52ccd6edcbf31e578f2d8bab93481c272   replica   rancher60-worker1
       orphan-637f6c01660277b5333f9f942e4b10071d89379dbe7b4164d071f4e1861a1247   replica   rancher60-worker2
       orphan-6360f22930d697c74bec4ce4056c05ac516017b908389bff53aca0657ebb3b4a   replica   rancher60-worker2
  3. One can list the orphan resources created by the Longhorn system with kubectl -n longhorn-system get orphan.

       # kubectl -n longhorn-system get orphan
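
     Since each orphan resource carries a longhornnode label pointing at the node it belongs to (visible in the YAML in the next step), the listing can also be narrowed to a single node with a label selector. This is a sketch; substitute the actual node name.

       # List only the orphan resources detected on one node.
       kubectl -n longhorn-system get orphan -l "longhornnode=<node name>"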
  4. Get the detailed information of one of the orphaned replica directories in spec.parameters with kubectl -n longhorn-system get orphan <name>.

       # kubectl -n longhorn-system get orphans orphan-fed8c6c20965c7bdc3e3bbea5813fac52ccd6edcbf31e578f2d8bab93481c272 -o yaml
       apiVersion: longhorn.io/v1beta2
       kind: Orphan
       metadata:
         creationTimestamp: "2022-04-29T10:17:40Z"
         finalizers:
         - longhorn.io
         generation: 1
         labels:
           longhorn.io/component: orphan
           longhorn.io/managed-by: longhorn-manager
           longhorn.io/orphan-type: replica
           longhornnode: rancher60-worker1
         ......
       spec:
         nodeID: rancher60-worker1
         orphanType: replica
         parameters:
           DataName: pvc-19c45b11-28ee-4802-bea4-c0cabfb3b94c-15a210ed
           DiskName: disk-1
           DiskPath: /mnt/disk/
           DiskUUID: 90f00e61-d54e-44b9-a095-35c2b56a0462
       status:
         conditions:
         - lastProbeTime: ""
           lastTransitionTime: "2022-04-29T10:17:40Z"
           message: ""
           reason: ""
           status: "True"
           type: DataCleanable
         - lastProbeTime: ""
           lastTransitionTime: "2022-04-29T10:17:40Z"
           message: ""
           reason: ""
           status: "False"
           type: Error
         ownerID: rancher60-worker1
  5. One can delete the orphan resource with kubectl -n longhorn-system delete orphan <name>, and the corresponding orphaned replica directory will then be deleted.

       # kubectl -n longhorn-system delete orphan orphan-fed8c6c20965c7bdc3e3bbea5813fac52ccd6edcbf31e578f2d8bab93481c272
       # kubectl -n longhorn-system get orphans
       NAME                                                                      TYPE      NODE
       orphan-637f6c01660277b5333f9f942e4b10071d89379dbe7b4164d071f4e1861a1247   replica   rancher60-worker2
       orphan-6360f22930d697c74bec4ce4056c05ac516017b908389bff53aca0657ebb3b4a   replica   rancher60-worker2

     The orphaned replica directory is deleted.

       # ls /mnt/disk/replicas/
  6. By default, Longhorn will not automatically delete the orphaned replica directory. One can enable automatic deletion by setting orphan-auto-deletion to true.

       # kubectl -n longhorn-system edit settings.longhorn.io orphan-auto-deletion

     Then, set the value to true.

       # kubectl -n longhorn-system get settings.longhorn.io orphan-auto-deletion
       NAME                   VALUE   AGE
       orphan-auto-deletion   true    26m
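
     Instead of opening an editor, the same change can be made non-interactively, which is convenient in scripts. This is a minimal sketch that assumes the Setting custom resource stores the setting in a top-level value field (as reflected by the VALUE column above):

       # Enable automatic orphan cleanup without an interactive edit.
       kubectl -n longhorn-system patch settings.longhorn.io orphan-auto-deletion --type=merge -p '{"value":"true"}'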
  7. After enabling automatic deletion and waiting for a while, the orphan resources and directories are deleted automatically.

       # kubectl -n longhorn-system get orphans.longhorn.io
       No resources found in longhorn-system namespace.

     The orphaned replica directories are deleted.

       # ls /mnt/disk/replicas/
       # ls /var/lib/longhorn/replicas/

     Additionally, one can delete all orphaned replica directories on a specified node by

       # kubectl -n longhorn-system delete orphan -l "longhornnode=<node name>"

Manage Orphaned Replica Directories via Longhorn UI

In the top navigation bar of the Longhorn UI, click Setting > Orphaned Data. Orphaned replica directories on each node and in each disk are listed. One can delete a directory via Operation > Delete.

By default, Longhorn does not automatically delete orphaned replica directories. One can enable automatic deletion in Setting > General > Orphan.

Exception

Longhorn will not create an orphan resource for an orphaned directory when

  • The orphaned directory is not an orphaned replica directory.
    • The directory name does not follow the replica directory’s naming convention.
    • The volume’s volume.meta file is missing (see the check sketched after this list).
  • The orphaned replica directory is on an evicted node.
  • The orphaned replica directory is in an evicted disk.
  • The orphaned data cleanup mechanism does not clean up a stale replica, also known as an error replica. Instead, the stale replica is cleaned up according to the staleReplicaTimeout setting.
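
For example, one quick way to see why a leftover directory is not reported as an orphan is to check whether it still contains a volume.meta file. The directory name below is hypothetical; substitute the disk path and directory name from your node.

  # A replica directory missing volume.meta will not be identified as an orphan.
  ls /mnt/disk/replicas/pvc-example-directory/volume.meta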
