Troubleshooting OpenEBS - NDM

General guidelines for troubleshooting

Blockdevices are not detected by NDM in some of the nodes

Unable to discover some of the disks in Proxmox servers by OpenEBS

Unable to claim blockdevices by NDM operator

Blockdevices are not detected by NDM in some of the nodes

In a 3-node cluster in a VM environment with CentOS as the underlying OS, one disk is attached to each node, but kubectl get blockdevice -n openebs returns only one blockdevice. Also, if the node from which the disk was detected is restarted, the description of the disk attached to that node gets modified. lsblk output from one of the nodes:

  NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
  sda               8:0    0   32G  0 disk
  |-sda1            8:1    0    1G  0 part /boot
  `-sda2            8:2    0   31G  0 part
    |-centos-root 253:0    0 27.8G  0 lvm  /
    `-centos-swap 253:1    0  3.2G  0 lvm  [SWAP]
  sdb               8:16   0   50G  0 disk
  sr0              11:0    1 1024M  0 rom

Troubleshooting:

Check the kubectl get blockdevice -o yaml output of one of the blockdevices and note its serial number, then verify that the serial numbers of the other two blockdevices are different. NDM detects and recognizes blockdevices based on their WWN, Model, Serial, and Vendor. If all of these parameters are identical across blockdevices, NDM cannot differentiate them and will create only one BlockDevice CR per unique combination. To fix this, the user has to make sure each blockdevice has at least one unique parameter among WWN, Model, Serial, and Vendor. This issue is usually seen in virtualization environments like vSphere, KVM, etc.
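
To compare these identifiers across all blockdevices at once, a minimal sketch (it assumes the CRs live in the openebs namespace and that the identifiers sit under spec.details, as they do in current NDM releases):

  # List every BlockDevice with the identifiers NDM uses to tell devices apart
  kubectl get blockdevice -n openebs \
    -o custom-columns=NAME:.metadata.name,SERIAL:.spec.details.serial,MODEL:.spec.details.model,VENDOR:.spec.details.vendor

Rows that are identical in all four columns are the devices NDM is collapsing into a single BlockDevice CR.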

Resolution:

Download the custom blockdevice CR YAML file from here and apply it, filling in the details of each blockdevice. In the sample spec, ndm.io/managed: is set to false, so NDM will not manage this blockdevice.

Note: If you are creating a blockdevice CR manually for a custom device path, then you must add the corresponding device path under the exclude filter so that NDM will not select that device for BD creation. For example, if a blockdevice CR is created manually for /dev/sdb, then you must add /dev/sdb under the exclude filter of the NDM configuration. See here for customizing the exclude filter in the NDM configuration.
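
For orientation, a rough sketch of what such a manually created BlockDevice CR can look like (the name, hostname, capacity, and path below are placeholders, and the exact field set may vary between NDM releases, so prefer the downloadable sample above):

  # blockdevice-sdb.yaml -- placeholder values, adjust before applying
  apiVersion: openebs.io/v1alpha1
  kind: BlockDevice
  metadata:
    name: blockdevice-example-sdb          # placeholder name
    namespace: openebs
    labels:
      kubernetes.io/hostname: node1        # placeholder: hostname of the node
      ndm.io/managed: "false"              # NDM will not manage this blockdevice
      ndm.io/blockdevice-type: blockdevice
  spec:
    capacity:
      logicalSectorSize: 512
      storage: 53687091200                 # placeholder: capacity in bytes (50G)
    details:
      deviceType: disk
    nodeAttributes:
      nodeName: node1                      # placeholder: node name
    path: /dev/sdb                         # remember to add this path to the exclude filter
  status:
    claimState: Unclaimed
    state: Active

Apply it with kubectl apply -f blockdevice-sdb.yaml.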

Unable to discover some of the disks in Proxmox servers by OpenEBS

A user has a 3-node cluster with 8 disks attached to each node, but kubectl get bd -n openebs does not detect all of the blockdevices; only some of the blockdevices from each node are detected. Details of a detected blockdevice can be obtained by running kubectl describe bd <bd_cr_name> -n openebs.

Troubleshooting:

As in the previous section, check the kubectl get blockdevice -o yaml output of one of the blockdevices and note its serial number, then verify that the serial numbers of the other blockdevices are different. NDM detects and recognizes blockdevices based on their WWN, Model, Serial, and Vendor; if all of these parameters are identical, NDM cannot differentiate the blockdevices and will create only one BlockDevice CR per unique combination. Make sure each blockdevice has at least one unique parameter among WWN, Model, Serial, and Vendor. This issue is usually seen in virtualization environments like vSphere, KVM, etc. More details about the NDM daemon set's functionality can be read from here.

Resolution:

This can be resolved by modifying the configuration file of the VM:

  • Open the conf file with the following command:

      vi /etc/pve/qemu-server/101.conf

  • Add a serial number to the disk entry in the following way:

      scsi1: images:vm-101-disk-1,cache=writeback,discard=on,size=120G,ssd=1,serial=5fb20ba17c2f

  • Restart the VM:

      qm shutdown 101 && qm start 101

  • Verify the disk path for all the disks in the VM:

      ls -lah /dev/disk/by-id

  • Repeat the same procedure on the other nodes and ensure the uniqueness of the disks on all the nodes (see the verification sketch below).
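
To confirm that the serial numbers are now unique on a node, a minimal verification sketch (it assumes bash and that the data disks are /dev/sdb through /dev/sdi; adjust the device list to your layout):

  # Print the udev-reported serial for each data disk on this node
  for dev in /dev/sd{b..i}; do
    printf '%s: ' "$dev"
    udevadm info --query=property --name="$dev" | grep '^ID_SERIAL='
  done

  # Once the serials differ, all 8 disks per node should show up here
  kubectl get bd -n openebs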

Unable to claim blockdevices by NDM operator

BlockDeviceClaims may remain in the Pending state even if blockdevices are available in the Unclaimed and Active state. The main reason for this is that no blockdevices match the criteria specified in the BlockDeviceClaim. Sometimes, even if the criteria match, the blockdevice may remain in an Unclaimed state.
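
For reference, a minimal BlockDeviceClaim sketch (the name, hostname, and requested capacity below are placeholders); NDM binds the claim only to a blockdevice that satisfies every criterion given in the spec:

  apiVersion: openebs.io/v1alpha1
  kind: BlockDeviceClaim
  metadata:
    name: example-claim              # placeholder
    namespace: openebs
  spec:
    hostName: node1                  # placeholder: only devices on this node match
    deviceType: disk                 # match whole disks rather than partitions
    resources:
      requests:
        storage: 10Gi                # matching device must have at least this capacity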

Troubleshooting: Check whether the blockdevice has either of the following annotations:

1.)

  metadata:
    annotations:
      internal.openebs.io/partition-uuid: <uuid>
      internal.openebs.io/uuid-scheme: legacy

or

2.)

  metadata:
    annotations:
      internal.openebs.io/fsuuid: <uuid>
      internal.openebs.io/uuid-scheme: legacy

If 1.) is present, it means the blockdevice was previously used by cstor and was not properly cleaned up. The cstor pool may be from a previous release, or the disk may already contain some zfs labels. If 2.) is present, it means the blockdevice was previously used by localPV and cleanup was not done on the device.
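
A quick way to inspect the annotations on a suspect blockdevice (the CR name is a placeholder, as in the earlier sections):

  kubectl get bd <bd_cr_name> -n openebs -o jsonpath='{.metadata.annotations}'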

Resolution:

  1. ssh to the node on which the blockdevice is present.
  2. If the disk has partitions, run wipefs on each of the partitions:

       wipefs -fa /dev/sdb1
       wipefs -fa /dev/sdb9

  3. Run wipefs on the disk itself:

       wipefs -fa /dev/sdb

  4. Restart the NDM pod running on that node (a sketch for this is shown after the list).
  5. New blockdevices should get created for those disks, and they can be claimed and used. The older blockdevices will go into an Unknown/Inactive state.
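
One way to restart the NDM pod on a specific node is to delete it and let the daemon set recreate it. A minimal sketch, assuming the NDM pods carry the openebs.io/component-name=ndm label (verify the label used by your install) and that <node-name> is a placeholder:

  kubectl delete pod -n openebs \
    -l openebs.io/component-name=ndm \
    --field-selector spec.nodeName=<node-name>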

See Also:

FAQs

Seek support or help

Latest release notes