rbdmap – map RBD devices at boot time

Synopsis

rbdmap map

rbdmap unmap

rbdmap unmap-all

Description

rbdmap is a shell script that automates rbd map and rbd unmap operations on one or more RBD (RADOS Block Device) images. While the script can be run manually by the system administrator at any time, the principal use case is automatic mapping/mounting of RBD images at boot time (and unmounting/unmapping at shutdown), as triggered by the init system (a systemd unit file, rbdmap.service, is included with the ceph-common package for this purpose).

The script takes a single argument, which can be either “map” or “unmap”. In either case, the script parses a configuration file (defaults to /etc/ceph/rbdmap, but can be overridden via an environment variable RBDMAPFILE). Each line of the configuration file corresponds to an RBD image which is to be mapped or unmapped.
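The override behavior can be illustrated with the standard shell default-expansion idiom (a sketch of the idea, not the script’s actual internals):

```shell
# If RBDMAPFILE is set in the environment, use it; otherwise fall
# back to the default path. This is the usual shell idiom for such
# an override (illustrative; the real script's internals may differ).
RBDMAPFILE="${RBDMAPFILE:-/etc/ceph/rbdmap}"
echo "$RBDMAPFILE"
```

A one-off run against an alternate file (the path here is illustrative) would then look like: RBDMAPFILE=/etc/ceph/rbdmap.custom rbdmap map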

The configuration file format is:

  IMAGESPEC RBDOPTS

where IMAGESPEC should be specified as POOLNAME/IMAGENAME (the pool name, a forward slash, and the image name), or merely IMAGENAME, in which case the POOLNAME defaults to “rbd”. RBDOPTS is an optional list of parameters to be passed to the underlying rbd map command. These parameters and their values should be specified as a comma-separated string:

  PARAM1=VAL1,PARAM2=VAL2,...,PARAMN=VALN

This will cause the script to issue an rbd map command like the following:

  rbd map POOLNAME/IMAGENAME --PARAM1 VAL1 --PARAM2 VAL2

(See the rbd manpage for a full list of possible options.) For parameters and values which contain commas or equality signs, a simple apostrophe can be used to quote the value, preventing those characters from being interpreted as separators.
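The transformation described above can be sketched in shell. The helper names below are illustrative, not part of the actual rbdmap script, and the sketch ignores apostrophe-quoted values:

```shell
# Sketch of how a configuration line becomes an "rbd map" invocation.
# Pool defaults to "rbd" when IMAGESPEC contains no slash; each
# PARAM=VAL pair in RBDOPTS becomes a "--PARAM VAL" argument.

imagespec_to_image() {
    # "foopool/bar1" -> "foopool/bar1"; "bar1" -> "rbd/bar1"
    case "$1" in
        */*) printf '%s\n' "$1" ;;
        *)   printf 'rbd/%s\n' "$1" ;;
    esac
}

rbdopts_to_args() {
    # "id=admin,keyring=/k" -> "--id admin --keyring /k"
    local opts="$1" pair args=""
    local IFS=','
    for pair in $opts; do
        args="$args --${pair%%=*} ${pair#*=}"
    done
    printf '%s\n' "${args# }"
}

echo "rbd map $(imagespec_to_image bar1) $(rbdopts_to_args id=admin,keyring=/etc/ceph/ceph.client.admin.keyring)"
# -> rbd map rbd/bar1 --id admin --keyring /etc/ceph/ceph.client.admin.keyring
```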

When run as rbdmap map, the script parses the configuration file, and for each RBD image specified attempts to first map the image (using the rbd map command) and, second, to mount the image.

When run as rbdmap unmap, images listed in the configuration file will be unmounted and unmapped.

rbdmap unmap-all attempts to unmount and subsequently unmap all currently mapped RBD images, regardless of whether or not they are listed in the configuration file.
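The device discovery that unmap-all implies can be sketched by globbing the kernel’s /dev/rbdX device names. The function name and directory parameter below are illustrative; the real script’s discovery mechanism may differ (e.g. it could consult rbd showmapped instead):

```shell
# List device nodes matching the kernel RBD naming scheme in a
# given directory (defaults to /dev). Illustrative sketch only.
list_mapped_rbd() {
    local devdir="${1:-/dev}" dev
    for dev in "$devdir"/rbd[0-9]*; do
        [ -e "$dev" ] && printf '%s\n' "$dev"
    done
}

list_mapped_rbd   # prints nothing on a host with no mapped images
```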

If successful, the rbd map operation maps the image to a /dev/rbdX device, at which point a udev rule is triggered to create a friendly device name symlink, /dev/rbd/POOLNAME/IMAGENAME, pointing to the real mapped device.

In order for mounting/unmounting to succeed, the friendly device name must have a corresponding entry in /etc/fstab.

When writing /etc/fstab entries for RBD images, it’s a good idea to specify the “noauto” (or “nofail”) mount option. This prevents the init system from trying to mount the device too early, before the device in question even exists. (Since rbdmap.service executes a shell script, it is typically triggered quite late in the boot sequence.)

Examples

Example /etc/ceph/rbdmap for three RBD images called “bar1”, “bar2” and “bar3”, which are in pool “foopool”:

  foopool/bar1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
  foopool/bar2 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
  foopool/bar3 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring,options='lock_on_read,queue_depth=1024'

Each line in the file contains two strings: the image spec and the options to be passed to rbd map. These lines get transformed into the following commands:

  rbd map foopool/bar1 --id admin --keyring /etc/ceph/ceph.client.admin.keyring
  rbd map foopool/bar2 --id admin --keyring /etc/ceph/ceph.client.admin.keyring
  rbd map foopool/bar3 --id admin --keyring /etc/ceph/ceph.client.admin.keyring --options lock_on_read,queue_depth=1024

If the images had XFS file systems on them, the corresponding /etc/fstab entries might look like this:

  /dev/rbd/foopool/bar1 /mnt/bar1 xfs noauto 0 0
  /dev/rbd/foopool/bar2 /mnt/bar2 xfs noauto 0 0
  /dev/rbd/foopool/bar3 /mnt/bar3 xfs noauto 0 0

After creating the images and populating the /etc/ceph/rbdmap file, making the images get automatically mapped and mounted at boot is just a matter of enabling that unit:

  systemctl enable rbdmap.service

Options

None

Availability

rbdmap is part of Ceph, a massively scalable, open-source, distributed storage system. Please refer to the Ceph documentation at http://ceph.com/docs for more information.

See also

rbd(8)